Richland, WA, United States


Kennedy Z.C.,Pacific Northwest National Laboratory | Barrett C.A.,Pacific Northwest National Laboratory | Warner M.G.,Pacific Northwest National Laboratory
Langmuir | Year: 2017

Azides on the periphery of nanodiamond materials (ND) are of great utility because they have been shown to undergo Cu-catalyzed and Cu-free cycloaddition reactions with structurally diverse alkynes, affording particles tailored for applications in biology and materials science. However, current methods employed to access ND featuring azide groups typically require either harsh pretreatment procedures or multiple synthesis steps and use surface linking groups that may be susceptible to undesirable cleavage. Here we demonstrate an alternative single-step approach to producing linker-free, azide-functionalized ND. Our method was applied to low-cost, detonation-derived ND powders where surface carboxyl groups undergo silver-mediated decarboxylation and radical substitution with azide. ND with directly grafted azide groups were then treated with a variety of aliphatic, aromatic, and fluorescent alkynes to afford 1-(ND)-4-substituted-1,2,3-triazole materials under standard copper-catalyzed cycloaddition conditions. Surface modification steps were verified by characteristic infrared absorptions and elemental analyses. High loadings of triazole surface groups (up to 0.85 mmol g⁻¹) were obtained as determined from thermogravimetric analysis. The azidation procedure disclosed is envisioned to become a valuable initial transformation in numerous future applications of ND. © 2017 American Chemical Society.


Peng B.,Pacific Northwest National Laboratory | Kowalski K.,Pacific Northwest National Laboratory
Chemical Physics Letters | Year: 2017

In this letter, we apply the reverse Cuthill-McKee (RCM) algorithm to transform two-electron integral tensors to their block diagonal forms. By further applying Cholesky decomposition (CD) on each of the diagonal blocks, we are able to represent the high-dimensional two-electron integral tensors in terms of permutation matrices and low-rank Cholesky vectors. This representation facilitates low-rank factorizations of high-dimensional tensor contractions in post-Hartree-Fock calculations. Here, we discuss the second-order Møller-Plesset (MP2) method and the linear coupled-cluster model with doubles (L-CCD) as examples to demonstrate the efficiency of this technique in representing the two-electron integrals in a compact form. © 2017 Elsevier B.V.
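The reordering-and-factorization idea can be sketched compactly. Below is a minimal Python illustration (not the authors' code) using SciPy's reverse Cuthill-McKee routine: the two-electron tensor is treated as a symmetric matrix V[(pq),(rs)], its diagonal blocks are recovered from the sparsity graph, and each block is factored by Cholesky decomposition. Detecting the blocks via connected components, and the dense per-block factorization, are our simplifying assumptions.

```python
# Minimal sketch of the RCM + blockwise-Cholesky idea (illustrative only,
# not the authors' implementation). V is the two-electron integral tensor
# reshaped to a symmetric matrix V[(p,q),(r,s)].
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee, connected_components

def rcm_block_cholesky(V, tol=1e-12):
    A = sp.csr_matrix(np.where(np.abs(V) > tol, V, 0.0))
    perm = reverse_cuthill_mckee(A, symmetric_mode=True)  # band/block ordering
    Ap = A[perm][:, perm]
    # Assumption: diagonal blocks = connected components of the sparsity graph.
    n_blocks, labels = connected_components(Ap, directed=False)
    factors = []
    for b in range(n_blocks):
        idx = np.where(labels == b)[0]
        block = Ap[idx][:, idx].toarray()
        # Each block is factored separately; in practice a pivoted, low-rank
        # Cholesky decomposition truncated at a target accuracy would be used.
        factors.append((idx, np.linalg.cholesky(block)))
    return perm, factors
```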


Bingol K.,Pacific Northwest National Laboratory | Bruschweiler R.,Ohio State University
Current Opinion in Biotechnology | Year: 2017

Metabolomics continues to make rapid progress through the development of new and better methods and their applications to gain insight into the metabolism of a wide range of different biological systems from a systems biology perspective. Customization of NMR databases and search tools allows the faster and more accurate identification of known metabolites, whereas the identification of unknowns, without a need for extensive purification, requires new strategies to integrate NMR with mass spectrometry, cheminformatics, and computational methods. For some applications, the use of covalent and non-covalent attachments in the form of labeled tags or nanoparticles can significantly reduce the complexity of these tasks. © 2016 Elsevier Ltd


Zhou M.,Pacific Northwest National Laboratory | Pasa-Tolic L.,Pacific Northwest National Laboratory | Stenoien D.L.,Pacific Northwest National Laboratory
Journal of Proteome Research | Year: 2017

As histones play central roles in most chromosomal functions including regulation of DNA replication, DNA damage repair, and gene transcription, both their basic biology and their roles in disease development have been the subject of intense study. Because multiple post-translational modifications (PTMs) along the entire protein sequence are potential regulators of histones, a top-down approach, where intact proteins are analyzed, is ultimately required for complete characterization of proteoforms. However, significant challenges remain for top-down histone analysis primarily because of deficiencies in separation/resolving power and effective identification algorithms. Here we used state-of-the-art mass spectrometry and a bioinformatics workflow for targeted data analysis and visualization. The workflow uses ProMex for intact mass deconvolution, MSPathFinder as a search engine, and LcMsSpectator as a data visualization tool. When complemented with the open-modification tool TopPIC, this workflow enabled identification of novel histone PTMs including tyrosine bromination on histone H4 and H2A, H3 glutathionylation, and mapping of conventional PTMs along the entire protein for many histone subunits. © 2016 American Chemical Society.
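As a rough illustration of the kind of matching such a workflow performs downstream of intact-mass deconvolution, the toy Python sketch below compares observed proteoform masses against candidate masses plus PTM mass shifts within a ppm tolerance. This is not ProMex or MSPathFinder code; the masses and names are hypothetical placeholders, and only the bromination shift (Br replacing H, +77.9105 Da monoisotopic) is a known physical constant.

```python
# Toy top-down matching sketch (not the ProMex/MSPathFinder implementation):
# match deconvoluted intact masses to candidate proteoform masses + PTM shifts.
def match_proteoforms(observed, candidates, ptm_shifts, tol_ppm=10.0):
    """observed: list of intact masses (Da); candidates: {name: mass};
    ptm_shifts: {ptm_name: mass shift}. Returns (mass, name, ptm) hits."""
    hits = []
    for m_obs in observed:
        for name, base in candidates.items():
            for ptm, shift in ptm_shifts.items():
                theory = base + shift
                if abs(m_obs - theory) / theory * 1e6 <= tol_ppm:
                    hits.append((m_obs, name, ptm))
    return hits

# Hypothetical usage: an H4-like intact mass with a bromination mass shift.
candidates = {"H4": 11229.33}              # placeholder intact mass (Da)
ptm_shifts = {"none": 0.0, "Br": 77.9105}  # Br replaces H: 78.9183 - 1.0078
print(match_proteoforms([11307.24], candidates, ptm_shifts))
```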


Luo L.,Pacific Northwest National Laboratory
Nature Nanotechnology | Year: 2017

The performance of a Li–O2 battery depends on a complex interplay between the reaction mechanism at the cathode, the chemical structure and the morphology of the reaction products, and their spatial and temporal evolution; all parameters that, in turn, depend on the choice of the electrolyte. In an aprotic cell, for example, the discharge product, Li2O2, forms through a combination of solution and surface chemistries that results in the formation of a baffling toroidal morphology. In a solid electrolyte, neither the reaction mechanism at the cathode nor the nature of the reaction product is known. Here we report the full-cycle reaction pathway for Li–O2 batteries and show how this correlates with the morphology of the reaction products. Using aberration-corrected environmental transmission electron microscopy (TEM) under an oxygen environment, we image the product morphology evolution on a carbon nanotube (CNT) cathode of a working solid-state Li–O2 nanobattery and correlate these features with the electrochemical reaction at the electrode. We find that the oxygen-reduction reaction (ORR) on CNTs initially produces LiO2, which subsequently disproportionates into Li2O2 and O2. The release of O2 creates a hollow nanostructure with Li2O outer-shell and Li2O2 inner-shell surfaces. Our findings show that, in general, the way the released O2 is accommodated is linked to lithium-ion diffusion and electron-transport paths across both spatial and temporal scales; in turn, this interplay governs the morphology of the discharging/charging products in Li–O2 cells. © 2017 Nature Publishing Group
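For reference, the disproportionation step described above balances as follows (standard stoichiometry; the LaTeX notation is ours):

```latex
% LiO2 disproportionation on the CNT cathode: two superoxide units give one
% peroxide unit and release one O2, which hollows out the nanostructure.
2\,\mathrm{LiO_2} \longrightarrow \mathrm{Li_2O_2} + \mathrm{O_2}
```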


Tipireddy R.,Pacific Northwest National Laboratory | Kumar S.,Masdar Institute of Science and Technology
International Journal of Adhesion and Adhesives | Year: 2017

The classic problem of stress transfer in a cylinder through shear to a surrounding medium is analyzed here in the context of pullout of an anchor under material uncertainty. Assuming a log-normal distribution for the random shear stiffness field of the adhesive, the stochastic differential equation (SDE) is formulated for spatially-tailored/degraded adhesive anchors. The stochastic shear stress distribution in the adhesive is presented for various embedment lengths and adhesive thicknesses clearly demarcating the regime over which failure would initiate. The stochastic variation of maximum shear stress of the adhesive as a function of embedment length and adhesive thickness is also presented. It is observed that the mean maximum shear stresses in the degraded adhesive for both fixed and free embedded-end cases converge and the influence of boundary condition at the embedded-end on shear stress field disappears as the embedment length is increased. For the parameters considered here, about 45% longer embedment length is required compared to an intact bondline for shear-dominated load transfer, suggesting that the design of adhesive anchors should adequately account for likely in-service damage that causes material uncertainty to avoid premature failure. © 2017 Elsevier Ltd.
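As a rough guide to the model class involved, a one-dimensional shear-lag idealization of the pullout problem takes the form below; the notation is ours and the authors' formulation may differ in detail. Here u(x) is the axial displacement of the anchor, EA its axial rigidity, p the bonded perimeter, t_a the adhesive thickness, and G(x) the log-normally distributed random shear stiffness field.

```latex
% Shear-lag sketch for an adhesive anchor with a random shear-stiffness field:
% the adhesive transfers the axial load through the shear stress tau(x).
EA\,\frac{d^{2}u}{dx^{2}} - \frac{G(x)\,p}{t_a}\,u(x) = 0,
\qquad
\tau(x) = \frac{G(x)}{t_a}\,u(x),
\qquad
\ln G(x) \sim \mathcal{N}\left(\mu,\,\sigma^{2}\right)
```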


Wong M.,U.S. National Center for Atmospheric Research | Ovchinnikov M.,Pacific Northwest National Laboratory
Journal of the Atmospheric Sciences | Year: 2017

A parameterization scheme is proposed for the subgrid-scale transport of hydrometeors in an assumed probability density function (PDF) scheme. Joint distributions of vertical velocity and hydrometeor mixing ratios are typically unknown, but marginal (1D) PDFs of these variables are available. The parameterization is developed using high-resolution simulations of continental and tropical deep convection. A 3D cloud-resolving model (CRM) providing benchmark solutions has a horizontal grid spacing of 250 m and employs the Morrison microphysics scheme, which prognostically treats mass and number mixing ratios for four types of precipitating hydrometeors (rain, graupel, snow, and ice) as well as cloud droplet number mixing ratio. The subgrid-scale hydrometeor transport scheme assumes input given in the form of marginal PDFs of vertical velocity and hydrometeor mixing ratios; in this study, these marginal distributions are provided by the cloud-resolving model. Conditional sampling and scaling are then applied to the marginal distributions to account for subplume correlations. The parameterized fluxes tested for four episodes of deep convection show good agreement with benchmark fluxes computed directly from the CRM output. The results demonstrate the potential use of the subgrid-scale hydrometeor transport scheme in an assumed PDF scheme to parameterize the covariances of vertical velocity and hydrometeor mixing ratios. © 2017 American Meteorological Society.
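The benchmark quantity the parameterization must reproduce is the grid-box covariance of vertical velocity and hydrometeor mixing ratio at each level. A minimal Python sketch of that diagnostic, using random stand-in data rather than the study's CRM output, is:

```python
# Schematic benchmark calculation (our illustration, not the paper's code):
# given CRM fields of vertical velocity w and a hydrometeor mixing ratio q
# on a (z, y, x) grid, compute the horizontal-mean subgrid flux <w'q'> that
# a parameterization would need to reproduce at each model level.
import numpy as np

def subgrid_flux(w, q):
    """w, q: arrays of shape (nz, ny, nx). Returns <w'q'> per level, shape (nz,)."""
    w_mean = w.mean(axis=(1, 2), keepdims=True)   # grid-box horizontal mean
    q_mean = q.mean(axis=(1, 2), keepdims=True)
    return ((w - w_mean) * (q - q_mean)).mean(axis=(1, 2))

# Hypothetical usage with random stand-in fields:
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32, 32))
q = np.abs(rng.normal(size=(64, 32, 32)))
print(subgrid_flux(w, q).shape)   # (64,) one covariance per level
```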


Douberly G.E.,University of Georgia | Miller R.E.,University of North Carolina at Chapel Hill | Xantheas S.S.,Pacific Northwest National Laboratory
Journal of the American Chemical Society | Year: 2017

Water clusters are formed in helium droplets via the sequential capture of monomers. One or two neon atoms are added to each droplet prior to the addition of water. The infrared spectrum of the droplet ensemble reveals several signatures of polar, water tetramer clusters having dipole moments between 2 and 3 D. Comparison with ab initio computations supports the assignment of the cluster networks to noncyclic “3 + 1” clusters, which are ∼5.3 kcal/mol less stable than the global minimum nonpolar cyclic tetramer. The (H2O)3Ne + H2O ring insertion barrier is sufficiently large, such that evaporative helium cooling is capable of kinetically quenching the nonequilibrium tetramer system prior to its rearrangement to the lower energy cyclic species. To this end, the reported process results in the formation of exotic water cluster networks that are either higher in energy than the most stable gas-phase analogs or not even stable in the gas phase. © 2017 American Chemical Society.


Riechers S.L.,Pacific Northwest National Laboratory | Rosso K.M.,Pacific Northwest National Laboratory | Kerisit S.N.,Pacific Northwest National Laboratory
Journal of Physical Chemistry C | Year: 2017

Mineral nucleation can be catalyzed by the presence of mineral substrates; however, the mechanisms of heterogeneous nucleation remain poorly understood. A combination of in situ time-sequenced measurements and nanomanipulation experiments was performed using atomic force microscopy (AFM) to probe the mechanisms of heteroepitaxial nucleation of otavite (CdCO3) on calcite (CaCO3) single crystals that exposed the (101̄4) surface. Otavite and calcite are isostructural carbonates that display a 4% lattice mismatch, based on their (101̄4) surface areas. AFM observations revealed a two-stage process in the nucleation of cadmium carbonate surface precipitates. As evidenced by changes in the height, shape, growth behavior, and friction signal of the precipitates, a precursor phase was observed to initially form on the surface and subsequently undergo an epitaxy-mediated phase transformation to otavite, which then grew epitaxially. Nanomanipulation experiments, in which the applied force was increased progressively until precipitates were removed from the surface, showed that adhesion of the precursor phase to the substrate was distinctively weaker than that of the epitaxial phase, consistent with that of an amorphous phase. These findings demonstrate that heterogeneous mineral nucleation can follow a nonclassical pathway like that found in homogeneous aqueous conditions. © 2017 American Chemical Society.


Palmer B.,Pacific Northwest National Laboratory
Proceedings of PAW 2016: 1st PGAS Applications Workshop - Held in conjunction with SC 2016: The International Conference for High Performance Computing, Networking, Storage and Analysis | Year: 2016

This paper will describe the application of the PGAS Global Arrays (GA) library to power grid simulations. The GridPACK™ framework has been designed to enable power grid engineers to develop parallel simulations of the power grid by providing a set of templates and libraries that encapsulate most of the details of parallel programming in higher level abstractions. The communication portions of the framework are implemented using a combination of message-passing (MPI) and one-sided communication (GA). This paper will provide a brief overview of GA and describe in detail the implementation of collective hash tables, which are used in many power grid applications to match data with a previously distributed network. © 2016 IEEE.
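The collective hash-table pattern can be illustrated with a toy analogue. The sketch below uses mpi4py rather than Global Arrays, and the hash-to-owner partitioning with an object-based alltoall is our simplification of what GridPACK does with GA one-sided operations; it only conveys the communication pattern.

```python
# Toy analogue (in mpi4py, not Global Arrays) of a collective hash table:
# every rank hashes its (key, value) pairs to an owner rank, the pairs are
# exchanged collectively, and each owner then holds the shard for its keys.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def collective_insert(pairs):
    """pairs: list of (key, value) tuples contributed by this rank.
    Note: int keys hash deterministically across processes; for string keys
    use a deterministic hash (e.g. zlib.crc32) to pick owners consistently."""
    outgoing = [[] for _ in range(size)]
    for key, value in pairs:
        outgoing[hash(key) % size].append((key, value))  # owner chosen by hash
    incoming = comm.alltoall(outgoing)                    # collective exchange
    table = {}
    for bucket in incoming:
        table.update(bucket)
    return table                                          # this rank's shard

# Hypothetical usage: rank-local bus data keyed by global bus id.
local_table = collective_insert([(rank * 10 + i, f"bus-{rank}-{i}") for i in range(3)])
```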


Fink G.A.,Pacific Northwest National Laboratory
2016 IEEE 3rd World Forum on Internet of Things, WF-IoT 2016 | Year: 2016

The growth of the Internet of Things (IoT) creates the possibility of decentralized systems of sensing and actuation, potentially on a global scale. IoT devices connected to cloud networks can offer sensing and actuation as a service enabling networks of sensors to grow to a global scale. But extremely large sensor networks can violate privacy, especially in the case where IoT devices are mobile and connected directly to the behaviors of people. The thesis of this paper is that by adapting differential privacy (adding statistically appropriate noise to query results) to groups of geographically distributed sensors, privacy could be maintained without ever sending all values up to a central curator and without compromising the overall accuracy of the data collected. This paper outlines such a scheme and performs an analysis of differential privacy techniques adapted to edge computing in a simulated sensor network where ground truth is known. The positive and negative outcomes of employing differential privacy in distributed networks of devices are discussed and a brief research agenda is presented. © 2016 IEEE.
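A minimal sketch of the core mechanism, assuming bounded sensor readings and the standard Laplace mechanism (the parameter names are ours, not from the paper): each edge group perturbs its own aggregate before reporting, so no central curator ever sees raw values.

```python
# Minimal Laplace-mechanism sketch for edge aggregation (our illustration).
import numpy as np

def private_group_mean(values, epsilon, value_range):
    """Differentially private mean of one sensor group's readings.
    With readings clipped to [lo, hi], changing one of n records moves the
    mean by at most (hi - lo) / n, so Laplace noise with scale
    sensitivity / epsilon gives epsilon-differential privacy."""
    lo, hi = value_range
    clipped = np.clip(values, lo, hi)
    sensitivity = (hi - lo) / len(clipped)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

# Hypothetical usage: 100 sensors in one geographic group, epsilon = 0.5.
readings = np.random.uniform(15.0, 30.0, size=100)   # e.g. temperatures
print(private_group_mean(readings, epsilon=0.5, value_range=(10.0, 40.0)))
```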


Romero-Gomez P.,Pacific Northwest National Laboratory | Richmond M.C.,Pacific Northwest National Laboratory
Journal of Hydraulic Research | Year: 2017

Studies of the stress/survival of migratory fish during downstream passage through operating hydro-turbines are normally conducted to determine the fish-friendliness of the hydro-turbine units. This study applies a modelling strategy based on flow simulations using computational fluid dynamics and Lagrangian particle tracking to represent the travel of live fish and autonomous sensor devices through hydro-turbine intakes. For the flow field calculation, the simulations were conducted using a Reynolds-averaged Navier–Stokes turbulence model and an eddy-resolving technique. For the particle-tracking calculation, different modelling assumptions for turbulence forcing, mass formulation, buoyancy, and release conditions were tested. The modelling assumptions are evaluated with respect to datasets collected using a laboratory physical model and an autonomous sensor device deployed at Ice Harbor Dam (Snake River, State of Washington, USA) at the same discharge and release point modelled in the present work. We found acceptable agreement between the simulated results and observed data and discuss relevant features of Lagrangian particle movement that are critical in turbine design and in the experimental design of field studies. © 2017 International Association for Hydro-Environment Engineering and Research
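For orientation, a common form of the Lagrangian equation of motion underlying such particle tracking is shown below; this is a generic sketch, and the paper evaluates several mass and buoyancy formulations beyond it. Here v_p is the particle velocity, u the local fluid velocity, ρ_f the fluid density, C_D the drag coefficient, A_p and V_p the particle's cross-section and volume, and m_p its mass.

```latex
% Generic drag-plus-buoyancy particle equation of motion (illustrative form):
m_p \frac{d\mathbf{v}_p}{dt}
  = \tfrac{1}{2}\,\rho_f C_D A_p \left| \mathbf{u}-\mathbf{v}_p \right| (\mathbf{u}-\mathbf{v}_p)
  + \left(m_p - \rho_f V_p\right)\mathbf{g}
```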


Bullock R.M.,Pacific Northwest National Laboratory | Das A.K.,Pacific Northwest National Laboratory | Appel A.M.,Pacific Northwest National Laboratory
Chemistry - A European Journal | Year: 2017

Electrocatalysts are critically important for a secure energy future, as they facilitate the conversion between electrical and chemical energy. Molecular catalysts offer precise control of structure that enables understanding of structure-reactivity relationships, which can be difficult to achieve with heterogeneous catalysts. Molecular electrocatalysts can be immobilized on surfaces by covalent bonds or through non-covalent interactions. Advantages of surface immobilization include the need for less catalyst, avoidance of bimolecular decomposition pathways, and easier determination of catalyst lifetime. This Minireview highlights surface immobilization of molecular electrocatalysts for reduction of O2, oxidation of H2O, production of H2, and reduction of CO2. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.


News Article | April 24, 2017
Site: www.eurekalert.org

(DOE/Pacific Northwest National Laboratory) Pacific Northwest National Laboratory is collaborating with three small businesses to address technical challenges concerning hydrogen for fuel cell cars, bio-coal and nanomaterial manufacturing.


News Article | April 20, 2017
Site: www.eurekalert.org

RICHLAND, Wash. - When Shuaiwen Leon Song boots up Doom 3 and Half-life 2, he does so in the name of science. Song studies high performance computing at Pacific Northwest National Laboratory, with the goal of making computers smaller, faster and more energy efficient. A more powerful computer, simply put, can solve greater scientific challenges. Like modeling complex chemical reactions or monitoring the electric power grid. The jump from supercomputers to video games began when Song asked if hardware called 3D stacked memory could do something it was never designed to do: help render 3D graphics. 3D rendering has advanced science with visualizations, models and even virtual reality. It's also the stuff of video games. "We're pushing the boundaries of what hardware can do," Song said. "And though we tested our idea on video games, this improvement ultimately benefits science." Song collaborated with researchers from the University of Houston to develop a new architecture for 3D stacked memory that increases 3D rendering speeds up to 65 percent. The researchers exploited the hardware's feature called "processing in memory," the results of which they presented at the 2017 IEEE Symposium on High Performance Computer Architecture, or HPCA. A normal graphics card uses a graphics processing unit, or GPU, to create images from data stored on memory. 3D stacked memory has an added logic layer that allows for the memory to do some processing too -- hence the name "processing in memory." This essentially reduces the data that has to travel from memory to GPU cores. And like an open highway, less traffic means faster speeds. The researchers found the last step in rendering -- called anisotropic filtering -- creates the most traffic. So by moving anisotropic filtering to the first step in the pipeline, and performing that process in memory, the researchers found the greatest performance boost. Song tested the architecture on popular games such as Doom 3 and Half-life 2. Virtual aliens and demons aside, this research is not so different than Song's other work. For example, Song is exploring how high performance computers can model changing networks of information, and how to predict changes in these graphs. With research questions like these, Song means to push the boundaries of what computers can do. The work was funded by DOE's Office of Science. Interdisciplinary teams at Pacific Northwest National Laboratory address many of America's most pressing issues in energy, the environment and national security through advances in basic and applied science. Founded in 1965, PNNL employs 4,400 staff and has an annual budget of nearly $1 billion. It is managed by Battelle for the U.S. Department of Energy's Office of Science. As the single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information on PNNL, visit the PNNL News Center, or follow PNNL on Facebook, Google+, Instagram, LinkedIn and Twitter.


News Article | May 2, 2017
Site: www.eurekalert.org

Oak Ridge National Laboratory scientists designed software for St. Jude Children's Research Hospital that significantly sped processing of microscopy images used in brain development research. The software provided frame-by-frame analysis of video taken of a mouse brain cell in a matter of hours compared with traditional manual techniques that can take weeks. St. Jude's researchers are exploring how neurons migrate in the developing brain to better understand how defects in the process may result in certain disorders such as epilepsy or intellectual disability. "Automatic processing of the images saves time and provides more data for better statistics in these studies," said ORNL's Ryan Kerekes. Dr. David Solecki of St. Jude's Developmental Neurobiology Department said his collaboration with ORNL has been "indispensable." The ORNL tools "free up biologists to do what they do best: apply their expert knowledge to analysis tasks," he said. Results of the research were published in Nature Communications. [Contact: Stephanie Seay, (865) 576-9894; seaysg@ornl.gov] Caption: A still image processed using ORNL software tracks a migrating neuron within a mouse's brain cell.

A bottleneck to breaking down woody plants for use in biofuels or other products may occur at the plant cell wall's surface, according to a new Oak Ridge National Laboratory study. Researchers exposed samples of non-pretreated poplar to a microorganism called Clostridium thermocellum. The team found that the breakdown of carbohydrates during microbial fermentation ceased prematurely in the secondary cell wall, when the plant's sugar material was only about 30 percent processed. "The surface quickly goes through significant changes and becomes non-productive for further degradation by enzymes even as 70 percent of usable plant sugars are still trapped in the cell wall structure," said ORNL's Alexandru Dumitrache. While further research is required to resolve the holdup, results of the study published in Green Chemistry for the first time quantified the changes in the surface chemistry. [Contact: Sara Shoemaker, (865) 576-9219; shoemakerms@ornl.gov] Caption: ORNL researchers used sophisticated laser scanning techniques to compare the breakdown of fermented poplar (B) with unfermented poplar (A), as they quantified, for the first time, chemical changes in the cell wall's surface.

Predicting how ecosystems might respond to environmental change could become more precise thanks to a new method known as a process sensitivity index developed by Oak Ridge National Laboratory, Florida State University and Pacific Northwest National Laboratory. Scientists use simulations to predict how a range of environmental changes might affect forests, grasslands, hydrology and other ecosystems. But because these complex models represent many sub-processes, they can produce a wide range of predictions. The process sensitivity index can scan existing computational models and identify the processes that cause the most uncertainty, suggesting where further research will provide the largest benefit. The researchers demonstrated their approach on groundwater models but note that the index can be applied to any modeled system. Results of the study were published in Water Resources Research. [Contact: Sara Shoemaker, (865) 576-9219; shoemakerms@ornl.gov] Caption: This diagram represents 11 ecosystem models used to predict how environmental changes might affect processes such as canopy layering, vegetation, soil layering, roots; carbon (C), water (W) and nitrogen (N) cycles and energy balance (E). Scientists developed a new process sensitivity index method aimed at addressing model uncertainty and making more precise predictions.

By taking advantage of a phenomenon known as "quantum mechanical squeezing," researchers have conceptually designed a new method of applying atomic force microscopy. Ali Passian of Oak Ridge National Laboratory and George Siopsis of the University of Tennessee introduced a novel method of making measurements in a paper published in Physical Review A. The technique, which they named quantum atomic force microscopy, has the potential to significantly increase the resolution of AFM. The method takes advantage of the fine interactions between the probe of the microscope and the surface of the sample to find the "sweet spot" where quantum effects stabilize the probe, resulting in more sensitive measurements. "That's the theoretical prediction of this effect," Passian said. "Of course, the experiments will have the final say on how much better we can do, but the basic concept and theory are viable." [Contact: Jim Pearce, (865) 241-2427; pearcejw@ornl.gov] Caption: When two objects approach each other, an interfacial interaction force becomes significant. By using this force, one may utilize quantum effects to advantageously control the motion of the probe.


News Article | April 27, 2017
Site: www.rdmag.com

Today marks the 31st anniversary of the catastrophic explosion at the Chernobyl Nuclear Power Plant's Unit 4 reactor. The blast discharged 400 times the radioactivity released by the Hiroshima bomb and drove nearly 200,000 people from their homes near the plant in Ukraine. Now, the hastily built sarcophagus used to temporarily contain what remained of the reactor's hull after the meltdown has been permanently entombed. A massive steel arch was built, and in 2016, slid over the sarcophagus where it is expected to safely and securely contain the radioactive debris for 100 years. In the early 1990s, Battelle, operator of the Department of Energy's Pacific Northwest National Laboratory, was part of an international consortium looking at the long-term safety and containment of Unit 4 at Chernobyl. Through 2014, Battelle researchers at PNNL applied their expertise in nuclear science, safety, remediation and engineering to help Ukrainians. Among their many contributions, researchers led the early designs for the arch steel structure called the New Safe Confinement. The effort was billed as the world's largest moveable structure — 843 feet across, 355 feet high and 492 feet in length. That's roughly the size of two Manhattan blocks and tall enough to enclose the Statue of Liberty. Though Battelle withdrew from the project in 2014, a few Battelle researchers remained "on loan" to Bechtel at Slavutych to oversee construction and movement of the NSC to its final destination. Construction of the nearly 40,000-ton structure began in 2010, and it was delicately moved in November 2016 over the sarcophagus. Battelle's Andrei Glukhov, who was a reactor operator at the Chernobyl Nuclear Power Plant when the catastrophe occurred, was among those who remained. Glukhov and other researchers recently returned to PNNL. But from 1994 through 2014, more than 200 employees contributed to help improve safety at the Chernobyl site. Several researchers uprooted entire families, relocating them from the Tri-Cities to Slavutych to be closer to where the solutions were needed. In addition to contributing scientific research and engineering, they introduced to Slavutych one of the U.S.'s favorite games — baseball.


The US Department of Energy’s (DOE) Small Business Vouchers (SBV) Pilot has selected eight DOE national labs for collaborations with 38 small businesses in the third round of funding. Among these are two projects in the fuel cells area and four projects in the vehicle area. Other projects address advanced manufacturing, bioenergy, buildings, geothermal, solar, water and wind technologies. In the first two rounds of the program, 12 DOE national labs received funding to partner with 76 small businesses. With the latest announcement, SBV will have awarded approximately $22 million to support partnerships between 114 US small businesses and the national labs. Hawaii Hydrogen Carriers. Savannah River National Laboratory has been awarded $300,000 to work with Hawaii Hydrogen Carriers to perform analysis on the performance and design of low pressure hydrogen storage systems to power mobile applications of Proton Exchange Membrane hydrogen fuel cells. Using SRNL’s unique modeling and system testing capabilities for metal hydride-based systems will help provide potential partners with realistic performance and cost estimates. Emerald Energy NW. Pacific Northwest National Laboratory has been awarded $160,000 to work with Emerald Energy NW, LLC to fabricate and test a low-friction, low-loss, versatile rotary magnetic wheel seal test apparatus in collaboration with the PNNL magnetic liquefier team. This project could result in the design of a breakthrough rotary wheel to allow for a more rapid transition to cleaner, domestic, and less expensive gaseous fuels for the transportation sector. Efficient Drivetrains. The National Renewable Energy Laboratory has been awarded $140,000 to work with Efficient Drivetrains to test a lightweight, plug-in hybrid electric vehicle (PHEV) powertrain. This project will help get the first heavy-duty Class 6 vehicle to the commercial market, offering consumers an option that provides significant fuel economy without limiting driving range or fuel options. Phinix. Argonne National Laboratory has been awarded $300,000 to work with Phinix, LLC to validate and scale up a method for extracting magnesium from magnesium aluminide scrap metal alloys. This energy, environmental, and cost-efficient method of sourcing magnesium has the potential to reduce the amount of magnesium—the third most commonly used structural metal—needed to import from foreign countries. Precision Polyolefins. Argonne National Laboratory has been awarded $180,000 to work with Precision Polyolefins LLC, of College Park, MD, to test its new technology that converts inexpensive and abundant feedstocks derived from natural gas into synthetic oils for use in auto lubricant. This project could potentially improve fuel economy by up to 0.5%, as well as have applications beyond vehicles, such as for industrial gear oils and wind turbine gear oils. Advano. Argonne National Laboratory has been awarded $180,000 to work with Advano to develop functionalized silicon nanoparticles, which are used in the growing demand for lithium-ion batteries. By partnering with ANL, this project seeks to lower the cost of silicon nanoparticles which could significantly increase the specific energy of lithium-ion batteries. In the advanced manufacturing area, the National Renewable Energy Laboratory has been awarded $70,000 to work with BASiC 3C, Inc. towards developing a new semiconductor as a replacement for silicon. This would provide greater efficiency, voltage capability, temperature operation, and higher tolerance to harsh operating conditions than existing models. The project will work with NREL to identify any remaining impurities in the current model, with the goal of disrupting the silicon power switch industry, currently a $12B market. Additionally, according to Toyota, development of a semiconductor material will increase the range of electric vehicles by 10%.


News Article | April 17, 2017
Site: cen.acs.org

Capturing and using carbon dioxide produced by power plants and other sources could help meet climate emissions goals. In theory, CO2 is easy to capture. Because it’s acidic, it reacts readily with simple bases such as amines. But in practice, amine scrubbing, the CO2-capture method used by some power plants to clean flue gases, gets bogged down because it traps the greenhouse gas in water-based solutions. Heating these large amounts of water to release the captured CO2 and regenerate the amines requires a great deal of energy. To address this challenge, Fuyuhiko Inagaki and his research team at Kanazawa University report a family of amines that absorb CO2 but not water, potentially reducing the amount of energy needed to run the scrubbing process (J. Am. Chem. Soc. 2017, DOI: 10.1021/jacs.7b01049). Inagaki and colleagues, who are medicinal chemists, developed the amines while trying to create a dry stream of CO2 from air for use as a building block for pharmaceutical synthesis. Amines absorb water in addition to CO2, yielding “wet” CO2, which can interfere in chemical reactions and corrode pipes. The researchers hypothesized that adding hydrophobic side groups, such as phenyls, onto amines used to capture CO2 could help reduce water absorption. So they tested the CO2 absorption ability of a variety of benzene-linked amines, ranging from simple benzylamine to xylylenediamines. Then the group exposed the best-performing amines to open air for two weeks to determine the amount of water absorbed relative to CO2. The researchers discovered that the xylylenediamines were the most promising: They absorbed no water at all. In comparison, monoethanolamine, the standard compound used for amine scrubbing, absorbed three molecules of water for every molecule of CO2. When the researchers dissolved the best-performing xylylenediamine in water and exposed the solution to air, the xylylenediamine-CO2 product precipitated as a white solid that contained no water. Investigating the solid’s structure revealed a bilayer with the CO2 molecules sandwiched between sheets on the inside and the hydrophobic phenyl groups interacting with adjacent sheets on the outside. “One part of the molecule keeps the water away, while the other part is just right for capturing carbon dioxide,” says Vanda A. Glezakou of Pacific Northwest National Laboratory, who was not involved in the study. The researchers released the CO2 by heating the solid product to 120 °C and showed that it could be used directly in the notoriously moisture-sensitive Grignard reaction. Inagaki and his team are now testing the xylylenediamine’s stability and exploring the feasibility of preparing it on a large scale for real-world applications in collaboration with industry and government agencies in Japan.


News Article | April 17, 2017
Site: www.eurekalert.org

IMAGE: A small change to a peptoid that crystallizes in one step (left) sends the modified peptoid down a more complicated path from disordered clump to crystal (right).

RICHLAND, Wash. -- Silky chocolate, a better medical drug, or solar panels all require the same thing: just the right crystals making up the material. Now, scientists trying to understand the paths crystals take as they form have been able to influence that path by modifying the starting ingredient. The insights gained from the results, reported April 17 in Nature Materials, could eventually help scientists better control the design of a variety of products for energy or medical technologies. "The findings address an ongoing debate about crystallization pathways," said materials scientist Jim De Yoreo at the Department of Energy's Pacific Northwest National Laboratory and the University of Washington. "They imply you can control the various stages of materials assembly by carefully choosing the structure of your starting molecules." One of the simplest crystals, diamonds are composed of one atom -- carbon. But in the living world, crystals, like the ones formed by cocoa butter in chocolate or ill-formed ones that cause sickle cell anemia, are made from molecules that are long and floppy and contain a lengthy well-defined sequence of many atoms. They can crystallize in a variety of ways, but only one way is the best. In pharmaceuticals, the difference can mean a drug that works versus one that doesn't. Chemists don't yet have enough control over crystallization to ensure the best form, partly because chemists aren't sure how the earliest steps in crystallization happen. A particular debate has focused on whether complex molecules can assemble directly, with one molecule attaching to another, like adding one playing card at a time to a deck. They call this a one-step process, the mathematical rules for which scientists have long understood. The other side of the debate argues that crystals require two steps to form. Experiments suggest that the beginning molecules first form a disordered clump and then, from within that group, start rearranging into a crystal, as if the cards have to be mixed into a pile first before they could form a deck. De Yoreo and his colleagues wanted to determine if crystallization always required the disordered step, and if not, why not. To do so, the scientists formed crystals from a somewhat simplified version of the sequence-defined molecules found in nature, a version they call a peptoid. The peptoid was not complicated -- just a string of two repeating chemical subunits (think "ABABAB") -- yet complex because it was a dozen subunits long. Based on its symmetrical chemical nature, the team expected multiple molecules to come together into a larger structure, as if they were Lego blocks snapping together. In a second series of experiments, they wanted to test how a slightly more complicated molecule assembled. So, the team added a molecule onto the initial ABABAB... sequence that stuck out like a tail. The tails attracted each other, and the team expected their association would cause the new molecules to clump. But they weren't sure what would happen afterwards. The researchers put the peptoid molecules into solutions to let them crystallize. Then the team used a variety of analytical techniques to see what shapes the peptoids made and how fast. It turns out the two peptoids formed crystals in very different fashions.
As the scientists mostly expected, the simpler peptoid formed initial crystals a few nanometers in size that grew longer and taller as more of the peptoid molecules snapped into place. The simple peptoid followed all the rules of a one-step crystallization process. But thrusting the tail into the mix disrupted the calm, causing a complex set of events to take place before the crystals appeared. Overall, the team showed that this more complicated peptoid first clumped together into small clusters unseen with the simpler molecules. Some of these clusters settled onto the available surface, where they sat unchanging before suddenly converting into crystals and eventually growing into the same crystals seen with the simple peptoid. This behavior was something new and required a different mathematical model to describe it, according to the researchers. Understanding the new rules will allow researchers to determine the best way to crystallize molecules. "We were not expecting that such a minor change makes the peptoids behave this way," said De Yoreo. "The results are making us think about the system in a new way, which we believe will lead to more predictive control over the design and assembly of biomimetic materials." This work was supported by the Department of Energy Office of Science and PNNL's Laboratory Directed Research and Development program. Reference: Xiang Ma, Shuai Zhang, Fang Jiao, Christina Newcomb, Yuliang Zhang, Arushi Prakash, Zhihao Liao, Marcel Baer, Christopher Mundy, Jim Pfaendtner, Aleksandr Noy, Chun-Long Chen and Jim De Yoreo, Tuning crystallization pathways through sequence-engineering of biomimetic polymers. Nature Materials April 17, 2017 DOI: 10.1038/nmat4891 (In press.) Interdisciplinary teams at Pacific Northwest National Laboratory address many of America's most pressing issues in energy, the environment and national security through advances in basic and applied science. Founded in 1965, PNNL employs 4,400 staff and has an annual budget of nearly $1 billion. It is managed by Battelle for the U.S. Department of Energy's Office of Science. As the single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information on PNNL, visit the PNNL News Center, or follow PNNL on Facebook, Google+, LinkedIn and Twitter.


News Article | May 3, 2017
Site: www.scientificcomputing.com

When a molecule of water comes in for a landing on the common catalyst titanium oxide, it sometimes breaks up and forms a pair of molecule fragments known as hydroxyls. But scientists had not been able to show how often the breakup happened. Now, researchers have determined that water is only slightly more likely to stay in one piece as it binds to the catalyst surface than it is to form the hydroxyl pairs. The result -- water's advantage is so small -- might surprise some chemists. But understanding that small advantage has wide-ranging significance for a variety of potential applications in industries that use titanium dioxide. These industries include alternative fuel production, solar energy and food safety, and even self-cleaning windows. It will also help scientists better understand how acids behave and expand their knowledge of how molecules split. "How water binds was the big question," said chemist Zdenek Dohnalek at the Department of Energy's Pacific Northwest National Laboratory. "Chemists had mixed information from a lot of different methods, and theorists also had ideas. Using a unique combination of instruments, we've finally solved it." The team reported the work in the Proceedings of the National Academy of Sciences. Even though many industries use titanium oxide to help speed up chemical reactions, scientists have not uncovered all of its secrets. A key mystery, researchers have long debated, is the way in which water interacts with titanium oxide. The interaction is important in its own right to split water, but it also influences the course of many reactions in general. On titanium oxide's surface, molecules of water switch between being intact and splitting into hydroxyls. Even though there are many different ways of measuring the ratio of intact water to hydroxyls at any given time, scientists have not been able to nail it down for decades. To explore the problem, PNNL researchers combined different tools in a new way. They sent beams of water at various speeds onto cold titanium oxide sitting under a very high resolution microscope known as a scanning tunneling microscope. The microscope let them visualize the catalyst's titanium and oxygen atoms. The atoms appear as bright and dark rows, like a cornfield with tall rows of corn alternating with ditches, and individual molecules of water appear as bright spots that don't align with the rows. In addition to viewing water molecules as they hit the surface, the team simulated the atoms' interactions in exacting detail on a high performance computer. Combining experiments and simulations allowed the team to settle the long-standing debate. Shaped like a V, a water molecule has a fat oxygen atom in the middle bound to two smaller hydrogen atoms on either side. Titanium oxide helps break the bonds between the atoms to push a chemical reaction forward: the titanium atoms trap water molecules, while nearby oxygens, also part of the catalyst surface, draw away and then capture one of the hydrogen atoms. When this happens, two hydroxyls are formed, one from a surface oxygen combining with the hydrogen and the other left over from the water molecule. The scientists needed to know how often the hydroxyls formed. Do water molecules largely stay intact on the surface? Or do they immediately convert to hydroxyls? How likely water will stay intact on titanium oxide -- and how easily the hydroxyls reform into water -- sets the stage for other chemical reactions.
To find out, the chemists had to develop technologies to measure how often the hydroxyls arose on the surface. Using resources developed within EMSL, the Environmental Molecular Sciences Laboratory, a DOE Office of Science User Facility at PNNL, they shot a beam of water molecules at a titanium oxide surface at low energy -- the beam shooting slowly, and at high energy -- moving fast like out of a firehose. They ended up with bright spots on the surface, and the higher the energy, the more spots. But the spots did not look bright enough to include both hydroxyls, as expected, so they performed additional experiments to determine what the spots were. The team shot water at the titanium dioxide surface and then froze the water in place. Then they slowly warmed everything up. Raising the temperature revealed the spots -- which they thought were at least one hydroxyl -- changing into water molecules. This meant that each spot had to actually be a pair of hydroxyls because the evidence showed that all the raw materials needed to make a water molecule were sitting there, and both hydroxyls were needed. They performed various other experiments to determine the temperature at which a landing water molecule converts into hydroxyl pairs and vice versa. From that they learned that water is only slightly more stable than the hydroxyl pairs on the surface -- 10 percent more, if we go by the amount of energy it takes to disrupt them. Simulating the water landings on a high performance computer, also at EMSL, the researchers found out the only water molecules that stuck to the catalyst were ones that landed in a figurative ditch within a cornfield, where the water's oxygen faced a titanium atom down in the ditch. If the water came in with just the right speed, the water reoriented and docked one of its hydrogens towards a nearby oxygen, forming the hydroxyl pairs seen in the experiments. If not, the water molecule just bounced off. "We discovered that electrostatics -- the same static that makes sparks when you rub your feet on the carpet -- helped steer the water molecules onto the surface," said theoretical chemist and coauthor Roger Rousseau. All of these details will help researchers understand catalysis better and improve our understanding of chemical reactions. In addition, the results reveal a value that scientists have long tried to nail down -- how easy or hard it is for water to lose a hydrogen on titanium oxide.


News Article | May 8, 2017
Site: www.eurekalert.org

Study over Eastern North Atlantic aims to clear up understanding of how aerosols affect low marine clouds and improve assessments of their impact on Earth's climate

IMAGE: Using an aircraft outfitted with 55 atmospheric instrument systems, scientists will traverse horizontal tracks above and through clouds and spiral down through atmospheric layers to provide detailed, on-site measurements of...

UPTON, NY--For an intensive period throughout June and July, and again next January and February, scientists from U.S. Department of Energy (DOE) laboratories will take to the skies in a specially outfitted aircraft to gather data on the properties of aerosols and clouds above the Eastern North Atlantic. They'll traverse horizontal tracks above and through clouds and spiral down through atmospheric layers to gather samples and measurements that add fine-scale detail to 3D cloud profiles constructed by nearby instruments on the ground. This synergistic sampling from throughout the clouds and below will improve scientists' understanding of how aerosol properties affect low marine boundary layer clouds, and help improve the representations of clouds and aerosols in models of Earth's climate. "Low level clouds are abundant and can have a big influence on global climate by affecting how much solar energy is reflected back into space," said Jian Wang, an atmospheric scientist at DOE's Brookhaven National Laboratory and lead researcher on the study. "One of the biggest uncertainties in Earth system models is our understanding of how clouds respond to changes in the properties of aerosols, tiny particles wafted into the atmosphere as salty sea spray or emitted from distant industrial sources." To improve that understanding, DOE's Atmospheric Radiation Measurement (ARM) Climate Research Facility has conducted a variety of aerosol studies. They recently installed a fixed ground station with radars and other meteorological instruments on Graciosa Island in the Azores west of Portugal. This site operates 24/7 collecting data on aerosols, trace gases, atmospheric thermodynamics, and other factors that influence clouds and climate in the Eastern North Atlantic--an area influenced by a variety of aerosol sources where persistent low marine boundary layer clouds are highly susceptible to changes in aerosol particle properties. The new study will supplement the year-round, ground-based dataset with a series of airborne sampling missions--approximately 20 in the summer and 20 in the winter--flown aboard ARM's Gulfstream-159 (G-1) research aircraft. The G-1 is the centerpiece of the ARM Aerial Facility managed by DOE's Pacific Northwest National Laboratory (PNNL). For this study it will be outfitted with 55 atmospheric instrument systems that will provide detailed "on-site" measures of the physical and chemical properties of aerosols and cloud particles, including variations in their distribution throughout the horizontal and vertical structure of the clouds. One instrument on board for the first time on a DOE aircraft will be a Holographic Detector for Clouds (HOLODEC), a wing-mounted device that takes detailed multidimensional pictures of cloud droplets to provide information about how they interact on microphysical scales. The data will help the scientists learn how the size distribution of cloud droplets is affected by the mixing of cloudy, wet air and drier, non-cloudy air. On some flights, the scientists will also sample the free troposphere, high above the boundary layer clouds.
Data collected there will help determine whether the observed aerosol particles are coming from local sources or being transported long distances. The G-1 will take off from a joint U.S.-Portugal airbase 90 kilometers from Graciosa, with a large fraction of the computer-choreographed flight tracks crossing within the ranges of the ground-based scanning radars. Flying within range of the radars will allow scientists to use the detailed microphysical information obtained from the aircraft flights to evaluate, validate, and improve how they interpret the ground-based data. Collaborators will also bring samples of aerosol particles collected during the flights to PNNL's Environmental Molecular Sciences Laboratory and Lawrence Berkeley National Laboratory's Advanced Light Source--both DOE Office of Science User Facilities--for subsequent analysis. "Overall, the combination of ground-based and airborne data will give us a more complete picture of aerosols and clouds, particularly allowing us to better quantify the processes controlling the life cycles of aerosol and clouds, and how clouds respond to aerosol changes," said Wang. The ARM Climate Research Facility is a DOE Office of Science User Facility. The ARM Facility is operated by nine DOE national laboratories. Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | April 7, 2017
Site: www.csmonitor.com

A worker climbs stairs among some of the 2,000 pressure vessels that will be used to convert seawater into fresh water through reverse osmosis in the western hemisphere's largest desalination plant in Carlsbad, Calif., on Wednesday, March 11, 2015.

In recent decades, arid regions around the world have turned to the seas for drinking water. Desalination – removing salt from seawater – offers dry places from the Middle East to the American Southwest an alternative to scarce rainfall and groundwater. With 1.8 billion people projected to live "in countries or regions with absolute water scarcity" by 2025, the demand for solutions to water-supply challenges looks set to grow in coming years, according to the United Nations. Better filtration membranes could make desalination more viable as one of these solutions. Most modern plants use reverse osmosis, an energy-intensive process of pumping seawater through membranes to filter out the salt. Since the 1960s, these membranes have almost always been made of polymers. But recent research has aimed to reduce the pumping energy by instead making them from "nanomaterials." On Monday, researchers at the University of Manchester's National Graphene Institute in England announced that they had developed a membrane from one such material: graphene oxide. Their new membrane marks a step forward for desalination technology. But it still faces a challenging road to the world's seawater treatment plants, suggesting that the search for solutions to humanity's thirst is far from over. "The big cost is the energy cost of desalination and the reliability of the membranes, how long they last," explains Peter Gleick, co-founder and president emeritus of the Pacific Institute in Oakland, Calif. "I think that the current research results are a great step forward for both of those issues … but in the end, it's really going to depend on our ability to commercialize this process." For more than three decades, Dr. Gleick has researched and worked to raise awareness of water availability issues. During that time, some cities have seen the cost per cubic meter of desalinated water drop by half, from $1.50 to 75 cents. But in a phone interview, Gleick tells the Monitor that most gains have come as "incremental improvements," with membrane costs dropping and lifetimes extending. In many locales, these gains haven't been enough to justify the costs of desalination. In 2015, "a thousand gallons of freshwater from a desalination plant costs the average US consumer $2.50 to $5 ... compared to $2 for conventional freshwater," reported Public Radio International. "There hasn't been what I would describe as a revolutionary breakthrough," Gleick says. In recent years, graphene – a crystalline layer of carbon just one atom thick – seemed to promise such a breakthrough. Computer models by researchers at the Massachusetts Institute of Technology in Cambridge, Mass., "showed that graphene membranes could cut the energy used in reverse osmosis by 15 to 46 percent. Even better, the high permeability could mean that far less surface area is needed to get the job done, so the entire [desalination] plant could be half the size," reports David Talbot for MIT Technology Review. However, "It has been difficult to produce large quantities of single-layer graphene using existing methods," says Jijo Abraham, a research scholar at the University of Manchester and co-author on the new study, in an email to the Monitor.
He adds that "current production routes are also quite costly." Just four square centimeters of the substance, available on Graphenea.com, will set you back $146. "However," he continues, "graphene oxide," which consists of a graphene-like lattice of carbon atoms studded with oxygen groups, "can be produced by the simple oxidation of the parent material graphite (a naturally available material) and is very cheap to produce industrially." The researchers used this material to develop their membranes, explains materials scientist Ram Devanathan, who wrote a commentary accompanying the study in Nature Nanotechnology. Currently the acting division director of the Earth Systems Science Division at Pacific Northwest National Laboratory, Dr. Devanathan stresses in an email that the team demonstrated ways to "control layer spacing," keeping the graphene oxide layers from drifting apart when immersed in water. That's crucial for desalination: a membrane's pores need to be large enough to let water molecules through, but small enough to keep salt's sodium and chloride ions out. To understand how the new membrane accomplishes this goal, he suggests: "Think of a stack of papers, except the individual sheets are not flat but have ripples. Now apply epoxy layers to the top and bottom of the stack to keep the stack from changing size. In this analogy, the graphene oxide layers are the individual sheets of paper." Mr. Abraham, currently pursuing his PhD in physics, says that the methods they tested "are quite effective for desalination in a lab scale." But crops and consumers need water on more than just a lab scale. While Gleick of the Pacific Institute says he's "as hopeful as the next person that there will be a big breakthrough," he's also clear-eyed about the challenges that such giant leaps face. For the graphene oxide membranes to see use in treatment plants, "they have to be able to produce very large volume membranes that are very reliable in the long run." He also cautions that any new technology will generate salty, sea-life-threatening brine as waste. Meanwhile, desalination may be reaching the limits of its efficiency. The laws of thermodynamics set a "theoretical limit to how much energy is required to basically take salt out of water … current technology is pretty close to that." Can graphene oxide still do better? Abraham acknowledges that "more studies are needed to make it practically possible on an industrial scale," adding that the team still lacks data on the membranes' long-term performance. "Achieving a defect-free, large area membrane will be a major challenge for us," he continues. "We are expecting collaboration with other membrane manufacturers to tackle this issue and take this technology from the lab scale to the pilot scale in a few years' time." "In principle, we do not see any major challenges" for large-scale production, he says, "but we need support from the industry." But in many corners of the world, utilities may see less high-tech methods as the best way to meet their customers' needs. In California, for example, "a dollar spent on conservation or efficiency, or advanced wastewater treatment and reuse, gives you more than a dollar spent on desalination," says Gleick. So far, the presence or lack of other options has determined the degree to which different places desalinate.
While the first reverse-osmosis plant was built in California, the Golden State’s once-abundant rivers and snowmelt discouraged investment in the technology, the Monitor’s Christa Case Bryant explained in 2015. Instead, it was Israeli firms, trying to sustain farms in the parched Middle East, who honed the method. Now, drier climates may be making desalination a more cost-effective option, with Israeli firms bringing their expertise to California and elsewhere. This trend could give the new graphene oxide membrane a home in future treatment plants. But Gleick predicts that different places will still need different tools for navigating a drier future. "Wherever you are," he advises, "think carefully about the economics of your options, and the environmental and social and political factors also are important. And that will determine where you want to spend money."
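The thermodynamic limit Gleick mentions can be made concrete with a rough, textbook estimate (the figures below are standard reference values, not numbers from the article). The minimum work to separate fresh water equals the osmotic pressure of seawater as the recovery fraction approaches zero:

\[ w_{\min} \approx \pi_{\text{seawater}} \approx 2.7\ \text{MPa} = 2.7\ \text{MJ m}^{-3} \approx 0.75\ \text{kWh per m}^{3} \]

Recovering a realistic ~50% of the feed raises the floor to roughly 1 kWh per cubic meter, while efficient modern reverse-osmosis plants operate at about 2 to 3.5 kWh per cubic meter; being within a small multiple of that theoretical floor is what "pretty close" means in practice.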


News Article | April 28, 2017
Site: www.greencarcongress.com

A team from Pacific Northwest National Laboratory (PNNL), with colleagues from General Motors R&D, has developed a design principle for stable, high-capacity Li-ion battery silicon (Si) anodes with controlled thickness swelling, and synthesized a porous Si/C-graphite electrode as an example. Experimental data from the porous Si/C-graphite electrode showed excellent agreement with the theoretical design. The practical electrode (~3 mAh/cm2 loading) with ~650 mAh/g specific capacity had ~82% capacity retention over 450 cycles. The initial electrode swelling upon full lithiation was modest, and a full cell pairing a Li(Ni1/3Mn1/3Co1/3)O2 cathode with the pre-lithiated anode demonstrated >84% capacity retention over 300 cycles. A paper on the work is published in the RSC journal Energy & Environmental Science. As has been noted many times, silicon has a high theoretical specific capacity of ~4200 mAh/g, making it a very attractive anode candidate for high-energy-density Li-ion batteries. However, the large volume expansion (>300%) during the lithiation process has posed commercialization challenges due to the resulting fast capacity fade of the electrode. Many different approaches have been devised to address this problem, and significant progress has been made, the PNNL-GM team notes. However: Despite these efforts, electrode swelling of Si-based anodes, which is one of the most critical challenges for the practical application of batteries, has been much less investigated. Limited swelling of a yolk-shell structure Si-C composite electrode was demonstrated, yet it was based on very thin electrodes. Recently, Si-nanolayer-embedded graphite prepared using the chemical vapor deposition of Si on graphite was reported to have ~517 mAh/g specific capacity and 96% capacity retention over 100 cycles with 50-cycle end-of-life swelling of ~38%. However, new approaches are still needed to further increase the specific capacity, improve the cyclability, and reduce the cost for large-scale production of Si-based anodes. The researchers synthesized the porous Si/C by electrochemical etching, with a controlled porous structure in which thin crystalline Si walls (~10 to 20 nm) formed around large pores of up to ~50 nm in diameter. This was mixed with graphite at a controlled ratio targeting a specific capacity of ~650 mAh/g. Upon full lithiation, the material exhibited a linear expansion of only ~13% along the pore direction and ~8% perpendicular to the pore direction, compared to the ~60% linear expansion of Si nanoparticles. By controlling the weight ratio of the porous Si/C in electrodes, the team could fine-tune the capacity and the cyclability for different applications. Limited swelling of the porous Si/C-graphite electrode was demonstrated by an in situ electrochemical dilatometer and corroborated by cross-section scanning electron microscopy (SEM) images of the electrodes before and after cycling. The limited swelling and excellent electrochemical performance of the porous Si/C-graphite electrodes in half-cells and full-cells indicate that they have great potential as the practical, stable, high-capacity anodes required for next-generation Li-ion batteries.
With porous Si produced by other scalable and economical methods, such as thermite reduction of SiO2 or chemical etching of metallurgical-grade, micron-sized silicon powder, the team believes practical Si-based anodes will enter the large-scale market soon. The approaches for the rational design of practical electrodes developed in this work may also guide the development of other electrodes that experience large volume changes during operation.
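The ~650 mAh/g target above follows from a simple linear mixing rule over the two components. As an illustration (graphite's ~372 mAh/g theoretical capacity is standard; the ~1700 mAh/g value assumed here for the porous Si/C component is hypothetical, not a number reported in the article):

\[ C_{\text{mix}} = x\,C_{\text{Si/C}} + (1 - x)\,C_{\text{graphite}} \quad\Rightarrow\quad x = \frac{650 - 372}{1700 - 372} \approx 0.21 \]

On these assumptions, roughly a fifth of the active mass would be porous Si/C, which is how the weight ratio tunes capacity against cyclability.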


News Article | May 4, 2017
Site: www.materialstoday.com

Silky chocolate, a better medical drug and solar panels all require the same thing: just the right crystals making up the material. Scientists trying to understand the pathways crystals take as they form have now shown they can influence that pathway by modifying the starting ingredient. The insights gained from this work, reported in a paper in Nature Materials, could eventually give scientists better control over the design of a variety of products for energy and medical technologies. "The findings address an ongoing debate about crystallization pathways," said materials scientist Jim De Yoreo at the US Department of Energy's Pacific Northwest National Laboratory (PNNL) and the University of Washington. "They imply you can control the various stages of materials assembly by carefully choosing the structure of your starting molecules." Diamonds are one of the simplest crystals, composed of a single element – carbon. But in the living world, crystals, like the ones formed by cocoa butter in chocolate or the ill-formed ones that cause sickle cell anemia, are made from molecules like proteins that are long, floppy and contain a lengthy, well-defined sequence of many atoms. They can crystallize in a variety of ways, but only one way is best. In pharmaceuticals, these various crystallization pathways can mean the difference between a drug that works and one that doesn't. Chemists don't yet have enough control over the crystallization process to ensure the creation of the best crystal form, partly because they aren't sure how the earliest steps in crystallization occur. A particular debate has focused on whether complex molecules can assemble directly, with one molecule attaching to another, like adding one playing card at a time to a deck. Chemists call this a one-step process, the mathematical rules for which they have long understood. The other possibility is that crystals require two steps to form. Experiments suggest that, at the beginning of the crystallization process, molecules first form a disordered clump and then start rearranging into a crystal from within that group, as if the cards have to be mixed into a pile first before they can form a deck. De Yoreo and his colleagues wanted to determine if crystallization always requires the disordered step, and if not, why not. To do so, the scientists formed crystals from a somewhat simplified version of a protein, known as a peptoid. This peptoid was not complicated – just a string of two repeating chemical subunits (think 'ABABAB') – yet was still complex because it was a dozen subunits long. Based on its symmetrical chemical nature, the team expected multiple molecules to come together into a larger structure, as if they were Lego blocks snapping together. In a second series of experiments, they wanted to test how a slightly more complicated molecule assembled. So, the team added a molecule onto the initial ABABAB... sequence that stuck out like a tail. The tails attracted each other, and the team expected their association would cause the new molecules to clump. But they weren't sure what would happen afterwards. The researchers immersed the peptoid molecules in solution to let them crystallize. Then they used a variety of analytical techniques to see what shapes the peptoids made and how fast, discovering that the two different peptoids formed crystals in very different fashions.
As the scientists mostly expected, the simpler peptoid formed initial crystals a few nanometers in size that grew longer and taller as more of the peptoid molecules snapped into place. The simple peptoid followed all the rules of a one-step crystallization process. But thrusting a tail into the mix disrupted the calm, causing a complex series of events to take place before the crystals appeared. Overall, the team showed that this more complicated peptoid first clumped together into small clusters that were unseen with the simpler molecules. Some of these clusters settled onto the available surface, where they sat unchanging before suddenly converting into crystals and eventually growing into the same crystals seen with the simple peptoid. This behavior was something new and required a different mathematical model to describe it, according to the researchers. Understanding the new rules will allow researchers to determine the best way to crystallize molecules. "We were not expecting that such a minor change makes the peptoids behave this way," said De Yoreo. "The results are making us think about the system in a new way, which we believe will lead to more predictive control over the design and assembly of biomimetic materials." This story is adapted from material from Pacific Northwest National Laboratory, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier.
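For reference, the long-understood "rules" of one-step crystallization mentioned above are those of classical nucleation theory, in which the nucleation rate J depends exponentially on a free-energy barrier (a standard textbook form, not an equation from the paper):

\[ J = A\,\exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right), \qquad \Delta G^{*} = \frac{16\pi\,\gamma^{3}v^{2}}{3\,(k_{B}T\ln S)^{2}} \]

where \(\gamma\) is the crystal-solution interfacial free energy, \(v\) the molecular volume, and \(S\) the supersaturation. A two-step pathway departs from this picture: a disordered cluster forms first, and its conversion into a crystal, rather than molecule-by-molecule addition, controls the observed kinetics.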


As particles move in solution, they start "feeling" interactions from distances comparable with their size. Correlated motion of ions drives nanoparticles to approach each other facing matching sides. Then the same ion-induced forces direct fine adjustments of the particles' mutual orientation so that atoms on the surfaces are arranged as an ideal crystal. Then solvent leaves the gap between the particles, and a defect-free crystal is formed. Credit: Pacific Northwest National Laboratory

Look closely enough, and you'll see ingenious patterns everywhere in nature. Scientists and engineers have long understood this, but mimicking Mother Nature in building such patterns – especially highly ordered crystal structures – has proven challenging. Recently, Maria Sushko and Kevin Rosso at Pacific Northwest National Laboratory (PNNL) significantly advanced understanding by clarifying the driving forces behind particle-based crystal growth with their new computational approach. They learned that crystal growth depends on the subtle balance of interactions between atoms, ions, molecules, and particles. Their discovery holds significant promise for creating materials to address energy challenges. In natural crystal-growth processes, nanoparticle building blocks attach along specific crystal faces. Studying these examples, researchers were inspired to contemplate how they might create similar crystal structures for a range of practical applications, including energy storage. Armed with a greater understanding of the fundamental processes underlying the pathways of crystal growth, researchers could control these processes to synthesize new materials with precise detail. In their research, Sushko and Rosso found that the coordinated motion of ions close to nanoparticle surfaces drives the way nanoparticles arrange into matching crystal shapes and structures. They discovered that ions in solution can direct the rotation of nanoparticles into a matching crystal orientation – mimicking nature's pattern precisely – to produce perfect crystals. The PNNL researchers' discovery provides key fundamental insights into geochemical processes leading to mineral formation, and helps to create complex, hierarchical, single-crystal structures in the lab. It also holds promise for eventually creating innovative materials for consumer electronics, batteries, and more. According to Sushko, their new computational approach creates "a new paradigm in knowledge-based synthesis of highly ordered three-dimensional crystal structures" for a range of practical applications in catalysis and energy-storage technologies. Rosso and Sushko developed a new multi-scale computational model that encompasses the essential forces acting between atoms, molecules, and particles. Their approach spans length scales from an Angstrom to half a micron and is fully transferable to a wide range of systems. The method is rooted deeply in quantum mechanics and provides a parameter-free approach for modeling experimentally relevant systems. Their new computational approach is a major step towards developing a comprehensive theory of particle-based crystallization. Future research will extend the model to include a broader range of macroscopic forces, such as magnetic and electric polarization. The model will also be applied to other materials to gain insight into various crystallization pathways. More information: M. L. Sushko et al.
The origin of facet selectivity and alignment in anatase TiO2 nanoparticles in electrolyte solutions: implications for oriented attachment in metal oxides, Nanoscale (2016). DOI: 10.1039/c6nr06953c


This science illustration demonstrates the collapsing of the balloon-shaped reaction product during the charging of the lithium-oxygen battery. Credit: Environmental Molecular Sciences Laboratory

For the lithium-oxygen battery system, it is well recognized that the charging and discharging reactions produce peculiar reaction products whose shapes resemble doughnuts and balloons. Yet how these shapes form has remained a mystery. A new study of a functioning nano-lithium-oxygen battery at atomic scale in an oxygen atmosphere provides clues for solving this mystery. The discovery of the lithium-oxygen reaction pathway sets the foundation for quantitative modeling of electrochemical processes in the lithium-oxygen system, providing insight into how best to design lithium-oxygen batteries with high capacity and longer cycle life. The lithium-oxygen battery system has been perceived as an enabling technology for the electric-vehicle industry. However, progress in research and development of a lithium-oxygen battery has been severely hampered by two unanswered questions. First, what is the electrochemical reaction route when discharging and charging the battery? Second, what is the relationship between the complicated shapes of the reaction product and the reaction path? Answers to these two questions are fundamental, yet essential for development of lithium-oxygen batteries. To address this knowledge gap, a team of researchers from Pacific Northwest National Laboratory; Tianjin Polytechnic University of China; and EMSL, the Environmental Molecular Sciences Laboratory, used an advanced in-situ imaging technique—the environmental transmission electron microscope—at EMSL, a Department of Energy Office of Science user facility, to observe a nano-lithium-oxygen battery during charging and discharging. They found oxygen reacts with lithium on carbon nanotubes to form a metastable lithium superoxide (LiO2). This oxide transforms into the more stable lithium peroxide (Li2O2) and releases oxygen gas that expands (inflates) particles into a hollow structure, producing doughnut and balloon shapes. This observation more generally demonstrates that the way the released oxygen is accommodated governs the formation of the complicated morphology of the reaction product in a lithium-oxygen battery. The results of this work not only answer the two questions outlined above, but also provide insight into ion and electron transport coupled with mass flow for the lithium-oxygen battery. More information: Langli Luo et al. Revealing the reaction mechanisms of Li–O2 batteries using environmental transmission electron microscopy, Nature Nanotechnology (2017). DOI: 10.1038/NNANO.2017.27
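In equation form, the pathway described above amounts to oxygen reduction to a superoxide followed by disproportionation on discharge, with the reverse reaction on charge:

\[ \mathrm{Li^{+} + e^{-} + O_{2} \rightarrow LiO_{2}}, \qquad \mathrm{2\,LiO_{2} \rightarrow Li_{2}O_{2} + O_{2}} \]

\[ \text{on charging:}\qquad \mathrm{Li_{2}O_{2} \rightarrow 2\,Li^{+} + 2\,e^{-} + O_{2}} \]

The O2 released by the disproportionation step is what inflates the hollow doughnut and balloon shapes; its escape during charging leaves the deflated shells.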


News Article | April 27, 2017
Site: www.greencarcongress.com

Lithium-air batteries are looked to by many as a very high-energy-density, next-generation energy storage solution for electric vehicles. However, the technology has several holdups, including losing energy as it stores and releases its charge. The reaction mechanisms are, in general, not well understood. One reaction that hasn't been fully explained is how oxygen blows bubbles inside a lithium-air battery when it discharges. The bubbles expand the battery and create wear and tear that can cause it to fail. Now, researchers from Pacific Northwest National Laboratory (PNNL) have provided the first step-by-step explanation of how lithium-air batteries form bubbles. The paper is published in the journal Nature Nanotechnology. The research was aided by a first-of-a-kind video that shows bubbles inflating and later deflating inside a nanobattery. Researchers had previously only seen the bubbles, but not how they were created. The performance of a Li–O2 battery depends on a complex interplay between the reaction mechanism at the cathode, the chemical structure and the morphology of the reaction products, and their spatial and temporal evolution; all parameters that, in turn, are dependent on the choice of the electrolyte. In an aprotic cell, for example, the discharge product, Li2O2, forms through a combination of solution and surface chemistries that results in the formation of a baffling toroidal morphology. In a solid electrolyte, neither the reaction mechanism at the cathode nor the nature of the reaction product is known. Here we report the full-cycle reaction pathway for Li–O2 batteries and show how this correlates with the morphology of the reaction products. Using aberration-corrected environmental transmission electron microscopy (TEM) under an oxygen environment, we image the product morphology evolution on a carbon nanotube (CNT) cathode of a working solid-state Li–O2 nanobattery and correlate these features with the electrochemical reaction at the electrode. We find that the oxygen-reduction reaction (ORR) on CNTs initially produces LiO2, which subsequently disproportionates into Li2O2 and O2. The release of O2 creates a hollow nanostructure with a Li2O2 outer-shell surface and a LiO2 inner-shell surface. Our findings show that, in general, the way the released O2 is accommodated is linked to lithium-ion diffusion and electron-transport paths across both spatial and temporal scales; in turn, this interplay governs the morphology of the discharging/charging products in Li–O2 cells. Corresponding author Chongmin Wang works out of EMSL, the Environmental Molecular Sciences Laboratory, a DOE Office of Science user facility located at PNNL. His co-authors include other PNNL staff and a researcher from Tianjin Polytechnic University in China. The team's unique video was captured with an in-situ environmental transmission electron microscope at EMSL. Wang and his colleagues built their tiny battery inside the microscope's column. This enabled them to watch as the battery charged and discharged inside. Video evidence led the team to propose that as the battery discharges, a sphere of lithium superoxide jets out from the battery's positive electrode and becomes coated with lithium oxide. The sphere's superoxide interior then goes through a chemical reaction that forms lithium peroxide and oxygen. Oxygen gas is released and inflates the bubble.
When the battery charges, lithium peroxide decomposes and leaves the former bubble looking like a deflated balloon. This finding was the focus of a Nature News & Views column written by researchers at Korea's Hanyang University, who describe the research as "a solid foundation for future Li-O2 battery designs and optimization." This research was supported by DOE's Office of Energy Efficiency and Renewable Energy.


News Article | April 26, 2017
Site: www.eurekalert.org

RICHLAND, Wash. -- With about three times the energy capacity by weight of today's lithium-ion batteries, lithium-air batteries could one day enable electric cars to drive farther on a single charge. But the technology has several holdups, including losing energy as it stores and releases its charge. If researchers could better understand the basic reactions that occur as the battery charges and discharges, the battery's performance could be improved. One reaction that hasn't been fully explained is how oxygen blows bubbles inside a lithium-air battery when it discharges. The bubbles expand the battery and create wear and tear that can cause it to fail. A new paper in Nature Nanotechnology provides the first step-by-step explanation of how lithium-air batteries form bubbles. The research was aided by a first-of-a-kind video that shows bubbles inflating and later deflating inside a nanobattery. Researchers had previously only seen the bubbles, but not how they were created. "If we fully understand the bubble formation process, we could build better lithium-air batteries that create fewer bubbles," noted the paper's corresponding author, Chongmin Wang, of the Department of Energy's Pacific Northwest National Laboratory. "The result could be more compact and stable batteries that hold onto their charge longer." Wang works out of EMSL, the Environmental Molecular Sciences Laboratory, a DOE Office of Science user facility located at PNNL. His co-authors include other PNNL staff and a researcher from Tianjin Polytechnic University in China. The team's unique video may be a silent black-and-white film, but it provides plenty of action. Popping out from the battery's flat surface is a grey bubble that grows bigger and bigger. Later, the bubble deflates, the top turning inside of itself until only a scrunched-up shell is left behind. The popcorn-worthy flick was captured with an in-situ environmental transmission electron microscope at EMSL. Wang and his colleagues built their tiny battery inside the microscope's column. This enabled them to watch as the battery charged and discharged inside. Video evidence led the team to propose that as the battery discharges, a sphere of lithium superoxide jets out from the battery's positive electrode and becomes coated with lithium oxide. The sphere's superoxide interior then goes through a chemical reaction that forms lithium peroxide and oxygen. Oxygen gas is also released and inflates the bubble. When the battery charges, lithium peroxide decomposes and leaves the former bubble looking like a deflated balloon. This finding was the focus of a Nature News & Views column written by researchers at Korea's Hanyang University, who describe the research as "a solid foundation for future Li-O2 battery designs and optimization." This research was supported by DOE's Office of Energy Efficiency and Renewable Energy. PAPER: Langli Luo, Bin Liu, Shidong Song, Wu Xu, Ji-Guang Zhang, Chongmin Wang, "Revealing the reaction mechanisms of Li-O2 batteries using environmental transmission electron microscopy," Nature Nanotechnology, March 27, 2017, doi:10.1038/nnano.2017.27. Interdisciplinary teams at Pacific Northwest National Laboratory address many of America's most pressing issues in energy, the environment and national security through advances in basic and applied science. Founded in 1965, PNNL employs 4,400 staff and has an annual budget of nearly $1 billion. It is managed by Battelle for the U.S. Department of Energy's Office of Science.
As the single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time. For more information on PNNL, visit the PNNL News Center, or follow PNNL on Facebook, Google+, Instagram, LinkedIn and Twitter. EMSL, the Environmental Molecular Sciences Laboratory, is a DOE Office of Science User Facility. Located at Pacific Northwest National Laboratory in Richland, Wash., EMSL offers an open, collaborative environment for scientific discovery to researchers around the world. Its integrated computational and experimental resources enable researchers to realize important scientific insights and create new technologies. Follow EMSL on Facebook, LinkedIn and Twitter.


News Article | April 28, 2017
Site: www.cemag.us

With about three times the energy capacity by weight of today's lithium-ion batteries, lithium-air batteries could one day enable electric cars to drive farther on a single charge. But the technology has several holdups, including losing energy as it stores and releases its charge. If researchers could better understand the basic reactions that occur as the battery charges and discharges electricity, the battery's performance could be improved. One reaction that hasn't been fully explained is how oxygen blows bubbles inside a lithium-air battery when it discharges. The bubbles expand the battery and create wear and tear that can cause it to fail. A paper in Nature Nanotechnology provides the first step-by-step explanation of how lithium-air batteries form bubbles. The research was aided by a first-of-a-kind video that shows bubbles inflating and later deflating inside a nanobattery. Researchers had previously only seen the bubbles, but not how they were created. "If we fully understand the bubble formation process, we could build better lithium-air batteries that create fewer bubbles," notes the paper's corresponding author, Chongmin Wang, of the Department of Energy's Pacific Northwest National Laboratory. "The result could be more compact and stable batteries that hold onto their charge longer." Wang works out of EMSL, the Environmental Molecular Sciences Laboratory, a DOE Office of Science user facility located at PNNL. His co-authors include other PNNL staff and a researcher from Tianjin Polytechnic University in China. The team's unique video may be a silent black-and-white film, but it provides plenty of action. Popping out from the battery's flat surface is a grey bubble that grows bigger and bigger. Later, the bubble deflates, the top turning inside of itself until only a scrunched-up shell is left behind. The popcorn-worthy flick was captured with an in-situ environmental transmission electron microscope at EMSL. Wang and his colleagues built their tiny battery inside the microscope's column. This enabled them to watch as the battery charged and discharged inside. Video evidence led the team to propose that as the battery discharges, a sphere of lithium superoxide jets out from the battery's positive electrode and becomes coated with lithium oxide. The sphere's superoxide interior then goes through a chemical reaction that forms lithium peroxide and oxygen. Oxygen gas is released and inflates the bubble.  When the battery charges, lithium peroxide decomposes, and leaves the former bubble to look like a deflated balloon. This finding was the focus of a Nature News & Views column written by researchers at Korea's Hanyang University, who describe the research as "a solid foundation for future Li-O2 battery designs and optimization." This research was supported by DOE's Office of Energy Efficiency and Renewable Energy.


Roehling J.D.,University of California at Davis | Batenburg K.J.,Centrum Wiskunde and Informatica | Batenburg K.J.,University of Antwerp | Swain F.B.,Luna Innovations, Inc. | And 2 more authors.
Advanced Functional Materials | Year: 2013

The three-dimensional morphology of mixed organic layers is quantitatively measured using high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) with electron tomography for the first time. The mixed organic layers used for organic photovoltaic applications have not previously been imaged using STEM tomography because there is insufficient contrast between donor and acceptor components. Contrast is generated by substituting fullerenes with endohedral fullerenes that contain a Lu3N cluster within the fullerene cage. The high contrast and signal-to-noise ratio, in combination with use of the discrete algebraic reconstruction technique (DART), allows generation of the most detailed and accurate three-dimensional map of bulk heterojunction (BHJ) morphology to date. From the STEM-tomography reconstructions it is determined that three distinct material phases are present within the BHJs. By observing changes to morphology and mixing ratio during thermal and solvent annealing, the effects of mutual solubility and fullerene crystallization on morphology and long-term stability are determined. This material/technique combination proves to be a powerful tool for examining morphology in detail and allows for observation of nanoscopic changes in local concentration. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Zhang X.,Pacific Northwest National Laboratory | Srinivasan R.,Texas A&M University
Hydrological Processes | Year: 2010

With the availability of spatially distributed data, distributed hydrologic models are increasingly used for simulation of spatially varied hydrologic processes to understand and manage natural and human activities that affect watershed systems. Multi-objective optimization methods have been applied to calibrate distributed hydrologic models using observed data from multiple sites. As the time consumed by running these complex models increases substantially, selecting efficient and effective multi-objective optimization algorithms is becoming a nontrivial issue. In this study, we evaluated a multi-algorithm, genetically adaptive multi-objective method (AMALGAM) for multi-site calibration of a distributed hydrologic model, the Soil and Water Assessment Tool (SWAT), and compared its performance with two widely used evolutionary multi-objective optimization (EMO) algorithms: the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II). In order to provide insights into each method's overall performance, these three methods were tested in four watersheds with various characteristics. The test results indicate that AMALGAM can consistently provide competitive or superior results compared with the other two methods. The multi-method search framework of AMALGAM, which can flexibly and adaptively utilize multiple optimization algorithms, makes it a promising tool for multi-site calibration of the distributed SWAT. For practical use of AMALGAM, it is suggested to run this method in multiple trials with a relatively small number of model runs rather than running it once with long iterations. In addition, incorporating different multi-objective optimization algorithms and multi-mode search operators into AMALGAM deserves further research. Copyright © 2009 John Wiley & Sons, Ltd.
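To make the setup concrete, here is a minimal sketch of the multi-site objective evaluation and the Pareto-dominance test that all three algorithms build on; the simulate function and per-site observations are placeholders, not SWAT or AMALGAM itself.

    import numpy as np

    def site_objectives(params, observations, simulate):
        # One error value per gauging site (RMSE here; lower is better).
        return np.array([
            np.sqrt(np.mean((simulate(params, site) - obs) ** 2))
            for site, obs in observations.items()
        ])

    def dominates(fa, fb):
        # fa Pareto-dominates fb: no worse at every site, strictly better at one.
        return bool(np.all(fa <= fb) and np.any(fa < fb))

    # Toy usage with a fake one-parameter "model" and two gauge sites.
    obs = {"siteA": np.array([1.0, 2.0]), "siteB": np.array([2.0, 3.0])}
    sim = lambda p, site: np.full(2, p["scale"])
    print(dominates(site_objectives({"scale": 1.6}, obs, sim),
                    site_objectives({"scale": 5.0}, obs, sim)))  # True

An EMO algorithm keeps the set of parameter vectors that no other sampled vector dominates (the Pareto front) rather than collapsing the per-site errors into a single weighted score.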


Lu J.,University of California at Davis | Aydin C.,University of California at Davis | Browning N.D.,University of California at Davis | Browning N.D.,Pacific Northwest National Laboratory | Gates B.C.,University of California at Davis
Angewandte Chemie - International Edition | Year: 2012

As good as atomic gold: Aberration-corrected scanning transmission electron microscopy images of zeolite NaY-supported mononuclear gold complexes, obtained with atomic resolution of the gold atoms, showed the locations of the gold complexes in the zeolite framework (see picture) and identified them as the catalytically active species for CO oxidation at 298 K and 1 bar. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Aydin C.,University of California at Davis | Lu J.,University of California at Davis | Browning N.D.,University of California at Davis | Browning N.D.,Pacific Northwest National Laboratory | Gates B.C.,University of California at Davis
Angewandte Chemie - International Edition | Year: 2012

Like billiard balls: Atomic-scale observations by electron microscopy of supported iridium nanoclusters show that the nanoclusters aggregate to reach a critical diameter of approximately 1 nm and then resist further aggregation (see picture). The observations highlight the potential for this catalyst to assemble into clusters that may be nearly optimum for catalytic activity. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Zhou Y.,Pacific Northwest National Laboratory | Eom J.,Sogang University | Clarke L.,Pacific Northwest National Laboratory
Climatic Change | Year: 2013

Climate change will affect the energy system in a number of ways, one of which is through changes in demands for heating and cooling in buildings. Understanding the potential effect of climate change on heating and cooling demands requires taking into account not only the manner in which the building sector might evolve over time, but also important uncertainty about the nature of climate change itself. In this study, we explore the uncertainty in climate change impacts on heating and cooling requirements by constructing estimates of heating and cooling degree days (HDDs/CDDs) for both reference (no-policy) and 550 ppmv CO2 concentration pathways, built from the output of three different Global Climate Models (GCMs) and three scenarios of gridded population distribution. The implications that changing climate and population distribution might have for building energy consumption in the U.S. and China are then explored by using the resulting HDDs/CDDs as inputs to a detailed building energy model, nested in the long-term global integrated assessment framework, the Global Change Assessment Model (GCAM). The results across the modeled changes in climate and population distributions indicate that unabated climate change would cause the building sector's final energy consumption to decrease modestly (a 6% decrease or less, depending on the climate model) in both the U.S. and China by the end of the century, as decreased heating consumption more than offsets increased cooling, which uses primarily electricity. However, global climate change has a virtually negligible effect on total CO2 emissions in the buildings sector in both countries. The results also indicate more substantial implications for the fuel mix, with increases in electricity and decreases in other fuels, which may be consistent with climate mitigation goals. The variation in results across all scenarios due to variation of population distribution is smaller than the variation due to the use of different climate models. © 2013 Springer Science+Business Media Dordrecht.
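As an illustration of the degree-day inputs used above, a minimal sketch; the 18 °C (65 °F) base temperature is the conventional choice, assumed here rather than taken from the paper.

    def degree_days(daily_mean_temps_c, base_c=18.0):
        # Accumulate heating and cooling degree days over daily mean temps.
        hdd = sum(max(base_c - t, 0.0) for t in daily_mean_temps_c)
        cdd = sum(max(t - base_c, 0.0) for t in daily_mean_temps_c)
        return hdd, cdd

    # A cool spell followed by a hot one: both demands show up.
    hdd, cdd = degree_days([10.2, 12.5, 15.0, 21.3, 26.8, 29.1, 30.4])
    print(f"HDD = {hdd:.1f}, CDD = {cdd:.1f}")  # HDD = 16.3, CDD = 35.6

In the study's setup, grids of such daily values from each GCM pathway are aggregated (weighted by where people live) before being fed to the building energy model.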


Zheng J.,Pacific Northwest National Laboratory
Electrochimica Acta | Year: 2013

High-capacity cathode materials Li[Li0.2Mn0.54Ni0.13Co0.13]O2-xFx (x = 0, 0.05 and 0.10) have been synthesized by a sol-gel method using NH4F as the F source. The effects of fluorine content on the structure, morphology and electrochemical performance of Li[Li0.2Mn0.54Ni0.13Co0.13]O2-xFx have been extensively studied. With fluorine doping, the cycling stability of Li[Li0.2Mn0.54Ni0.13Co0.13]O2-xFx is significantly improved because of the stabilization of the host structure. Li[Li0.2Mn0.54Ni0.13Co0.13]O1.95F0.05 shows a capacity retention of 88.1% after 50 cycles at 0.2 C at room temperature, much higher than the 72.4% for the pristine material. The improvement mechanism of fluorine doping has been investigated by electrochemical impedance spectroscopy (EIS) and X-ray photoelectron spectroscopy (XPS). The results demonstrate that fluorine incorporation stabilizes the electrode/electrolyte interface by suppressing the formation of poorly conducting LiF in the SEI layer and thus maintains stable interfacial resistances. Compared with the enhanced material structure, the stabilized electrode/electrolyte interface is the primary factor contributing to the improved electrochemical performance. In addition, the thermal stability of the fully delithiated electrode is also greatly improved by fluorine doping. © 2013 Elsevier Ltd. All rights reserved.


Woehl T.J.,University of California at Davis | Woehl T.J.,Ames Laboratory | Park C.,Florida State University | Evans J.E.,Environmental and Molecular science | And 3 more authors.
Nano Letters | Year: 2014

Direct observations of solution-phase nanoparticle growth using in situ liquid transmission electron microscopy (TEM) have demonstrated the importance of "non-classical" growth mechanisms, such as aggregation and coalescence, for the growth and final morphology of nanocrystals at the atomic and single-nanoparticle scales. To date, groups have quantitatively interpreted the mean growth rate of nanoparticles in terms of the Lifshitz-Slyozov-Wagner (LSW) model for Ostwald ripening, but less attention has been paid to modeling the corresponding particle size distribution. Here we use in situ fluid stage scanning TEM to demonstrate that silver nanoparticles grow by a length-scale dependent mechanism, where individual nanoparticles grow by monomer attachment but ensemble-scale growth is dominated by aggregation. Although our observed mean nanoparticle growth rate is consistent with the LSW model, we show that the corresponding particle size distribution is broader and more symmetric than predicted by LSW. Following direct observations of aggregation, we interpret the ensemble-scale growth using Smoluchowski kinetics and demonstrate that the Smoluchowski model quantitatively captures the mean growth rate and particle size distribution. © 2013 American Chemical Society.
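For readers comparing the two models named here: LSW coarsening predicts cube-law growth of the mean radius, while Smoluchowski aggregation evolves the full cluster-size distribution through pairwise coalescence (standard forms of both, not equations specific to this paper):

\[ \langle r(t) \rangle^{3} - \langle r(0) \rangle^{3} = K t \]

\[ \frac{dn_{k}}{dt} = \frac{1}{2}\sum_{i+j=k} K_{ij}\,n_{i}\,n_{j} \;-\; n_{k}\sum_{j\geq 1} K_{kj}\,n_{j} \]

where \(n_{k}\) is the number density of k-mer aggregates and \(K_{ij}\) the collision kernel; the first (gain) term counts coalescence events that create a k-mer and the second (loss) term those that consume one. Both mechanisms can yield the same mean growth exponent, which is why the size distribution, not the mean, discriminates between them.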


Zheng J.,Pacific Northwest National Laboratory | Gu M.,Pacific Northwest National Laboratory | Genc A.,FEI Company | Xiao J.,Pacific Northwest National Laboratory | And 7 more authors.
Nano Letters | Year: 2014

Lithium- and manganese-rich (LMR) layered-structure materials are very promising cathodes for high energy density lithium-ion batteries. However, their voltage fading mechanism and its relationships with fundamental structural changes are far from being well understood. Here we report for the first time the mitigation of voltage and energy fade of LMR cathodes by improving the atomic-level spatial uniformity of the chemical species. The results reveal that LMR cathodes (Li[Li0.2Ni0.2Mn0.6]O2) prepared by coprecipitation and sol-gel methods, which are dominated by a LiMO2-type R3̄m structure, show significantly nonuniform Ni distribution at particle surfaces. In contrast, the LMR cathode prepared by a hydrothermal-assisted method is dominated by a Li2MO3-type C2/m structure with minimal Ni-rich surfaces. The samples with uniform atomic-level spatial distribution demonstrate much better capacity retention and much smaller voltage fade as compared to those with significantly nonuniform Ni distribution. The fundamental findings on the direct correlation between the atomic-level spatial distribution of the chemical species and the functional stability of the materials may also guide the design of other energy storage materials with enhanced stabilities. © 2014 American Chemical Society.


Smith S.J.,Pacific Northwest National Laboratory | Van Aardenne J.,European Commission - Joint Research Center Ispra | Van Aardenne J.,European Environment Agency | Klimont Z.,International Institute For Applied Systems Analysis | And 3 more authors.
Atmospheric Chemistry and Physics | Year: 2011

Sulfur aerosols impact human health, ecosystems, agriculture, and global and regional climate. A new annual estimate of anthropogenic global and regional sulfur dioxide emissions has been constructed spanning the period 1850-2005 using a bottom-up mass balance method, calibrated to country-level inventory data. Global emissions peaked in the early 1970s and decreased until 2000, with an increase in recent years due to increased emissions in China, international shipping, and developing countries in general. An uncertainty analysis was conducted including both random and systemic uncertainties. The overall global uncertainty in sulfur dioxide emissions is relatively small, but regional uncertainties ranged up to 30%. The largest contributors to uncertainty at present are emissions from China and international shipping. Emissions were distributed on a 0.5° grid by sector for use in coordinated climate model experiments. © 2011 Author(s).
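Schematically, a bottom-up mass balance of this kind takes the form below (a generic representation for orientation, not the paper's exact notation):

\[ E_{\mathrm{SO_2}} = \frac{64}{32}\sum_{f,\,c} A_{f,c}\;s_{f,c}\;\bigl(1 - r_{f,c}\bigr) \]

where, for each fuel or process f in country c, \(A\) is the activity (fuel consumed or material processed), \(s\) the sulfur content, \(r\) the fraction of sulfur retained in ash or removed by controls, and 64/32 converts sulfur mass to SO2 mass; the resulting totals are then calibrated to country-level inventories where those exist.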


Qiu J.,Purdue University | Sheedlo M.J.,Purdue University | Yu K.,Peking University | Tan Y.,Purdue University | And 5 more authors.
Nature | Year: 2016

Signalling by ubiquitination regulates virtually every cellular process in eukaryotes. Covalent attachment of ubiquitin to a substrate is catalysed by the E1, E2 and E3 three-enzyme cascade, which links the carboxy terminus of ubiquitin to the ϵ-amino group of, in most cases, a lysine of the substrate via an isopeptide bond. Given the essential roles of ubiquitination in the regulation of the immune system, it is not surprising that the ubiquitination network is a common target for diverse infectious agents. For example, many bacterial pathogens exploit ubiquitin signalling using virulence factors that function as E3 ligases, deubiquitinases or as enzymes that directly attack ubiquitin. The bacterial pathogen Legionella pneumophila utilizes approximately 300 effectors that modulate diverse host processes to create a permissive niche for its replication in phagocytes. Here we demonstrate that members of the SidE effector family of L. pneumophila ubiquitinate multiple Rab small GTPases associated with the endoplasmic reticulum. Moreover, we show that these proteins are capable of catalysing ubiquitination without the need for the E1 and E2 enzymes. A putative mono-ADP-ribosyltransferase motif critical for the ubiquitination activity is also essential for the role of the SidE family in intracellular bacterial replication in a protozoan host. The E1/E2-independent ubiquitination catalysed by these enzymes is energized by nicotinamide adenine dinucleotide, which activates ubiquitin by the formation of ADP-ribosylated ubiquitin. These results establish that ubiquitination can be catalysed by a single enzyme, the activity of which does not require ATP. © 2016 Macmillan Publishers Limited. All rights reserved.


Woolf D.,University of Swansea | Amonette J.E.,Pacific Northwest National Laboratory | Street-Perrott F.A.,University of Swansea | Lehmann J.,Cornell University | Joseph S.,University of New South Wales
Nature Communications | Year: 2010

Production of biochar (the carbon (C)-rich solid formed by pyrolysis of biomass) and its storage in soils have been suggested as a means of abating climate change by sequestering carbon, while simultaneously providing energy and increasing crop yields. Substantial uncertainties exist, however, regarding the impact, capacity and sustainability of biochar at the global level. In this paper we estimate the maximum sustainable technical potential of biochar to mitigate climate change. Annual net emissions of carbon dioxide (CO2), methane and nitrous oxide could be reduced by a maximum of 1.8 Pg CO2-C equivalent (CO2-Ce) per year (12% of current anthropogenic CO2-Ce emissions; 1 Pg = 1 Gt), and total net emissions over the course of a century by 130 Pg CO2-Ce, without endangering food security, habitat or soil conservation. Biochar has a larger climate-change mitigation potential than combustion of the same sustainably procured biomass for bioenergy, except when fertile soils are amended while coal is the fuel being offset. © 2010 Macmillan Publishers Limited. All rights reserved.


Townsend T.K.,University of California at Davis | Browning N.D.,University of California at Davis | Browning N.D.,Pacific Northwest National Laboratory | Osterloh F.E.,University of California at Davis
Energy and Environmental Science | Year: 2012

NiOx (0 < x < 1) modified SrTiO3 (STO) is one of the best studied photocatalysts for overall water splitting under UV light. The established mechanism for this and many other NiOx-containing catalysts assumes water oxidation to occur at the early transition metal oxide and water reduction at NiOx. Here we show that NiOx-STO is more likely a three-component Ni-STO-NiO catalyst, in which STO absorbs the light, Ni reduces protons, and NiO oxidizes water. This interpretation is based on systematic H2/O2 evolution tests of appropriately varied catalyst compositions using oxidized, chemically and photochemically added nickel and NiO nanoparticle cocatalysts. Surface photovoltage (SPV) measurements reveal that Ni(0) serves as an electron trap (site for water reduction) and that NiO serves as a hole trap (site for water oxidation). Electrochemical measurements show that the overpotential for water oxidation correlates with the NiO content, whereas the water reduction overpotential depends on the Ni content. Photodeposition experiments with NiCl2 and H2PtCl6 on NiO-STO show that electrons are available on the STO surface, not on the NiO particles. Based on photoelectrochemistry, both NiO and Ni particles suppress the Fermi level in STO, but the effect of this shift on catalytic activity is not clear. Overall, the results suggest a revised role of NiO in NiOx-STO and in many other nickel-containing water splitting systems, including NiOx-La:KTaO3, and many layered perovskites. © 2012 The Royal Society of Chemistry.
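The revised three-component picture corresponds to the two water-splitting half-reactions being spatially separated on the particle (standard half-reactions, with the site assignments argued in the abstract):

\[ \text{at Ni:}\quad \mathrm{2\,H^{+} + 2\,e^{-} \rightarrow H_{2}} \qquad\qquad \text{at NiO:}\quad \mathrm{2\,H_{2}O \rightarrow O_{2} + 4\,H^{+} + 4\,e^{-}} \]

with SrTiO3 absorbing the UV light and delivering the photogenerated electrons and holes to the Ni and NiO sites, respectively.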


Li W.,University of Southern California | Lin G.,Pacific Northwest National Laboratory | Zhang D.,Peking University
Journal of Computational Physics | Year: 2014

The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depends on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms. © 2013 Elsevier Inc.
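In symbols, PCKF replaces the EnKF ensemble with a truncated polynomial chaos expansion, and the adaptive scheme builds that expansion on low-dimensional ANOVA components (standard forms of both expansions):

\[ u(\boldsymbol{\xi}) \;\approx\; \sum_{i=0}^{P} c_{i}\,\Psi_{i}(\boldsymbol{\xi}) \]

\[ f(x_{1},\dots,x_{d}) \;=\; f_{0} \;+\; \sum_{i} f_{i}(x_{i}) \;+\; \sum_{i<j} f_{ij}(x_{i},x_{j}) \;+\; \cdots \]

where the \(\Psi_{i}\) are orthogonal polynomials in the random inputs \(\boldsymbol{\xi}\) and the coefficients \(c_{i}\) carry the uncertainty propagated through the filter. Expanding only the retained low-order ANOVA terms, rather than the full d-dimensional model, keeps the number of basis functions manageable as the random dimension grows.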


Li Z.,Beijing Normal University | Li Z.,Nanjing University of Information Science and Technology | Li Z.,University of Maryland University College | Niu F.,University of Maryland University College | And 4 more authors.
Nature Geoscience | Year: 2011

Aerosols alter cloud density and the radiative balance of the atmosphere. This leads to changes in cloud microphysics and atmospheric stability, which can either suppress or foster the development of clouds and precipitation. The net effect is largely unknown, but depends on meteorological conditions and aerosol properties. Here, we examine the long-term impact of aerosols on the vertical development of clouds and rainfall frequencies, using a 10-year dataset of aerosol, cloud and meteorological variables collected in the Southern Great Plains in the United States. We show that cloud-top height and thickness increase with aerosol concentration measured near the ground in mixed-phase clouds – which contain both liquid water and ice – that have a warm, low base. We attribute the effect, which is most significant in summer, to an aerosol-induced invigoration of upward winds. In contrast, we find no change in cloud-top height and precipitation with aerosol concentration in clouds with no ice or cool bases. We further show that precipitation frequency and rain rate are altered by aerosols. Rain increases with aerosol concentration in deep clouds that have a high liquid-water content, but declines in clouds that have a low liquid-water content. Simulations using a cloud-resolving model confirm these observations. Our findings provide unprecedented insights into the long-term net impacts of aerosols on clouds and precipitation. © 2011 Macmillan Publishers Limited. All rights reserved.


Broeer T.,University of Victoria | Fuller J.,Pacific Northwest National Laboratory | Tuffner F.,Pacific Northwest National Laboratory | Chassin D.,Pacific Northwest National Laboratory | Djilali N.,University of Victoria
Applied Energy | Year: 2014

Electricity generation from wind power and other renewable energy sources is increasing, and their variability introduces new challenges to the power system. The emergence of smart grid technologies in recent years has prompted a paradigm shift in how the electrical system of the future is defined, with controlled response of the demand side used to balance fluctuations and intermittency on the generation side. This paper presents a modeling framework for an integrated electricity system in which loads become an additional resource. Agent-based modeling is used to represent a smart grid power system integrating generators, transmission, distribution, loads and a market. The model incorporates generator and load controllers, allowing suppliers and consumers to bid into a Real-Time Pricing (RTP) electricity market. The modeling framework is applied to represent a physical demonstration project conducted on the Olympic Peninsula, Washington, USA, and validation simulations are performed using actual dynamic data. Wind power is then introduced into the generation mix, illustrating the potential of demand response to mitigate the impact of wind power variability, primarily through thermostatically controlled loads. The results indicate that effective implementation of Demand Response (DR) to assist integration of variable renewable energy resources requires a diversity of loads to ensure functionality of the overall system. © 2013 Elsevier Ltd.
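The abstract does not reproduce the controller code, but the mechanism it describes (thermostatic loads translating comfort deviation into bids that clear against supply offers in a real-time market) can be sketched in a few lines. Everything below, from function names to parameter values, is an illustrative assumption rather than the paper's GridLAB-D implementation.

import numpy as np

def load_bid(T_air, T_set, T_band, p_mean, p_std, k=2.0):
    # Map comfort deviation to a bid price: the farther the room is above
    # the cooling setpoint, the more the load is willing to pay to run.
    dev = (T_air - T_set) / (T_band / 2.0)
    return p_mean + k * dev * p_std

def clear_market(bids, offers):
    # Uniform-price double auction: sort demand bids descending and supply
    # offers ascending; trade every pair where the bid meets the offer.
    bids, offers = sorted(bids, reverse=True), sorted(offers)
    q = 0
    while q < min(len(bids), len(offers)) and bids[q] >= offers[q]:
        q += 1
    return (None, 0) if q == 0 else (0.5 * (bids[q - 1] + offers[q - 1]), q)

# Example: 50 cooling loads at varying room temperatures bid against
# 50 generator offers (all prices in $/MWh, all values hypothetical).
rng = np.random.default_rng(1)
temps = rng.normal(23.0, 1.0, 50)
bids = [load_bid(T, T_set=22.0, T_band=2.0, p_mean=50.0, p_std=10.0) for T in temps]
offers = list(rng.normal(50.0, 10.0, 50))
price, quantity = clear_market(bids, offers)
if price is not None:
    print(f"cleared {quantity} unit loads at about {price:.1f} $/MWh")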


Woehl T.J.,University of California at Davis | Evans J.E.,Pacific Northwest National Laboratory | Arslan I.,Pacific Northwest National Laboratory | Ristenpart W.D.,University of California at Davis | And 2 more authors.
ACS Nano | Year: 2012

Although nanocrystal morphology is controllable using conventional colloidal synthesis, multiple characterization techniques are typically needed to determine key properties such as the nucleation rate, induction time, growth rate, and resulting morphology. Recently, researchers have demonstrated growth of nanocrystals by in situ electron beam reduction, offering direct observations of single nanocrystals and eliminating the need for multiple characterization techniques; however, they found nanocrystal morphologies consistent with two different growth mechanisms for the same electron beam parameters. Here we show that the electron beam current plays a role analogous to the concentration of reducing agent in conventional synthesis, controlling the growth mechanism and final morphology of silver nanocrystals grown via in situ electron beam reduction. We demonstrate that low beam currents encourage reaction-limited growth that yields faceted nanocrystals, while higher beam currents encourage diffusion-limited growth that yields spherical nanocrystals. By isolating these two growth regimes, we demonstrate a new level of control over nanocrystal morphology, regulated by the fundamental growth mechanism. We find that the induction threshold dose for nucleation is independent of the beam current, pixel dwell time, and magnification used. Our results indicate that in situ electron microscopy data can be interpreted with classical models and that systematic dose experiments should be performed for all future in situ liquid studies to confirm the exact mechanisms underlying observations of nucleation and growth. © 2012 American Chemical Society.
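The mechanistic assignment rests on the growth exponent extracted from radius-versus-time tracks of individual nanocrystals. A minimal sketch of that fit follows; the synthetic data, the power-law form r = A t^beta, and the classification threshold (motivated by the classical LSW coarsening exponents of roughly 1/3 for diffusion-limited and 1/2 for reaction-limited growth) are illustrative assumptions, not the paper's measured values.

import numpy as np

# Hypothetical radius-vs-time track for one nanocrystal (arbitrary units);
# in practice these values come from frame-by-frame analysis of the movies.
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
r = 2.0 * t ** 0.5 + np.random.default_rng(0).normal(0.0, 0.05, t.size)

# Fit r = A * t^beta in log-log space; the exponent beta is the
# mechanistic fingerprint of the growth regime.
beta, log_A = np.polyfit(np.log(t), np.log(r), 1)

# Threshold set halfway between the assumed regime exponents (1/3 and 1/2).
mechanism = "reaction-limited" if beta > 0.42 else "diffusion-limited"
print(f"beta = {beta:.2f} -> {mechanism}")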


Townsend T.K.,University of California at Davis | Browning N.D.,Pacific Northwest National Laboratory | Osterloh F.E.,University of California at Davis
ACS Nano | Year: 2012

SrTiO3 (STO) is a large-band-gap (3.2 eV) semiconductor that catalyzes the overall water-splitting reaction under UV light irradiation in the presence of a NiO cocatalyst. As we show here, the reactivity persists in nanoscale particles of the material, although the process is less effective at the nanoscale. To reach these conclusions, bulk STO, 30 ± 5 nm STO, and 6.5 ± 1 nm STO were synthesized by three different methods, their crystal structures verified with XRD and their morphology observed with HRTEM before and after NiO deposition. In combination with NiO, all samples split water into stoichiometric mixtures of H2 and O2, but the activity decreases from 28 μmol H2 g−1 h−1 (bulk STO) to 19.4 μmol H2 g−1 h−1 (30 nm STO) and 3.0 μmol H2 g−1 h−1 (6.5 nm STO). The reasons for this decrease are an increase of the water oxidation overpotential for the smaller particles and reduced light absorption due to a quantum size effect. Overall, these findings establish the first nanoscale titanate photocatalyst for overall water splitting. © 2012 American Chemical Society.
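For readers who prefer volumetric units, the reported gravimetric rates convert directly; the sketch below assumes the ideal-gas molar volume at STP (22.414 L/mol) and is only a sanity-check calculation, not part of the paper.

MOLAR_VOLUME_STP_L = 22.414  # L/mol, ideal gas at STP (assumption)

for label, rate_umol in [("bulk STO", 28.0), ("30 nm STO", 19.4), ("6.5 nm STO", 3.0)]:
    # umol/g/h -> mol/g/h -> L/g/h -> mL/g/h
    ml_per_g_h = rate_umol * 1e-6 * MOLAR_VOLUME_STP_L * 1e3
    print(f"{label}: {rate_umol} umol H2/(g h) = {ml_per_g_h:.3f} mL H2/(g h)")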


News Article | November 11, 2015
Site: www.greencarcongress.com

Researchers at the US Department of Energy's (DOE's) Pacific Northwest National Laboratory (PNNL) have presented a new and more complete view of the way a cyanobacterium—Cyanothece 51142—produces hydrogen. Using genome-scale transcript and protein profiling, the team presented and tested a new hypothesis on the metabolic relationship between oxygenic photosynthesis and nitrogenase-mediated H2 production in Cyanothece 51142. The results, reported in an open-access paper in Nature's Scientific Reports, show that net-positive rates of oxygenic photosynthesis and increased expression of photosystem II reaction centers correspond and are synchronized with nitrogenase expression and H2 production. Photobiological H2 production is still a nascent technology with long-term potential for sustainable energy production with a low environmental impact. Although direct biophotolytic H2 production has been documented and studied for decades, significant challenges remain for the development of microbial strains and conditions that can directly and efficiently use sunlight and water to produce H2. Chief among them is the low production rate, which is largely due to feedback inhibition of the H2-producing enzymes by O2, an obligate byproduct of oxygenic photosynthesis. Limitations imposed by the O2 sensitivity of the native hydrogenase and nitrogenase enzymes have motivated significant efforts to identify and even engineer O2-tolerant variants and multi-stage processes that temporally separate O2 and H2 evolution. However, to date, the kinetic rates and sustainability of hydrogenase-mediated H2 production are low in comparison to those reported for some diazotrophic organisms that produce H2 in oxic environments as a byproduct of nitrogenase-catalyzed N2 fixation. Nitrogen-fixing cyanobacteria have been recognized as one of the most promising photolytic platforms for sustainable H2 production. The unicellular marine strain Cyanothece sp. ATCC 51142 (hereafter Cyanothece 51142) has emerged as a model system because of its ability to produce H2 at rates > 100 μmol H2 hr−1 mg Chl−1 under photosynthetic conditions associated with continuous illumination. … The current and prevailing view assumes that H2 production mediated by energetically expensive nitrogenase activity in Cyanothece 51142, and other closely related strains, is exclusively supported by ATP and reductant derived from oxidation of intracellular glycogen and/or cyclic electron flow around photosystem (PS) I. Here we present evidence to support a new model whereby energy derived directly from oxygenic photosynthesis (i.e., linear electron flow through PS II) is an important process in funding the energy budget required for nitrogenase activity under illuminated, nitrogen-deplete conditions. PNNL scientists found that the organism taps into an unexpected source of energy to create hydrogen. Researchers have known that 51142 makes hydrogen by drawing upon sugars that it has stored during growth. In this study, PNNL researchers found that the organism also draws on a second source of energy, using sunlight and water directly to make hydrogen. Organisms such as cyanobacteria made life on the planet possible by producing the oxygen for the atmosphere 2.3 billion years ago. They also convert the abundant nitrogen in the atmosphere to a form that is essential for all plant life on the planet.
Many of these organisms are equipped with an enzyme called nitrogenase to convert inert atmospheric nitrogen to more usable forms for plants and other organisms. For a long time, scientists have known that nitrogenase produces small quantities of molecular hydrogen as a byproduct. When nitrogen is not available, the organism produces hydrogen. The team set up Cyanothece 51142 in a bioreactor, limited the supply of nitrogen, and kept the lights on 24 hours a day for several weeks. The team used an array of high-tech equipment to yield sophisticated minute-by-minute profiles of the organism as it converted light energy to hydrogen. Scientists conducted many of their analyses using capabilities at EMSL, the Environmental Molecular Sciences Laboratory, a DOE user facility at PNNL, to “interrogate” the genes and proteins of the organism as they changed while the reactions occurred. The team conducted a “multi-omics experiment,” studying the genomics, transcriptomics and proteomics of the organism's activity, as well as its reaction kinetics. The scientists scrutinized 5,303 genes and 1,360 proteins at eight separate times over the course of 48 hours as the bacteria, with limited nitrogen supply, switched on the activity of the nitrogenase protein. The scientists found that in addition to drawing upon its previously stored energy, the organism captures light and uses that energy to split water to create hydrogen in real time. As one component of the organism is creating energy by collecting light energy, another part is using that energy simultaneously to create hydrogen. In a paper published in 2012 in mBio, Alex Beliaev, one of two scientists at the Department of Energy's Pacific Northwest National Laboratory who led the research, and colleagues raised questions about how the microbe drew upon the energy required to produce hydrogen. In the new paper, the molecular signals the team studied show that photosynthesis and the hydrogen production by nitrogenase happen hand in hand in a coordinated manner. The team includes 11 researchers from PNNL. Beliaev began the project seven years ago as part of hydrogen production research related to biofuels, and Bernstein picked it up when he joined PNNL two years ago. The work was funded by the Department of Energy Office of Science (Biological and Environmental Research) and by PNNL's Laboratory Directed Research and Development Program, which funds the Linus Pauling Distinguished Postdoctoral Fellowship Program.


News Article | January 5, 2016
Site: www.greentechmedia.com

Over the past year, the Department of Energy has been putting an increased focus on technology to integrate renewable energy into everyday grid operations. This work has included opening loan guarantees to distributed renewables, grants to support field tests of distributed energy integration, and future funding plans to support the development of grid modernization technologies. Now the DOE’s blue-sky research agency, ARPA-E, is getting into the action -- or, as befits its mission, ahead of the action. Last month, ARPA-E announced $33 million in grants for its Network Optimized Distributed Energy Systems (NODES) program, meant to help 12 university, corporate and DOE laboratory projects that are trying to turn grid-edge assets into networked “virtual storage” systems. These projects are meant to “enable real-time coordination between distributed generation, such as rooftop and community solar assets, and bulk power generation, while proactively shaping electric load.” That could allow utilities to manage greater than 50 percent renewable penetration on the grid, “by developing transformational grid control methods that optimize use of flexible load and distributed energy resources.” This is not a unique concept. Distributed energy resource management software, or DERMS, platforms are being developed to tackle this challenge in one way or another, with grid giants like Siemens and Toshiba and startups such as Spirae, Enbala, Integral Analytics and Smarter Grid Solutions providing different pieces of the puzzle. Beyond that, there’s work being done by consortia such as Duke Energy’s Coalition of the Willing and Pacific Northwest National Laboratory's transactive energy pilot project to allow lots of distributed energy assets to communicate and act in concert to solve local and system-wide grid challenges. Some of the projects funded by ARPA-E’s NODES program would help support these kinds of ongoing distributed energy resource (DER) integration efforts, while others would go several steps beyond what today’s utility grid control platforms and DERs are built to handle. Here’s a short description of each project and its aims.


News Article | November 2, 2015
Site: phys.org

In what DOE is calling a pioneering regional partnership for grid modernization, Washington will host a three-campus demonstration of transaction-based energy management. This is the first time researchers will test the use of transactive controls at this scale, involving multiple buildings and devices. The transactive concept combines financial signals and dynamic control techniques to shift the timing and quantity of energy usage in devices, buildings and campuses, resulting in greater efficiency and reduced energy costs, while also providing significant flexibility for the power grid. DOE's Pacific Northwest National Laboratory, Washington State University and the University of Washington are teaming on the effort, which builds on their involvement in the recently completed Pacific Northwest Smart Grid Demonstration Project. "Washington is at the cutting edge of clean energy technologies," said Washington Governor Jay Inslee. "We are proud to see PNNL leading DOE's efforts around energy storage and transactive energy control, and our state's two largest research universities working with public and private electric utilities to demonstrate how these technologies will enable grid modernization on a large scale." The state is funding infrastructure for the project, to establish an enduring test bed that will enable other research projects after completion of the first DOE-funded study, which will concentrate on transactive energy management at scale. Once perfected, load flexibility available from smart buildings can help to better integrate non-traditional energy sources like wind and stationary battery power into the grid. "It will involve a lot of communication," said Srinivas Katipamula, PNNL principal investigator and technical leader of the project. "And not just between the partners. It means equipment talking to other equipment, buildings talking to buildings and acting upon signals from the power grid." The idea is that equipment will make decisions and automatically adjust energy loads based on predetermined criteria related to energy prices, essential services, comfort levels, time of day, etc. The project is developing the mechanisms to make that happen with minimal human direction. Devices within buildings, as well as building and campus operations, will be automatically and continually monitored to see how they are using energy. The VOLTTRON platform that enables this coordination was developed by PNNL. It acts much like a cell phone operating system, allowing 'apps' to perform both information sensing and control actions in order to implement agreed-upon actions for each responsive building or energy resource within each campus. With this method, both WSU and UW will manage their onsite power generation. Through the test bed, WSU researchers in the Energy Systems Innovation Center will install photovoltaic modules on the Pullman campus for the first time and integrate them into Pullman's 'Smart City' test bed and WSU's microgrid system. Experiments will be designed to show that campus power generation can power critical city infrastructure in the event of a power outage. WSU will also develop strategies for sharing energy between WSU's smart buildings and the solar modules. The test bed will involve the installation of multiple smart inverters at UW, to control the power production from solar panels, and eventually feed campus-produced photovoltaic power into the power grid.
UW will also add a lithium-ion battery energy storage system to increase the flexibility of power consumption across campus, and use expertise in data analytics to analyze data and develop strategies to make buildings responsive to transactive control signals. The test bed will allow researchers to understand how adjusting loads could help local and regional grids and, one day, benefit the campuses financially. Initially, there will be no literal financial exchanges, and the pricing signals will be historical rather than live data, although researchers hope to get to that point.
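None of the project's control logic is published in this article, but the flavor of a transactive rule is easy to sketch: a device nudges its own setpoint in proportion to how far the current price signal sits from its recent average. The function and parameters below are illustrative assumptions, not the VOLTTRON API.

def responsive_setpoint(base_setpoint, price, avg_price, std_price,
                        comfort_slope=1.0, max_offset=3.0):
    # Raise the cooling setpoint (shed load) when price is above average,
    # lower it when price is below; the offset is capped for comfort.
    z = (price - avg_price) / std_price if std_price else 0.0
    offset = max(-max_offset, min(max_offset, comfort_slope * z))
    return base_setpoint + offset

# At a price one standard deviation above average, a 22 C cooling
# setpoint drifts up by 1 C, trimming the building's demand.
print(responsive_setpoint(22.0, price=60.0, avg_price=50.0, std_price=10.0))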


News Article | December 13, 2016
Site: www.chromatographytechniques.com

Lowly sawdust, the sawmill waste that's sometimes tossed onto home garage floors to soak up oil spilled by amateur mechanics, could receive some new-found respect thanks to science. Researchers at the Department of Energy's Pacific Northwest National Laboratory have chemically modified sawdust to make it exceptionally oil-attracting and buoyant, characteristics that are ideal for cleaning oil spills in the icy, turbulent waters of the Arctic. The nontoxic material absorbs up to five times its weight in oil and stays afloat for at least four months. "Most of today's oil remediation materials are designed for warm water use," said PNNL microbiologist George Bonheyo, who leads the modified sawdust's development from PNNL's Marine Sciences Laboratory. "But as ice retreats in the Arctic Sea, fossil fuel developers are looking north, and we need new oil spill response methods that perform well in extreme conditions," added Bonheyo, who also holds a joint appointment in bioengineering with Washington State University. "The chance of an oil spill in the Arctic is real," said fellow PNNL microbiologist Robert Jeters, who is also part of the project. "We hope materials like our modified sawdust can help if an accident happens." Containing oil spills in cold waters is especially tricky, as bobbing ice chunks push oil below the water's surface, making it difficult to collect. The same goes for rough waters, whose tall, clashing waves disperse oil. The modified sawdust pulls double duty. Beyond absorbing oil, it also enhances another approach to combating oil spills - controlled burns. If changing weather or tides move spilled oil toward a sensitive area fast, oil can be burned before it can cause further harm. Called in-situ burning, the practice can significantly reduce the amount of oil in water and minimize its adverse environmental effects. Bonheyo and his team looked to develop an environmentally friendly and inexpensive material that floats despite rough or freezing waters and can support in-situ burning. Not wanting to create more pollution if emergency responders can't retrieve oil cleanup material, Bonheyo's team considered other natural ingredients like rice hulls and silica. But they ultimately found their winner in a fine dust called wood flour. A woodworking byproduct, wood flour is often used to make wood composites. To make the dust into a thirsty oil mop, researchers chemically attach components of vegetable oil onto the material's surface. These attachments make the modified material oil-grabbing and water-shunning. The final product is a light, fluffy, bleached powder. The team is also trying out adding tiny, oil-eating microbes - fungi and bacteria - to the powder's surface so any left-behind material could naturally break down oil over time. Applying the modified sawdust is simple: sprinkle a thin layer over oil on the water's surface. The material immediately starts soaking up oil, creating a concentrated and solid slick that stays afloat thanks to the material's buoyant nature. The oil-soaked material can either be burned or retrieved. The team is using PNNL's unique Arctic simulation lab in Sequim, Washington, to evaluate the material in icy waters. The facility is a customized shipping container that cools down to as low as 5 degrees Fahrenheit, which prompts researchers to don snowmobile suits and ski masks while they work. Ice slush forms on the surface of water that circulates inside a 290-gallon raceway pond placed inside the bitterly cold lab space.
Oil is spilled on the slushy surface, followed by a sprinkle of modified sawdust. Tests have shown the material's water-repellent nature prevents ice from forming on it, allowing it to soak up oil and remain at the surface. Researchers are also testing how well the material performs in controlled burns. They conducted initial burns this fall at the U.S. Coast Guard and Naval Research Laboratory's Joint Maritime Test Facility near Mobile, Alabama. Burn tests continue today at PNNL's Marine Sciences Laboratory. Early results indicate a small amount of material enables burning of both thin and thick layers of spilled oil. In the coming months, PNNL will further evaluate the modified sawdust. The material will need additional testing and approval by multiple agencies before it can be used at actual oil spills. PNNL is developing the material for the Department of the Interior's Bureau of Safety and Environmental Enforcement. BSEE is the lead federal agency charged with improving safety and ensuring environmental protection related to the offshore energy industry, primarily oil and natural gas on the U.S. Outer Continental Shelf.
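As a back-of-envelope reading of the reported capacity (up to five times its weight in oil), the sketch below sizes the sorbent mass needed for a given spill volume; the crude-oil density and safety factor are assumptions, not figures from PNNL.

CAPACITY = 5.0      # kg of oil absorbed per kg of sorbent (reported upper bound)
OIL_DENSITY = 0.87  # kg/L, typical light crude (assumption)

def sorbent_needed_kg(spill_liters, safety_factor=1.5):
    # Mass of modified sawdust to treat a spill, with margin for
    # weathering, slush, and uneven application.
    return safety_factor * spill_liters * OIL_DENSITY / CAPACITY

# Example: a 1,000 L spill calls for roughly 260 kg of sorbent.
print(f"{sorbent_needed_kg(1000):.0f} kg")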


News Article | November 11, 2015
Site: www.rdmag.com

In the quest for renewable fuels, scientists are taking lessons from a humble bacterium that fills our oceans and covers moist surfaces the world over. While the organism captures light to make food in a process called photosynthesis, scientists have found that it simultaneously uses the energy from that captured light to produce hydrogen. While the nuances of how microbes draw upon sunlight, water, and elements like carbon and nitrogen to survive may seem detached and remote from modern life, such knowledge is central to our ability to meet the energy needs of our planet's growing population. "The ultimate goal here is to take energy from the sun, and water, and produce useable energy," said microbiologist Alex Beliaev, one of two scientists at the Department of Energy's Pacific Northwest National Laboratory who led the research. "The more we know about the pathways involved in this process, the more likely we will be able to find a facile and economic way to produce renewable energy. The organisms that produce clean energy naturally provide a blueprint of sorts for how we might do this." The latest finding, published in Scientific Reports, concerns a cyanobacterium known as Cyanothece 51142, a type of bacteria also called blue-green algae that produces hydrogen — a resource that is one focus of the worldwide push toward renewable energy. PNNL scientists found that the organism taps into an unexpected source of energy to create hydrogen. Researchers have known that 51142 makes hydrogen by drawing upon sugars that it has stored during growth. In this study, PNNL researchers found that the organism also draws on a second source of energy, using sunlight and water directly to make hydrogen.
Cyanobacteria: Central to life and energy production
Organisms like cyanobacteria made life on the planet possible by producing the oxygen for our atmosphere 2.3 billion years ago. They also convert the abundant nitrogen in our atmosphere to a form that is essential for all plant life on the planet. "If we want to understand life on Earth, and how to improve it, this is a great place to start," said first author Hans Bernstein, a Linus Pauling distinguished postdoctoral fellow at PNNL. Bernstein and Beliaev are co-authors on the paper. Many of these organisms are equipped with an enzyme called nitrogenase to convert inert atmospheric nitrogen to more usable forms for plants and other organisms. For a long time, scientists have known that nitrogenase produces small quantities of molecular hydrogen as a byproduct. When nitrogen is not available, the organism produces hydrogen. It's this attribute of the enzyme that scientists like Bernstein and Beliaev focus on. The team set up Cyanothece 51142 in a bioreactor, limited the supply of nitrogen, and kept the lights on 24 hours a day for several weeks. The team used an array of high-tech equipment to yield sophisticated minute-by-minute profiles of the organism as it converted light energy to hydrogen. Scientists conducted many of their analyses using capabilities at EMSL, the Environmental Molecular Sciences Laboratory, a DOE user facility at PNNL, to "interrogate" the genes and proteins of the organism as they changed while the reactions occurred. In scientific parlance, the team conducted a "multi-omics experiment," studying the genomics, transcriptomics and proteomics of the organism's activity, as well as its reaction kinetics.
The scientists scrutinized 5,303 genes and 1,360 proteins at eight separate times over the course of 48 hours as the bacteria, with limited nitrogen supply, switched on the activity of the nitrogenase protein. Scientists found that in addition to drawing upon its previously stored energy, the organism captures light and uses that energy to split water to create hydrogen in real time. As one component of the organism is creating energy by collecting light energy, another part is using that energy simultaneously to create hydrogen.
Robust hydrogen production
Scientists know that the organism is a robust producer of hydrogen, creating the resource at a rate higher than other known natural systems. "This organism can make lots of hydrogen, very fast; it's a viable catalyst for hydrogen production," said Bernstein. "The enzyme that makes the hydrogen needs a huge amount of energy. The real question is, what funds the energy budget for this important enzyme and then, how can we design and control it to create renewable fuels and to advance biotechnology?" In a paper published in 2012 in mBio, Beliaev and colleagues raised questions about how the microbe drew upon the energy required to produce hydrogen. In the new paper, the molecular signals the team studied show that photosynthesis and the hydrogen production by nitrogenase happen hand in hand in a coordinated manner. The team includes 11 researchers from PNNL. Beliaev began the project seven years ago as part of hydrogen production research related to biofuels, and Bernstein picked it up when he joined PNNL two years ago. "Our primary goal is to understand the fundamental processes that occur in nature, so that we can learn to design and control complex biological systems to sustain healthy people on a healthy planet," added Bernstein. The work was funded by the Department of Energy Office of Science (Biological and Environmental Research) and by PNNL's Laboratory Directed Research and Development Program, which funds the Linus Pauling Distinguished Postdoctoral Fellowship Program.


News Article | December 15, 2015
Site: www.rdmag.com

Some of the natural gas harvested by hydraulic fracturing operations may be of biological origin--made by microorganisms inadvertently injected into shale by oil and gas companies during the hydraulic fracturing process, a new study has found. The study suggests that microorganisms including bacteria and archaea might one day be used to enhance methane production--perhaps by sustaining the energy a site can produce after fracturing ends. The discovery is a result of the first detailed genomic analysis of bacteria and archaea living in deep fractured shales, and was made possible through a collaboration among universities and industry. The project is also yielding new techniques for tracing the movement of bacteria and methane within wells. Researchers described the project's early results on Monday, Dec. 14, at the American Geophysical Union meeting in San Francisco. "A lot is happening underground during the hydraulic fracturing process that we're just beginning to learn about," said principal investigator Paula Mouser, assistant professor of civil, environmental and geodetic engineering at The Ohio State University. "The interactions of microorganisms and chemicals introduced into the wells create a fascinating new ecosystem. Some of what we learn could make the wells more productive." Oil and gas companies inject fluid--mostly water drawn from surface reservoirs--underground to break up shale and release the oil and gas--mostly methane--that is trapped inside. Though they've long known about the microbes living inside fracturing wells--and even inject biocides to keep them from clogging the equipment--nobody has known for sure where the bacteria came from until now. "Our results indicate that most of the organisms are coming from the input fluid," said Kelly Wrighton, assistant professor of microbiology and biophysics at Ohio State. "So this means that we're creating a whole new ecosystem a mile below the surface. Not only are we fracturing the rock, we're giving these organisms a new place to live and food to eat. And in fact, the biocides that we add to inhibit their growth may actually be fueling the production of methane." That is, the biocides kill some types of bacteria, thus enabling other bacteria and archaea to prosper--species that somehow find a way to survive in water that is typically four times saltier than the ocean, and under pressures that are typically hundreds of times higher than on the surface of the earth. Deprived of light for photosynthesis, these hardy microorganisms adapt in part by eating chemicals found in the fracturing fluid and producing methane. Next, the researchers want to pinpoint exactly how the bacteria enter the fracturing fluid. It's likely that they normally live in the surface water that makes up the bulk of the fluid. But there's at least one other possibility, Wrighton explained. Oil and gas companies start the fracturing process by putting fresh water into giant blenders, where chemicals are added. The blenders are routinely swapped between sites, and sometimes companies re-use some of the well's production fluid. So it's possible that the bacteria live inside the equipment and propagate from well to well. In the next phase of the study, the team will sample site equipment to find out. The clues emerged when the researchers began using genomic tools to construct a kind of metabolic blueprint for life living inside wells, Wrighton explained. "We look at the fluid that comes out of the well," she said. 
"We take all the genes and enzymes in that fluid and create a picture of what the whole microbial community is doing. We can see whether they survive, what they eat and how they interact with each other." The Ohio State researchers are working with partners at West Virginia University to test the fluids taken from a well operated by Northeast Natural Energy in West Virginia. For more than a year, they've regularly measured the genes, enzymes and chemical isotopes in used fracturing fluid drawn from the well. Within around 80 days after injection, the researchers found, the organisms inside the well settle into a kind of food chain that Wrighton described this way: Some bacteria eat the fracturing fluid and produce new chemicals, which other bacteria eat. Those bacteria then produce other chemicals, and so on. The last metabolic step ends with certain species of archaea producing methane. Tests also showed that initially small bacterial populations sometimes bloom into prominence underground. In one case, a particular species that made up only 4 percent of the microbial life going into the well emerged in the used fracturing fluid at levels of 60 percent. "In terms of the resilience of life, it's new insight for me into the capabilities of microorganisms." The researchers are working to describe the nature of pathways along which fluids migrate in shale, develop tracers to track fluid migration and biological processes, and identify habitable zones where life might thrive in the deep, hot terrestrial subsurface. For example, Michael Wilkins, assistant professor of earth sciences and microbiology at Ohio State, leads a part of the project that grows bacteria under high pressure and high temperature conditions. "Our aim is to understand how the microorganisms operate under such conditions, given that it's likely they've been injected from surface sources, and are accustomed to living at much lower temperatures and normal atmospheric pressure. We're also hoping to see how geochemical signatures of microbial activity, such as methane isotopes, change in these environments," Wilkins said. Other aspects of the project involve studying how liquid, gas and rock interact underground. In Ohio State's Subsurface Materials Characterization and Analysis Laboratory, Director David Cole models the geochemical reactions taking place inside shale wells. The professor of earth sciences and Ohio Research Scholar is uncovering reaction rates for the migration of chemicals inside shale. Using tools such as advanced electron microscopy, micro-X-ray computed tomography and neutron scattering, Cole's group studies the pores that form inside shale. The pores range in size from the diameter of a human hair to many times smaller, and early results suggest that connections between these pores may enable microorganisms to access food and room to grow. Yet another part of the project involves developing new ways to track the methane produced by the bacteria, as well as the methane released from shale fracturing. Thomas Darrah, assistant professor of earth sciences, is developing computer models that trace the pathways fluids follow within the shale and within fracturing equipment. Though oil and gas companies may not be able to take full advantage of this newly discovered methane source for some time, Wrighton pointed out that there are already examples of bio-assisted methane production in industry, particularly in coal bed methane operations. "Hydraulic fracturing is a young industry," she said. 
"It may take decades, but it's possible that biogenesis will play a role in its future. Other researchers on the project hail from Pacific Northwest National Laboratory and the University of Maine.


News Article | December 8, 2015
Site: news.yahoo.com

Alan Brown, writer and editor for The Kavli Foundation, edited this roundtable for Live Science's Expert Voices: Op-Ed & Insights. Microbes make life on Earth possible, yet we know so little about them. Now, a team of scientists aims to change that through an ambitious effort — with researchers from 50 institutions — called the Unified Microbiome Initiative. Their goal is to develop next-generation technologies to unlock the secrets of microbiomes, complex ecosystems of microorganisms — from bacteria and fungi to algae and viruses — that inhabit nearly every square inch of the planet and have densely colonized our bodies. Doing so, the scientists argue, could improve human health and the environment. Harnessing microbiomes could cure disease, reduce resistance to antibiotics, rejuvenate depleted farmland, moderate fertilizer and pesticide use, and convert sunlight into useful chemicals. But to achieve that, scientists will need a new generation of research tools to take them beyond just cataloging the members of these microbial communities, which may contain tens or even hundreds of thousands of individual species. Researchers need instruments to further study microbial genomes and the chemical signals microorganisms use to communicate, as well as new computer science tools to analyze the data these techniques produce. On October 27, The Kavli Foundation spoke with three of the scientists who authored the Unified Microbiome Initiative proposal, which appeared the next day in the journal Science. Rob Knight is the founder of the American Gut Project, an open-access project to survey the digestive system's microbiome and its effect on human health and development. He holds appointments at the University of California, San Diego, School of Medicine and Department of Computer Science and Engineering, where he develops bioinformatics systems to classify and interpret large sets of biological data. Janet Jansson is chief scientist of biology in the Earth and Biological Sciences Directorate at Pacific Northwest National Laboratory (PNNL) and sector lead for PNNL research in the U.S. Department of Energy (DOE) Biological Systems Science Division. She coordinates two of PNNL's biology programs, including the Microbiomes in Transition (MinT) initiative to study how climate and environmental changes impact natural and human microbiomes and the DOE Foundational Scientific Focus Area. Jeff Miller is director of the California NanoSystems Institute, a multidisciplinary research organization, and the corresponding author of the consortium's Science paper. Based at University of California, Los Angeles, Miller is the chair in NanoSystems Sciences and is a professor of Microbiology, Immunology & Molecular Genetics. The following is an edited transcript of their roundtable discussion. The participants have been provided the opportunity to amend or edit their remarks. The Kavli Foundation: Let's start with the obvious question: Why is there so much interest in the microbiome right now? Janet Jansson: We live in a microbial world. In fact, we are more microbial than human. We have about 10 times more microbial cells in and on our bodies than we have human cells, and those microbes encode about 100 times more genetic information than our human DNA. Microbes are also everywhere in the environment, where they carry out such important processes as cycling carbon and other nutrients, promoting plant growth, and preventing disease. Jeff Miller: Microbiomes also have an enormous impact on the environment.
Janet's work on permafrost, the Arctic's permanently frozen subsurface soil, shows that. As the climate warms, the metabolism of microbes in the permafrost will speed up. One of the big questions is whether they will begin converting vast amounts of carbon in the permafrost into carbon dioxide, methane and other greenhouse gases. At a time when we are talking about Middle Eastern cities becoming too hot to inhabit by the end of the century, understanding how those microbiomes influence climate is important. Also, as Janet noted, we have 100 to 150 times more microbial genes than human genes in our bodies. Changing our own genome is a daunting prospect. But we can change our diet to alter our microbiome. Rob Knight: That is true. For many aspects of who we are, microbial genes may be even more important than our human genes. For example, we can tell if you're lean or obese with 90 percent accuracy based on your microbial genes, but with only about 58 percent accuracy based on your human genes. So the three pounds of microbes you have inside your gut may be more important for some of your traits than every gene in your genome. Moreover, we're born with our human genes, but our microbes continue to change over the course of our life. If we're able to take control of these changes, whether within our bodies or over our entire planet, we could have a huge impact on many of the problems facing us as individuals and a society. J.M.: Some of today's health mysteries might have a link to the microbiome. Why has asthma increased so dramatically over the last 50 years? Why is obesity such a problem? What about metabolic syndrome, type 2 diabetes, inflammatory bowel disease, autism and other conditions? There are so many unknowns that are likely to have a tie to the microbiome and its interaction with the environment. TKF: Microbiomes are clearly important, yet we weren't talking about them 10 years ago. What's changed and why is this the right time for the Unified Microbiome Initiative? J.J.: I was trained as a soil microbial ecologist, and we never used to call these soil communities a "microbiome." But we do now. It's a term coined by clinical microbiologists, and it originated with the advent of "high-throughput" genome sequencing technology. This is something Rob can discuss in detail. R.K.: Right. DNA sequencing has got a million times cheaper — literally — over the last 15 years. High-speed automated equipment can speed read a genome for less than $1,000. This has really catalyzed our ability to discover patterns in microbial communities. Yet we are far less able to understand how those microbes function — what they provide or add to their community. What we need next is a game-changing technological advance that increases our ability to read out microbial functions at different scales. Those might range from the inside of one cell up to the size of our entire planet, using satellites and other remote sensing technologies, for example. We want to catalyze the next series of tools to fully realize the potential of the microbiome for healthcare, agriculture and environmental applications. We are calling for a unified initiative to bring together different fields of research, government agencies, private enterprise and private foundations to make that possible. TKF: Let's delve a little deeper into the impact of advances in genome sequencing. Has it changed the way we think about microbial communities?
J.J.: In the past, we did not fully understand the complexity and richness of microbiomes, and we were limited because we could not grow the majority of bacteria in a lab, and so they were hard to study. Now, because of the advances in sequencing, we can classify the composition of these communities based on sequence information. This has led to the discovery of hundreds of new bacterial phyla, large groups of related life forms, many times more phyla than all the phyla of multicellular animals in the world. That gives us a window, for the first time, into who is there. But as Rob was saying, in most cases we don't know what they're doing. That's what the next stage of technology would do, let us tackle their functions. J.M.: Knowing who's there is really complicated, because microbiomes differ from person to person and even for a given person, depending on time, environment, life events and other factors. Understanding what constitutes a normal human microbiome is enormously complex, especially since communities may have similar properties but different compositions. All this raises the question, "What is a healthy microbiome?" R.K.: There is no one healthy microbiome, but, rather, there are many different healthy microbiomes. The problem is figuring out how to get a handle on all that diversity. We can collect lots of samples and quantify the differences in one person's microbiome over time, between different people, and between people with different ethnic backgrounds, environmental exposures, and medical conditions. We're moving rapidly towards understanding which changes in the microbiome really matter, especially for health, and which changes are more or less random variations. With so much data, we need machine learning and other high-end statistical techniques to try to make sense of the vast flood of data that we're getting from DNA sequencing and from other techniques, such as mass spectrometry, which measures proteins and chemicals. TKF: As our understanding increases, are researchers rethinking how we might harness the potential of microbiomes? J.J.: Yes. For example, we hope to take advantage of each person's unique microbiome to produce more personalized medicine. We want to understand how the way your microbiome metabolizes medications differs from your neighbor's microbiome. For example, one person's microbiome might have an adverse reaction to a specific drug, while another's does not. J.M.: Actually, digoxin is a perfect example of what Janet is talking about. It is a heart drug that can be metabolized and destroyed by certain microbes that live in some human gastrointestinal microbiomes but not others. Also, over the past two or three years, we have seen the first medical intervention for a serious disease that is based on crude, though extremely effective, microbiome engineering: fecal transplant therapy for colitis, an inflammation of the large intestine caused by the bacterium Clostridium difficile, which is normally excluded by our gut microbiomes. This is how it works: We excrete part of our microbiome with our feces. So a fecal sample is taken from someone with a "healthy" gastrointestinal microbiome, processed, and infused into someone who lacks a protective microbiota in their gut and has C. difficile disease. The treatment is between 85 and 95 percent effective for recurrent disease, compared with 20 to 30 percent for the very best antibiotics that we have. 
This is actually the first proof of principle that we can manipulate microbiomes in a very deliberate way to treat a serious human disease. TKF: The Unified Microbiome Initiative calls for bold research to develop transformative tools. Instead of talking only with microbiome experts, you put that agenda together with physicists, engineers, chemists and computer scientists. What did they contribute? J.J.: What's important here, at least to me, is that a community composed of many different disciplines realizes the importance of the microbiome and is calling for us to do something on a grand scale. For example, I've been advocating for improved mass spectrometry to get higher throughput measurements of proteins and metabolites, the molecules microbes use to interact with their environment. We also need better databases, so we can understand how those molecules function in a spatial context. And we need improved imaging technologies. I need all of these things to study soil microbiomes, which I usually refer to as the worst-case scenario. It's one of the most diverse microbial environments. The cells live in dense communities and aggregate around soil particles and pores. We can tell what kind of microorganisms are there by sequencing their genes, but we lose all that spatial information about where they live in the soil matrix. It is a really difficult habitat to study, but an extremely interesting and important one. R.K.: Physicists bring quantitative techniques they have perfected for understanding dynamical systems. Engineers want to use that knowledge to control and manipulate the microbiome to achieve particular results. And, as Janet noted, they are the ones who will develop new technologies to read out the microbiome better, faster, cheaper, more precisely and on different scales. J.M.: Exactly. And while, as Rob mentioned, the quantitative sciences are extremely important, we are also going to need people to commercialize these discoveries, as well as ethicists and legal experts. J.M.: Whenever we manipulate something in an animal or a human being, we need to consider the ethical issues. But the idea of potentially engineering Earth's microbial ecosystems raises very legitimate questions. The prospect for doing harm is there. With something so complex and so dynamic, we need to ensure we understand it well enough to justify that manipulation. It is an exciting prospect, and also a somewhat daunting one. R.K.: There are also intellectual property considerations. For example, if we isolate a microbe from your body, do you own it? Does it matter if it is unique to you, or if millions of other people share that same strain? Similarly, do you own the microbes in your home, in the soil of your garden, and on your plants? If researchers begin to extract commercial value from the microbiome, we need to pay much more attention to those issues. J.J.: Then there is an issue of personal microbiome integrity. Our microbiomes are like fingerprints, and some researchers are studying them for forensic applications. Will this have the potential to infringe on our own personal identity, and how do we protect our identities if it does? That's an issue to consider. TKF: To what extent are our microbiomes part of our identities? R.K.: That's a really fascinating question. For example, many people attribute obesity to lack of willpower or some other intrinsic feature of the person. But what if it's primarily based on your microbes rather than on your ability to resist that extra slice of chocolate cake? 
There is also new evidence that the microbiome might determine whether you are depressed or happy, or have certain forms of mental illness, or even whether you prefer one food to another. Where is the boundary between what's an intrinsic attribute of "you" versus what's an attribute you "have" based on your microbes? Philosophers and ethicists are going to have a lot to discuss, and valuable contributions to make. J.M.: That's why we have to be really careful about manipulating our microbiomes, so we don't create pathological situations. TKF: Let's switch gears for a moment. Rob, what has surprised you most over the past decade of microbiome research? R.K.: Remember, 10 years ago, microbes hadn't been linked to any of the things we now know they're involved in, such as obesity, allergies, depression and brain development. While the links between the microbiome and metabolism have certainly been very surprising, what surprised me the most has been the links between the microbiome and behavior. This was not even on the radar 10 years ago. TKF: Could you give us an example? R.K.: Yes. Paul Patterson, Sarkis Mazmanian and Elaine Hsiao of Caltech injected pregnant female mice with RNA to simulate a viral attack, and their pups were born with behaviors characteristic of autism in humans, such as cognitive and communication deficits and compulsive behaviors. Treating the pups with microbes isolated from the human gut cured many of those symptoms; introducing a chemical isolated from the mother mouse's microbiome made the symptoms reappear. My research group is working with researchers at the University of Colorado to test the ability of microbes to inoculate mice against social stress. While the links between the microbiome and human behavior are much less clear, the fact that we can find these links in mice establishes that there is a plausible biological mechanism. It certainly motivates the human research. TKF: Jeff, you study microbial evolution and diseases. Will Rob's research help move your work forward? J.M.: I'm kind of the outsider here, since I study the molecular mechanisms by which bacteria cause infection. Yet I'm interested in how the microbiome changes how resident and incoming disease-causing organisms behave. I'm also interested in some of the technologies that could arise from the Unified Microbiome Initiative. Precision antibiotics are one example. One of the problems with drug resistance is that we use broad-spectrum antibiotics that harm beneficial microbes as they kill disease-causing pathogens. Any microbes that survive pass on their antibiotic resistance. Now, the Unified Microbiome Initiative Consortium is interested in therapeutics that will specifically target one and only one species or strain, so researchers can run experiments to see how our complex microbiota functions without them. But we could use those same reagents to treat infectious diseases, perhaps preventing some of the consequences of broad-spectrum antibiotic use. TKF: You see a link, Jeff. But so far, we've only talked about the gut. What about you, Janet? Is this work going on in the human gut relevant to your studies of microbiomes in permafrost and on beaches after oil spills? J.J.: Compared to what we have learned about human microbiomes over the past decade, we are further behind in understanding complex environmental microbiomes. Those answers are important because we don't understand how our climate will change when those permafrost microbes begin to warm up.
We need to know if that microbiome is going to pump greenhouse gases into the atmosphere or store them in the soil. But going back to what Jeff was talking about, once we understand these environmental processes, we would want to design microbial communities that could fill an environmental function. I see that as a future goal, but we first need to understand how those interactions work in nature. We don't know that yet. TKF: For your research, what types of tools are your priorities? And when you say 'omics tools, do you mean something more than just genomics? J.J.: So specifically, I mean high-throughput proteomics and metabolomics, tools that measure the proteins and the small molecules produced by cells and used for their communication. Also, I need better databases and algorithms to store and interpret the data this equipment produces. They're parallel concerns, and they are both huge bottlenecks right now. J.M.: I'm a molecular biologist, and I like to study molecular mechanisms. I've been waiting for tools that not only characterize the organisms in microbiomes, but also run controlled tests to see how they behave when we change only one variable at a time. We need a way to visualize dynamic communities living in their normal habitat, with their complexity preserved and with minimal perturbation. We also need to observe them over a time scale that lets us see who is there and how they interact with one another and with their environment. Technologies that can precisely delete or add organisms to a microbiome, or change their genes without having to cultivate them, would be enormously valuable. Developing those precision tools appeals to me from the perspective of pure science, and I believe they will eventually enable us to manipulate microbiomes to achieve beneficial outcomes. R.K.: I agree with Janet: we need better algorithms to interpret the data. We can already survey the genomes of organisms in a microbiome to see who is there. You could imagine improving those algorithms to capture more spatial data over time, so we understand which microbes are influencing the behavior of others, and what this looks like in a living environment. J.J.: My team is actually working with Rob, and we have different kinds of data sets. When you're dealing with millions of genes and thousands of proteins and hundreds of thousands of metabolites, it's challenging to integrate all that data in ways that provide a picture of what's really happening in the microbiome. TKF: So you are interested in tracking chemical communications? J.J.: I've mentioned tracking metabolites and proteins, but our goal is to understand how microbes occupy different metabolic niches and then communicate with other microbes to get their needs met. When I first heard Jeff talk about his research, I started thinking about some of the networks and keystone species that we see. I had an "aha" moment, and realized we could use some of Jeff's tools to knock out different nodes in these networks in order to test some of our hypotheses. I wouldn't have thought about it if I hadn't met Jeff. TKF: So just working on this proposal with Jeff and other researchers has changed how you might do research? J.J.: Absolutely. I mean, I've been feeling like a kid in a candy store. It's been fantastic. J.M.: I think that's a trend in science in general. As we break out of our silos, we realize that there's so much more to be gained by interacting with colleagues in areas that we might not have interfaced with before.
TKF: The Unified Microbiome Initiative proposes an ambitious tool development agenda for the next 10 years. Where do you think it will lead? R.K.: I think we'll have much better ways to diagnose disease and perhaps new therapeutics for the large number of microbiome-related diseases. I believe we're going to develop very general technologies that impact a broad range of different microbial processes and interactions. I think we'll make substantial progress towards harnessing microbes to improve industrial processes in the energy sector and to remediate depleted farmland. J.J.: If we're looking out 10 years, I would like to work on developing better data on vulnerable microbial ecosystems. I want to know how they react when we reach a tipping point, such as a permafrost thaw or rising sea levels, so we can predict the impacts of climate change. I'm also interested in designer diets. This is a personal interest. Our whole family got our microbiomes sequenced. We got a family discount, and it only cost something like $49.99 per person. So when we got our microbiomes back, we noticed that all of us fell into the normal range, except one of my daughters. She has a lot of Firmicutes bacteria, which make it harder for her to maintain her weight. While she looks great, she has to think about it more than the rest of us. On the other hand, if she ever has a problem, she can always say, "It's not me, mom, it's my Firmicutes." J.M.: Isn't the cure to that to eat complex sugars? J.J.: Right, but her microbes don't want to eat them. They are sending signals to her brain that they don't want to eat that. They want bread and butter. This is a practical application of how we should think about modifying our microbiomes, and I think that designer diets to achieve different types of results could be possible within a 10-year horizon. TKF: What about the next 10 years for you, Jeff? J.M.: Within five years, I think it's reasonable to expect to have precision antimicrobials for bacteria that cause tooth decay and periodontal disease. We can also start to get a handle on how to prevent infectious diseases of immunosuppressed patients in hospitals. For people getting an organ or bone marrow transplant, for example, we suppress their immune systems and put them on antibiotics. Some studies show that if we look at the microbiome of their stool, using the same $49.99 technique Janet used to sequence her family's microbiomes, we can get predictive, actionable information on the bacteria that are likely to cause serious bloodstream infections before those infections occur. If we can combine that with precision antimicrobials, we might be able to deal with the threat without disrupting their beneficial microbiota. Agriculture's another area we haven't talked about yet, but microbiomes have a major influence on plant yield, water usage, carbon availability and sequestration. We would like to use less fertilizer and fewer pesticides, and to grow crops in regions impacted by climate change. It's hard to say whether these are five, 10 or 15 years off, but they seem like tractable problems.


News Article | February 28, 2017
Site: globenewswire.com

TORONTO, Feb. 28, 2017 (GLOBE NEWSWIRE) -- Golden Share (TSX-V:GSH) is pleased to announce the successful trial production of the Licensed Vanadium Electrolyte (VE), which was developed by Pacific Northwest National Laboratory ("PNNL") of the United States Department of Energy. This VE product has advantages over previous generations, including a wider temperature-operating range and higher energy density. (Please refer to the Press Release dated October 18, 2016 for details.) The trial production of this VE product represents a milestone in the development of Golden Share's strategic partnership with Northwest Mining & Exploration Group Co., Ltd. for Nonferrous Metals ("NWME"). NWME has owned and operated the largest primary vanadium mine in China since initial production in 2011. (Please refer to the Press Release dated May 24, 2016 for details.) "While announcing that the Licensed Vanadium Electrolyte is ready for commercial applications, we are excited for this recent development, which is a major step closer to Golden Share's goal to become a preferred supplier of vanadium electrolyte. We are very proud of the scientists, technicians and all associates at NWME and appreciate their hard and efficient work," stated Nick Zeng, President and CEO. "We would like to work with all Vanadium Redox Flow Battery (VRFB) manufacturers to make the VRFB the preferred solution for utility-scale energy storage." A reliable, cost-effective, long-duration energy storage solution is poised to play a pivotal role in the future development of the renewable energy industry. Golden Share believes the VRFB is among the best energy storage technologies available on the market. Golden Share will now proceed to select VRFB manufacturers as future partners to test the Licensed VE as part of pilot projects potentially developed by Golden Share. Interdisciplinary teams at Pacific Northwest National Laboratory (PNNL) address many of America's most pressing issues in energy, the environment and national security through advances in basic and applied science. PNNL employs 4,400 staff, has an annual budget of nearly $1 billion, and has been managed for the U.S. Department of Energy by Ohio-based Battelle since the laboratory's inception in 1965. Northwest Mining & Exploration Group Co., Ltd. for Nonferrous Metals (NWME) is a large Chinese state-owned enterprise. NWME has invested in more than 20 mines involving gold, silver, copper, lead, zinc, vanadium and molybdenum through exploration and development in China and overseas. The Qianjiaping Vanadium Mine (Qianjiaping) of NWME is located in Shaanxi Province, China. Qianjiaping, put into production in 2011, is an environmentally friendly vanadium mine with complete production, management and safety systems. Golden Share Mining Corporation is a Canadian junior mining company focusing on exploration in Ontario, a politically stable jurisdiction with a history of rich mineral endowment. Neither TSX Venture Exchange nor its Regulation Services Provider (as that term is defined in the policies of the TSX Venture Exchange) accepts responsibility for the adequacy or accuracy of this release.


News Article | August 22, 2016
Site: cen.acs.org

Sometimes chemists set themselves up for a surprise. Following sets of experiments in which something doesn't happen and doesn't seem likely to happen, they soon believe it never will. Until it does. Chemists have traditionally thought of cyclopentadienyl ligands as being "innocent," which means they offer electronic support to a metal catalyst but generally don't do anything chemically. Two research groups were independently studying reactions involving Cp*Rh(bipyridine), often used in hydrogenation reactions and in hydrogen-forming reactions, when they found that the expected metal hydride intermediate was followed by formation of an unexpected intermediate in which the hydrogen had migrated to one of the carbon atoms in the Cp* ring. "These two reports showing that the seemingly innocent Cp* ligand can reversibly form a C–H bond by proton transfer from rhodium hydride are remarkable," comments chemistry professor David Milstein of the Weizmann Institute of Science, who was not involved in the research. "Considering the ubiquity of cyclopentadienyl metal complexes in homogeneous catalysis, this pathway should be seriously considered in the design and understanding of reactions in which proton/hydride transfer may be involved." Alexander J. M. Miller of the University of North Carolina, Chapel Hill, who led one of the teams, says chemists had previously worked out mechanisms involving hydride intermediates that made sense and thought the story ended there. But they did not exercise due diligence and poke around enough to see that a protonated Cp* intermediate, denoted Cp*H, could be involved as well. "What's more surprising," Miller points out, "the Cp*H complex is not a dead end. This diene complex is still an active catalyst." Miller's group came across the Cp*H intermediate while investigating hydride transfer reactions with the cellular enzyme cofactor nicotinamide adenine dinucleotide (NAD+) to form the reduced product NADH (Chem. Commun. 2016, DOI: 10.1039/c6cc00575f). Meanwhile, a team led by Harry B. Gray and Jay R. Winkler at Caltech and James D. Blakemore at the University of Kansas discovered the Cp*H intermediate while investigating the coupling of protons to form H2 when treating Cp*Rh(bipyridine) with acid (Proc. Natl. Acad. Sci. USA 2016, DOI: 10.1073/pnas.1606018113). "These discoveries illustrate the versatility of mechanisms by which protons and hydrides can be delivered to and from metals," comments Morris Bullock, director of the Center for Molecular Electrocatalysis at Pacific Northwest National Laboratory. "While these examples are for rhodium, the prevalence of cyclopentadienyl ligands in organometallic catalysts raises the possibility that similar reactivity could be widespread and involve other metals, and may be intentionally exploited in the design of new catalysts."
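The reversible ring protonation both groups observed can be summarized as an equilibrium. This is a sketch: bpy stands for 2,2'-bipyridine, and the eta-4 hapticity shown for the protonated ring is inferred from the article's description of a "diene complex," not stated explicitly.

\[
[\mathrm{Cp^{*}Rh(bpy)H}]^{+} \;\rightleftharpoons\; [(\eta^{4}\text{-}\mathrm{Cp^{*}H})\mathrm{Rh(bpy)}]^{+}
\]

Read left to right, the hydride on rhodium transfers to a ring carbon of Cp*; read right to left, the ring C–H bond re-forms the metal hydride, which is consistent with the observation that the diene complex remains catalytically active.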


News Article | January 6, 2016
Site: www.theenergycollective.com

Over the past year, the Department of Energy has been putting an increased focus on technology to integrate renewable energy into everyday grid operations. This work has included opening loan guarantees to distributed renewables, grants to support field tests of distributed energy integration, and future funding plans to support the development of grid modernization technologies. Now the DOE's blue-sky research agency, ARPA-E, is getting into the action — or, as befits its mission, ahead of the action. Last month, ARPA-E announced $33 million in grants for its Network Optimized Distributed Energy Systems (NODES) program, meant to help 12 university, corporate and DOE laboratory projects that are trying to turn grid-edge assets into networked "virtual storage" systems. These projects are meant to "enable real-time coordination between distributed generation, such as rooftop and community solar assets, and bulk power generation, while proactively shaping electric load." That could allow utilities to manage greater than 50 percent renewable penetration on the grid, "by developing transformational grid control methods that optimize use of flexible load and distributed energy resources." This is not a unique concept. Distributed energy resource management system (DERMS) platforms are being developed to tackle this challenge in one way or another, with grid giants like Siemens and Toshiba and startups such as Spirae, Enbala, Integral Analytics and Smarter Grid Solutions providing different pieces of the puzzle. Beyond that, there's work being done by consortia such as Duke Energy's Coalition of the Willing and Pacific Northwest National Laboratory's transactive energy pilot project to allow lots of distributed energy assets to communicate and act in concert to solve local and system-wide grid challenges. Some of the projects funded by ARPA-E's NODES program would help support these kinds of ongoing distributed energy resource (DER) integration efforts, while others would go several steps beyond what today's utility grid control platforms and DERs are built to handle.
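To make "virtual storage" concrete, here is a toy sketch of the underlying idea: scheduling deferrable load into hours of surplus renewable generation so that flexible demand behaves like a battery. Every number and name below is invented for illustration; actual NODES projects and commercial DERMS platforms solve far richer, real-time optimization problems.

# Toy model: flexible load as "virtual storage."
# Deferrable demand is scheduled into the hours with the largest
# renewable surplus, soaking up energy that might otherwise be curtailed.
hours = range(24)
solar = [0, 0, 0, 0, 0, 1, 3, 5, 7, 8, 9, 9, 9, 8, 7, 5, 3, 1, 0, 0, 0, 0, 0, 0]  # MW
base_load = [4.0] * 24          # MW of inflexible demand
flexible_energy = 20.0          # MWh of deferrable demand (e.g., EV charging)

shifted = {h: 0.0 for h in hours}
remaining = flexible_energy
# Greedy allocation: fill the largest-surplus hours first.
for h in sorted(hours, key=lambda t: solar[t] - base_load[t], reverse=True):
    headroom = max(solar[h] - base_load[h], 0.0)
    take = min(headroom, remaining)
    shifted[h] = take
    remaining -= take
    if remaining == 0:
        break

absorbed = flexible_energy - remaining
print(f"Flexible demand absorbed by solar surplus: {absorbed:.1f} MWh")

A real coordination scheme would also respect network constraints, forecast uncertainty and device-level limits, which is precisely the gap the NODES grants aim to close.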


Alan Brown is a writer and blogger for the Kavli Foundation. Read more perspective pieces on the Kavli Expert Voices landing page. Brown contributed this article to Live Science's Expert Voices: Op-Ed & Insights. Microbes could soon be at the top of the world's big-science list. Late last year, a consortium of scientists from 50 U.S. institutions proposed the "Unified Microbiome Initiative," a national effort to advance our understanding of microbiomes, communities of microorganisms such as bacteria, viruses and fungi. With a unified focus, researchers hope to learn how microbiomes could not only cure infectious diseases and reduce antibiotic drug resistance, but also reclaim exhausted farmland, cut fertilizer and pesticide use, and produce new fuels and carbon-based chemicals. Reaching those ambitious goals will require an equally ambitious effort to develop new tools and collaborations, building on breakthroughs in the analysis of microbial DNA, proteins and metabolites. Such analyses show that microbial communities can be incredibly diverse, including hundreds of thousands of different microbial species, all interacting with one another. In the human gut, those microbes aid digestion, but they may also impact obesity, allergies and even brain development. Beyond our bodies, microbes created the Earth's oxygen-rich atmosphere, and enable plant and ocean life to thrive. While today's tools can tell us a great deal about the molecules in microbial communities, they cannot explain the function of these molecules and how they enable microorganisms to work together. Only with that level of understanding will scientists be able to harness microbiomes to improve human health and the environment. Recently, The Kavli Foundation hosted a Google+ Hangout about the potential of nature's microbiomes and how we can tap into it. The participants included: Janet Jansson is chief scientist of biology in the Earth and Biological Sciences Directorate at Pacific Northwest National Laboratory (PNNL) and sector lead for PNNL research in the Department of Energy's Biological Systems Science Division. She coordinates two of PNNL's biology programs: the Microbiomes in Transition (MinT) initiative to study how climate and environmental changes impact natural and human microbiomes and the DOE Foundational Scientific Focus Area, Principles of Microbial Community Design. Rob Knight is the founder of the American Gut Project, an open-access project to survey the digestive system's microbiome and its effect on human health and development. He holds appointments at the University of California, San Diego School of Medicine and Department of Computer Science and Engineering, where he develops bioinformatics systems to classify and interpret large sets of biological data. Jeff F. Miller is director of the California NanoSystems Institute, a multidisciplinary research organization, and the corresponding author of the consortium's Science paper. Based at the University of California, Los Angeles, Miller holds the Fred Kavli Chair in NanoSystems Sciences and is a professor of Microbiology, Immunology & Molecular Genetics. Below is a modified transcript of the discussion. Edits and changes have been made by the participants to clarify spoken comments recorded during the live webcast. The Kavli Foundation: So, let's start with a question. There's been a Cambrian Explosion in microbiome research. Ten years ago, microbiomes were hardly on the map. Last year, 25,000 papers contained the term. Why is this happening now?
Is it just because we can read microbial DNA, or are other technologies making this possible? Jeff Miller: There are a lot of factors that came together to cause this explosion of interest. One, certainly, is the ability to rapidly sequence DNA. And over the past 10 years or so, we've seen a progression of technologies that allow us to characterize microbial communities with increasing resolution and sophistication. But we've also encountered many bottlenecks along the way. And interpreting this massive amount of sequence data is one of those bottlenecks. Rob Knight: I agree. I think it's really the combination of the DNA sequencing tools getting much cheaper, and the computational tools, including the toolkits that we developed, that make the information much more accessible to a broad community of users. I think what we will see in the future are tools that will go beyond taking inventories of species or inventories of genes and instead provide much more insight about how these species and genes function. But that is going to require a whole lot of additional development of both the software and the knowledge base to use that software. TKF: Janet, do you have any additional thoughts on that? Janet Jansson: Yes. With DNA sequencing we get information about the composition of microbiomes, but it's also interesting to know what those microbes are doing. For example, if we could understand their protein or metabolite composition, we could get a better understanding of what they're doing in different kinds of habitats and inside our bodies. There are a lot of developments in these areas, but those tools are still lagging behind the sequencing technologies. TKF: So, do we need a major program, a Unified Microbiome Initiative, to develop these capabilities? Couldn't we build on existing technologies or do we need to invent radically new types of science? Miller: The likely answer is, "both." There's certainly a lot of room for incremental advances leading to better sequencing technology and the like. But we also need some quantum leaps at the same time. The field has progressed rapidly. But we've reached a plateau that has to do with the limitations of the current technologies. We need to be able to see microbial communities where they live, in real time. We want to know what they are doing. What genes are they expressing? What proteins are they making? What metabolites are they synthesizing? How are they responding to each other and their environments? Then we need to be able to take all this data and interpret it in a way that allows us to ask questions and formulate new hypotheses that we can test and falsify or prove correct. These are really tall orders. They're going to require not only new technologies, but also the input of collaborators in engineering, physics, and chemistry, as well as the life sciences, environmental sciences, computer sciences, and more. TKF: I'm curious about the computer science side of it. Rob, you have a joint appointment at UC San Diego's medical school and computer science department. Is it such a tall order? I mean, we have big data. Are we going to need something more? Knight: Well, the issue is that big data and magic are not quite the same thing. There are a lot of advances that need to happen on the algorithm side. In general, machine learning and generic algorithms will give you a good, but not ideal, answer to a particular scientific question.
And the more information you can put in at the beginning to tailor those algorithms to your specific problem, the better you'll do. The other thing is that although we're producing a tremendous amount of data, we're still limited by the amount of data—it's still not enough—and by our ability to interpret it. The problem a lot of people are facing right now is that they have collected so much microbial community information. They have over a thousand species that they don't understand. They are listing a million genes they don't understand. Then they are going on to measure other types of molecules using metatranscriptomics or metaproteomics or metabolomics where, again, they create very large inventories that they also don't understand. But even with all that data, we're still limited by the number of samples, and by our ability to annotate and understand those entities. There's a huge role for both existing algorithms that can be applied more effectively as we get more data, and for fundamentally new algorithms as well as new ways of computing that radically change how we think about computation itself. TKF: Part of the challenge is that we need a better way to get closer to the inhabitants of the metaphorical city I mentioned earlier. It is as if we are looking at that city from space and trying to figure out people's roles when we cannot even see these individuals, isn't it? Knight: It's a little worse than that. You're flying out there in your UFO, and you just take a big chunk of that city, grind it up, look at all the DNA and chemicals, and try to make sense of it. That can be an effective or ineffective way to understand the city. You will get an understanding of some of the chemical processes that are going on, and some of the genes that are expressed. But you're not going to learn a lot about the sociology or how those organisms communicate. Jansson: Yes, and another way to tackle that problem is to use simpler model communities. That way, if we don't have the instruments and data tools to deal with these highly complex communities, at least we have a model community that will let us study specific interactions. TKF: In other words, it's easier to study something much simpler? Jansson: Yes, at least for now. Full communities are some of the most diverse types of habitats for microorganisms on earth. We get so much data that we're not limited by the amount of data we produce, but by our ability to process the data. Even with supercomputers, it can take weeks, if not months, to just run all of that data through our computers. Knight: With all due respect, I think we're still data limited because we don't have enough samples. So, it's as if we had, say, five photos, and we're taking them at higher and higher resolution. That generates a lot of data, but not enough to create a movie. What we really need is, say, 100,000 frames. And no matter how much more information we get out of the small number of frames we have, we'll never be able to put that movie together. So, that's a lot of what we're facing. Right now, it's so expensive to process each sample, it's really difficult to get enough samples. This is really why we need to be able to read out microbes much, much faster and much, much cheaper. And we also need to use higher and higher resolution techniques to get that full movie of how the interactions are taking place. Jansson: I agree we need more samples. But even then, it's very difficult to process the information from one sample. Miller: Right.
In fact, we know the functions of only about half of the genes that we detect in these communities. And of the half we think we know, the amount of mis-annotation and improper context annotation is also significant. So we're trying to put a puzzle together with only some of the pieces. And if you look at small molecules, this situation is even worse. About two percent of the metabolites that are found in the typical microbial community map to known structures. And only a fraction of those two percent are on known biochemical pathways. So we need more information. TKF: Those metabolites are involved in bacterial digestion. Are they how bacteria communicate with one another? Miller: Yes, that's how they communicate, and how they acquire energy. Metabolites are the waste products microbes release, and the small molecules they use to compete with other microbes and interact with their environments. And many other things that have yet to be discovered. These small molecules are the language of microbial communities. TKF: Getting a handle on all this sounds like an imposing research project. But suppose you had these tools today. What is it that you'd like to study? Jeff, you study the evolution of bacteria that cause disease. What would you do with those tools? Miller: Boy, great question. I think one area that is prime for progress — and some progress has been made already — is the idea of taking a community that may be somewhat robust but not really optimal for its environment or host and engineering it so that it has more beneficial properties and fewer non-beneficial properties. Doing that really requires an understanding of the ecological principles that govern the community's composition, robustness, response to changes, etcetera. So, being able to reprogram microbial communities is really one of our ultimate goals. There are various steps along that pathway that one can imagine. But we're just at the very early stages of being able to do that. So if I were to choose one thing to study, it would be to understand how microbial communities are constructed well enough to enable predictive, reliable reengineering of those communities in order to optimize their functions. TKF: Very interesting. Janet, I know you collaborate on human microbiome work. But you've also developed a reputation for investigating how environmental changes affect microbiomes in the Alaskan permafrost and in the Gulf of Mexico. What types of things have you learned and what would new tools tell you that you don't already know? Jansson: For environmental studies, we want to understand how events, such as the Deepwater Horizon oil spill in the Gulf or the thawing of permafrost due to global warming in Alaska, are impacting microbes and the processes that they carry out in those systems. With the Gulf oil spill, we had organisms that were enriched during the spill, and that were able to degrade oil. So that was interesting, from that perspective. In the permafrost, we have a huge reserve of carbon that is currently trapped in that environment. So what happens to that carbon as the permafrost thaws and the microorganisms start to become active and degrade the carbon? Are they going to release a lot more carbon dioxide into the atmosphere and make the global warming process worse? At a very fundamental level, we need to understand what those microorganisms are doing. TKF: Very good. I'd like to move to some listener questions.
You know, microbiomes are suddenly in the news, and several listeners want to know about products that promise to improve our health and cure certain conditions by altering our microbiomes. Rob, you've been studying the American gut for a while now. Do we know enough about microbiomes for anybody to make that kind of a claim? Knight: Yes, but so far, that's limited to just a very small number of people. For example, there was a really nice paper in Cell by Eran Segal and Eran Elinav of Israel's Weizmann Institute of Science. It showed that based on your microbiome, you can predict what foods will have good or bad impacts on your blood sugar. The drawback, so far, is that they can only do that in the Israeli population, where the food item inventory is somewhat different from what you would see in the United States, for example. But that technology is on the horizon and improving very rapidly. As far as probiotics go, there's not a lot of evidence that probiotics improve general health in humans, though there is some intriguing data in mice. On the other hand, there are a fair number of probiotics that have been clinically studied in well-conducted randomized controlled trials. For a number of conditions, like irritable bowel syndrome, post-antibiotic diarrhea, and so forth, there are particular probiotics on the market that have been clinically validated. However, it's kind of like drugs, where certain probiotics are good for particular conditions, but not something that you should take generally. And in the same way that you would expect for drugs, most people don't need to take most probiotics most of the time, or at least not the ones that have been studied so far. So, I think it's fair to say that public enthusiasm is greatly outstripping the actual evidence. But there is some evidence underlying that enthusiasm. TKF: Jeff, what about the future? Are we going to be able to cure diseases? Will I be able to speed up my microbiome's metabolism so I can eat ice cream and never gain an ounce? Miller: When you look at the probiotics that are out there, they date way back. They have their origins in food production, fermentation, cheese making, and other processes. So the question is, do they have a health benefit or not? And the results are often equivocal. But that's very different than looking at what we know now, and asking, okay, how would you engineer or reengineer this system? Would a small consortium of bacteria be a good way to decrease fatty tissue and increase muscle mass with diet? So, as Rob said, we haven't yet gotten to the point where we have applied our modern understanding of microbiomes to probiotics now in the marketplace. But the potential for doing that is definitely there. So, to answer your question, it could cure infectious diseases. A great example is Clostridium difficile-induced diarrhea, which typically follows antibiotic treatment. The best cure that we know is fecal microbiome transplantation from a healthy donor. It is about 90 percent effective, so we know it can work. It's very crude, and so the question now is how to make it better through more refined science, rather than hit-and-miss empirical testing. Knight: It's important to remember that this is not just for the future. There are people walking around, alive now, who would be dead had they not received fecal microbiome transplants. This is really a current technology that works and is being clinically applied now. And what we need to do is to refine it.
But it's not something that's in the future, it's something that's here today. TKF: This opens up some very interesting questions. One of the things we've discovered about the human microbiome is that it influences all kinds of things, from brain development and obesity to behavior. These are the very things that define who we are. Now we're talking about possibly synthesizing artificial microbiomes. This raises some ethical issues, doesn't it? Miller: Definitely. Ethics is a huge, huge area. "Do no harm" is the first principle, whether we're talking about permafrost, agriculture or the human gastrointestinal tract. And so, the requirements for reengineering microbiota to use as a drug have got to be stringent and carefully controlled. And safety, obviously, is going to be the first issue. But it's complicated, because these are dynamic systems. And the question is, how long will any changes last? What else would change as a result of making these perturbations? So we need to understand a great deal more before we try to engineer and manipulate at a large scale. TKF: Janet, you study ecology. Could you imagine a large scale ecological intervention using microbiomes? Jansson: Before I address that, I just want to go back to our earlier discussion about probiotics. In addition to changing our microbiome, we can also influence it through the food we put into it. This is also a strategy that is sometimes successful, though not very well understood. Instead of a probiotic, it's called a prebiotic. For example, you can eat what is called a resistant carbohydrate or starch, which is not easy to digest. So it makes it to your intestine relatively intact. This allows the microorganisms in your gut to consume and ferment it, and that's beneficial for colonic health. As for actually manipulating an ecosystem on a large scale, this is, of course, difficult. There have been people who have talked about fertilizing the oceans by adding iron, to buffer or mitigate the impact of increasing CO2 concentrations. But when it comes to permafrost, how do you prevent the degradation of the carbon that is trapped there? That is difficult. But by gaining knowledge about the types of organisms that are there and the ones that become active when the permafrost does begin to thaw, we can at least predict the implications of those changes. Knight: Just to build on what Janet said, it's important to remember that we've already radically reengineered, through agriculture, both soil and human microbiomes over most of the planet. We've brought them into states that have no precedent in nature. The issue is that we didn't understand at all what we were doing or what our impacts on those microbiomes were. So, it's not that we can't change them. We are already changing them. And have already changed them. The question's more, "Can we change them in a more nuanced and directed way, where we have a better understanding of the ways that we can change them, at the microbiome level as opposed to the industrial or occupational level?" TKF: We've talked about microbiomes impacting our development and behavior. These are the things that determine our personality. For a long time, researchers thought that our genetic makeup determined these things. Do we understand the interaction between the microbiome and the genome? Janet, you're shaking your head, so why don't you start. Jansson: I can tell you that this is a real hot area of research right now.
My group and several other groups are trying to establish the link between the host's genome and the microbiome. I can say that preliminary evidence – there have been a few publications, mainly looking at mouse models – suggests that there is a link. Rob's taken a more historical perspective, looking at different types of human populations and the impact of ancestral lifestyles on microbiomes. Rob, maybe you want to comment on that? Knight: Yes. We know that both in mice and in humans, lifestyle behaviors, like diet and hygiene especially, have had a much larger impact than host genetics. This is true, even though host genetics still has a highly statistically significant impact on particular features of the microbiome, including, interestingly, features that are associated with obesity in humans. Miller: To add one thing to what Rob said, we've coevolved with our microbial communities since long before we became Homo sapiens. We have only about a dozen genes in our genome to digest complex carbohydrates. The microbiota in our gastrointestinal tract brings hundreds of genes that do that for us. And so, when we eat a healthy high fiber diet, what we're really doing is relying on these microbial consortia to digest that food for us, so that we can take some of the products and use them for energy and other purposes. TKF: So, as one listener asks, maybe it's not such a great idea to use bactericides to kill microbes on every surface in our homes? Miller: Not a good idea for a lot of reasons. Rob, you shook your head, so I'll let you start. Knight: Well, it's bad for so many reasons. Both in terms of increasing antimicrobial-resistant bacteria, because the bacteria that survive your attempts to kill them can then spread those resistance genes to other bacteria that infect us directly. And also because there's evidence, increasingly, that keeping your house too clean increases the risk of autoimmune diseases, especially in children. TKF: We're drawing to the end of our discussion, so I want to ask you a final question. You know, our understanding of the microbiome has changed dramatically over the past 10 or 15 years. Tell me, what has surprised you most about what you have discovered? Janet, why don't we start with you? Jansson: I think the thing that has surprised me the most is the importance of the microbiome with respect to our health, in so many different ways. This was something that was not known at all just a decade ago. And so that's what I'll say. Knight: Links between the microbiome and behavior. A decade ago we had hints that the microbiome was linked to health. But no one predicted, at all, that it would have a key role in behavior, especially in mammals. Miller: Diversity. Microbes – whether you're studying pathogens, beneficial microbes, or microbes in any context – are enormously diverse. The concept of a species has to be reconsidered when you're talking about microbes, because they're not only diverse, but constantly exchanging genetic information. They are truly a constantly moving target, and the extent of their functional diversity is mind-boggling. TKF: Excellent. This is certainly an exciting time for microbial research. And I didn't even get to ask the best question, which is, "How does the microbiome in our gut determine our behavior?" Knight: We don't know how it happens, and that's why we need a Unified Microbiome Initiative.


News Article | August 23, 2016
Site: www.greencarcongress.com

With funding from the Bioenergy Technologies Office (BETO), Pacific Northwest National Laboratory (PNNL) has been working with industry partner LanzaTech to convert alcohols derived from captured carbon monoxide, a byproduct in the production of steel, into synthetic paraffinic kerosene, a non-fossil-based jet fuel. The technology not only provides a viable source of sustainable jet fuel but also reduces the amount of greenhouse gases emitted into the atmosphere. The team recently reached a significant milestone on the project, producing over five gallons of synthetic paraffinic kerosene in a lab environment. Five gallons is the quantity needed for "fit-for-purpose" testing. The accomplishment is part of the technology transfer process moving from the bench top at PNNL to piloting at Freedom Pines, Georgia. With tight ASTM fuel specification requirements, both the demonstration of the process and the fit-for-purpose testing need to take place before the technology can be adopted commercially. The process works in two stages. First, LanzaTech captures waste carbon from refineries and manufacturing plants and feeds the CO-rich gas to microbes that consume the gas and produce ethanol. By itself, ethanol is unsuitable for jet engines because it contains oxygen and just doesn't have the energy needed. Once the oxygen is removed, the resulting molecule, ethylene, is a gas and still doesn't have the properties needed for modern turbofan engines. Consequently, during the second stage of the process, the ethanol is run through a PNNL-developed catalyst that converts it to jet fuel by removing the oxygen and combining hydrocarbons, a process known as dehydration-oligomerization. However, ethanol dehydration-oligomerization technologies are typically difficult to control. To overcome the challenge, PNNL borrowed technology it developed to convert methanol to gasoline and created a new, specialized catalyst. The catalyst first removes water from the ethanol (dehydration), leaving behind ethylene. The small ethylene hydrocarbons are then combined (oligomerization) to form hydrocarbon chains large enough for jet fuel without forming aromatics that lead to sooting when burned. The fuel meets all the specifications required for use in commercial aviation—not an easy thing to do. Producing 5 gallons of jet fuel at PNNL's Bioproducts, Sciences, and Engineering Laboratory demonstrates the catalyst is ready for the intermediate stage in testing. Researchers will now seek to produce 2,000 gallons of jet fuel. As PNNL successfully meets milestones, LanzaTech will scale up further. In 2011, PNNL, LanzaTech and Imperium jointly responded to a DOE call for proposals for a new aviation biofuel research project to convert biomass-derived alcohols to drop-in renewable jet fuel. Also in 2011, LanzaTech received a US$3-million contract from the United States Federal Aviation Administration (FAA), through the Department of Transportation's John A. Volpe Center, to accelerate commercial availability of alcohol-to-jet (ATJ) renewable drop-in aviation fuel. (Earlier post.) LanzaTech has also been working with the UK's largest bank, HSBC, and Virgin Atlantic on the development of the process that captures waste gases from industrial steel production and ferments them to ethanol, which is then chemically converted for use as jet fuel. (Earlier post.)
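The two catalytic steps described above can be written as overall reactions. This is a sketch: the jet-range chain lengths shown are typical targets for synthetic paraffinic kerosene rather than figures from PNNL, and the finishing hydrogenation of olefins to paraffins, standard in alcohol-to-jet processes, is implied rather than described in the article.

\[
\mathrm{C_2H_5OH} \;\xrightarrow{\text{dehydration}}\; \mathrm{C_2H_4} + \mathrm{H_2O}
\]
\[
n\,\mathrm{C_2H_4} \;\xrightarrow{\text{oligomerization}}\; \mathrm{C_{2n}H_{4n}} \qquad (2n \approx 8\text{–}16,\ \text{jet range})
\]

Controlling the second step is the hard part the article alludes to: the catalyst must stop chain growth in the kerosene range while avoiding the aromatics that cause sooting.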


News Article | December 2, 2015
Site: phys.org

A group of 48 biologists and physical scientists from 50 institutions is aiming to change that. In the October 30 issue of the journal Science, they called for an ambitious research effort to understand and harness microbiomes. Such research could lead to advances in fields as diverse as medicine, child development, agricultural productivity, and climate modeling. Now, in a far-ranging roundtable discussion, three of the paper's co-authors explain to The Kavli Foundation why this is the time to launch a major national effort—the Unified Microbiome Initiative—to study the planet's least understood ecosystems. "In the past, we did not fully understand the complexity and richness of microbiomes, and we were limited because we could not grow the majority of bacteria in a lab, and so they were hard to study," explains Janet Jansson. Jansson is Chief Scientist of Biology in the Earth and Biological Sciences Directorate at Pacific Northwest National Laboratory (PNNL) and sector lead for PNNL research in the Department of Energy's Biological Systems Science Division. Over the past 15 years, however, the cost of genome sequencing has fallen by a factor of 1 million. This made it possible for the first time to survey the richness of microbial communities. The research has led to the discovery of hundreds of new and unexpected bacterial phyla, which are large groupings of related life forms. New genomic tools have vastly expanded our understanding of the role microbiomes play in our lives, Rob Knight explains. He is a founder of the American Gut Project and holds joint appointments at the University of California, San Diego, School of Medicine and Department of Computer Science and Engineering. "Remember, 10 years ago, microbes hadn't been linked to any of the things we now know they're involved in, such as obesity, allergies, depression and brain development. While the links between the microbiome and metabolism have certainly been very surprising, what surprised me the most has been the links between the microbiome and behavior. This was not even on the radar 10 years ago," Knight says. The Unified Microbiome Initiative calls for developing a new generation of scientific instruments that would let researchers study how microorganisms within a community interact with one another and their environment. This knowledge would make it possible to manipulate microbiomes to improve healthcare, agriculture and the environment. The impact of microbiome research on human health alone could be profound. Over the past 10 years, researchers have discovered that the composition of the gut microbiome - which contains 10 times more cells and 100 times more genetic information than the human body - can determine how medicines are metabolized. And physicians have demonstrated that transplanting healthy microbiomes into patients afflicted with colitis caused by Clostridium difficile, an inflammation of the lining of the colon, is three to four times more effective than antibiotics. "This is actually the first proof of principle that we can manipulate microbiomes in a very deliberate way to treat a serious human disease," says Jeff Miller, lead author of the Science paper. Miller is director of the California NanoSystems Institute, holds the Fred Kavli Chair in NanoSystems Sciences, and is a professor of Microbiology, Immunology & Molecular Genetics at the University of California, Los Angeles. Microbiomes also play an important role in global ecosystems.
Jansson, for example, studies how microbiomes behave as Arctic permafrost thaws. She hopes to learn how this will affect the metabolism of microbiomes that cycle carbon so scientists can model the impact on climate change. Microbiomes also play an important role in agriculture, where they provide plants with essential nutrients and even enzymes. However, the ability to engineer microbiomes also raises some flags. "Whenever we manipulate something in an animal or a human being, we need to consider the ethical issues. But the idea of potentially engineering Earth's microbial ecosystems raises very legitimate questions," Miller says. More information: Read the full conversation with Janet Jansson, Rob Knight and Jeff Miller at: www.kavlifoundation.org/science-spotlights/why-its-time-map-microbiome


News Article | October 28, 2016
Site: www.sciencemag.org

When world leaders reached a deal last month in Kigali to curb the use of hydrofluorocarbons (HFCs)—planet-warming chemicals widely used in air conditioners and refrigerators—many boasted the move would prevent nearly 0.5°C in warming by 2100. That is a big number, given that the Paris climate agreement aims to keep total warming to less than 2°C. If the HFC number is correct, it will make it easier for nations to achieve the Paris goal. But there's a bit more scientific uncertainty surrounding that half-degree claim than the politicians let on. The figure has its origins in a 2006 dinner held by five scientists in a village in the Swiss Alps. The U.S. and European researchers, who work for government and industry, were part of a group that advises policymakers on the Montreal Protocol, the 1987 pact that curbed the use of chemicals that harm the ozone layer. The researchers found that the protocol also helped reduce global warming, because some of the regulated chemicals were potent greenhouse gases. But they realized the pact had a warming downside, too, says David Fahey, a physicist at the National Oceanic and Atmospheric Administration's Earth System Research Laboratory in Boulder, Colorado. That's because some of the newer, ozone-friendlier chemicals that the protocol thrust into use, such as HFCs, trap heat thousands of times more effectively than carbon dioxide. Soon, the researchers were trying to figure out what that meant for the planet. The half-degree estimate was first floated in a 2013 paper co-authored by one of the dinner guests, physicist Guus Velders of the National Institute for Public Health and the Environment in Bilthoven, the Netherlands. The study forecast that rising HFC use in the developing world would push global temperatures up by 0.35°C to 0.5°C by 2100. Those numbers caused a stir, because they were substantially higher than HFC warming forecasts made by other climate models, including those underpinning the massive reports of the Intergovernmental Panel on Climate Change (IPCC). Ultimately, however, they helped galvanize support for the Kigali agreement, which aims to cut HFC use by 80% to 85% by 2047. And advocates and negotiators tended to cite the higher, 0.5°C estimate in their public remarks. That didn't sit well with Andrew Jones, co-director of Climate Interactive, a prominent climate analysis group. A day after nations announced the 15 October Kigali deal, Jones wrote a blog post hailing it as "excellent news for the climate." But he cautioned against counting on the full 0.5°C benefit. One reason, he wrote, is that he considers the 2013 paper to be an outlier, because it projects HFC warming that is roughly four times greater than that projected by a model cited by IPCC. "I'm not really buying it," says Jones, who is in Asheville, North Carolina. Velders says his team came up with higher warming estimates than IPCC because their model accounts for trends that others don't, such as the faster-than-expected adoption of HFCs driven by the Montreal Protocol, and an air-conditioning boom in the developing world. Still, he concedes that forecasting HFC use is difficult. If warming prompts greater demand for air conditioners in India, for instance, future HFC impacts could be even greater. His team was careful to clarify the uncertainty, he notes, by presenting a range of forecasts, with 0.5°C at the high end.
More sophisticated models that offer a range of possible futures, such as different patterns of economic growth, could improve such estimates, says climate modeling specialist Steven Smith of the Department of Energy's Pacific Northwest National Laboratory, who is based in College Park, Maryland. "No slight to their work," he says of the Velders group's HFC projection. "They've clearly done the best work on this to date." But although Velders and other scientists routinely acknowledge the uncertainty in their forecasts, "that's not what politicians do," says Durwood Zaelke, president of the Institute for Governance & Sustainable Development in Washington, D.C., which backed aggressive HFC reductions. For Velders, the Kigali agreement, and the role his team's work played in it, is a point of pride. The half-degree estimate was the offshoot of a series of papers over a decade by the scientists, research that proved pivotal in the evolving understanding of the ties between HFCs and warming. "[With the Kigali agreement], of course it feels great that people are using your work," he says. Now, Velders is offering a single new number: 0.06°C. That is his new estimate of how much warming HFCs will cause by 2100 if the Kigali deal hits its targets.


News Article | December 12, 2016
Site: www.eurekalert.org

Environmentally friendly material tweaked to soak up to 5 times its weight in oil, float for months in icy, rough waters SEQUIM, Wash. - Lowly sawdust, the sawmill waste that's sometimes tossed onto home garage floors to soak up oil spilled by amateur mechanics, could receive some new-found respect thanks to science. Researchers at the Department of Energy's Pacific Northwest National Laboratory have chemically modified sawdust to make it exceptionally oil-attracting and buoyant, characteristics that are ideal for cleaning oil spills in the icy, turbulent waters of the Arctic. The nontoxic material absorbs up to five times its weight in oil and stays afloat for at least four months. "Most of today's oil remediation materials are designed for warm water use," said PNNL microbiologist George Bonheyo, who leads the modified sawdust's development from PNNL's Marine Sciences Laboratory. "But as ice retreats in the Arctic Sea, fossil fuel developers are looking north, and we need new oil spill response methods that perform well in extreme conditions," added Bonheyo, who also holds a joint appointment in bioengineering with Washington State University. "The chance of an oil spill in the Arctic is real," said fellow PNNL microbiologist Robert Jeters, who is also part of the project. "We hope materials like our modified sawdust can help if an accident happens." Containing oil spills in cold waters is especially tricky, as bobbing ice chunks push oil below the water's surface, making it difficult to collect. The same goes for rough waters, whose tall, clashing waves disperse oil. The modified sawdust pulls double duty. Beyond absorbing oil, it also enhances another approach to combating oil spills - controlled burns. If changing weather or tides move spilled oil toward a sensitive area fast, oil can be burned before it can cause further harm. Called in-situ burning, the practice can significantly reduce the amount of oil in water and minimize its adverse environmental effects. Bonheyo and his team looked to develop an environmentally friendly and inexpensive material that floats despite rough or freezing waters and can support in-situ burning. Not wanting to create more pollution if emergency responders can't retrieve oil cleanup material, Bonheyo's team considered other natural ingredients like rice hulls and silica. But they ultimately found their winner in a fine dust called wood flour. A woodworking byproduct, wood flour is often used to make wood composites. To make the dust into a thirsty oil mop, researchers chemically attach components of vegetable oil onto the material's surface. These attachments make the modified material oil-grabbing and water-shunning. The final product is a light, fluffy, bleached powder. The team is also experimenting with adding tiny, oil-eating microbes - fungi and bacteria - to the powder's surface so any left-behind material could naturally break down oil over time. Applying the modified sawdust is simple: sprinkle a thin layer over oil on the water's surface. The material immediately starts soaking up oil, creating a concentrated and solid slick that stays afloat thanks to the material's buoyant nature. The oil-soaked material can either be burned or retrieved. The team is using PNNL's unique Arctic simulation lab in Sequim, Washington to evaluate the material in icy waters. The facility is a customized shipping container that cools down to as low as 5 degrees Fahrenheit, which prompts researchers to don snowmobile suits and ski masks while they work. 
Ice slush forms on the surface of water that circulates inside a 290-gallon raceway pond placed inside the bitterly cold lab space. Oil is spilled on the slushy surface, followed by a sprinkle of modified sawdust. Tests have shown the material's water-repellent nature prevents ice from forming on it, allowing it to soak up oil and remain at the surface. Researchers are also testing how well the material performs in controlled burns. They conducted initial burns this fall at the U.S. Coast Guard and Naval Research Laboratory's Joint Maritime Test Facility near Mobile, Alabama. Burn tests continue today at PNNL's Marine Science Laboratory. Early results indicate a small amount of material enables burning of both thin and thick layers of spilled oil. In the coming months, PNNL will further evaluate the modified sawdust. The material will need additional testing and approval by multiple agencies before it can be used at actual oil spills. PNNL is developing the material for the Department of the Interior's Bureau of Safety and Environmental Enforcement. BSEE is the lead federal agency charged with improving safety and ensuring environmental protection related to the offshore energy industry, primarily oil and natural gas on the U.S. Outer Continental Shelf. Interdisciplinary teams at Pacific Northwest National Laboratory address many of America's most pressing issues in energy, the environment and national security through advances in basic and applied science. Founded in 1965, PNNL employs 4,400 staff and has an annual budget of nearly $1 billion. It is managed by Battelle for the U.S. Department of Energy's Office of Science. As the single largest supporter of basic research in the physical sciences in the United States, the Office of Science is working to address some of the most pressing challenges of our time.
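As a rough back-of-the-envelope illustration of what the reported absorption capacity implies for deployment, the sketch below estimates the sorbent mass needed for a given spill; the spill size, and the assumption that the full five-to-one capacity is achieved, are hypothetical.

```python
# Rough sorbent estimate based on the reported capacity of up to
# five times the material's weight in oil (upper bound, hypothetical use).
ABSORPTION_RATIO = 5.0  # kg of oil absorbed per kg of modified sawdust

def sorbent_needed(oil_mass_kg: float, ratio: float = ABSORPTION_RATIO) -> float:
    """Minimum sorbent mass (kg) to soak up a spill of the given size."""
    return oil_mass_kg / ratio

# Hypothetical example: a 10-metric-ton spill would call for at least
# 2 metric tons of material at the upper-bound capacity.
print(f"{sorbent_needed(10_000):.0f} kg of modified sawdust")
```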


News Article | April 28, 2016
Site: www.materialstoday.com

A new technique for making electrodes can produce hybrid batteries that charge faster and possess significantly better electrical capacity and long-term stability, say researchers from the US Department of Energy's Pacific Northwest National Laboratory (PNNL) and the University of Oregon. Known as ion soft-landing, the high-precision technique resulted in electrodes that could store a third more energy and had twice the lifespan compared to those prepared by a conventional method, the researchers report in Nature Communications. Straightforward to set up, the technique could eventually lead to cheaper, more powerful, longer-lasting rechargeable batteries. "This is the first time anyone has been able to put together a functioning battery using ion soft-landing," said PNNL chemist and laboratory fellow Julia Laskin. With ion soft-landing, the researchers were able to build an electrode surface very specifically, containing only the most desirable molecules out of a complex mixture of raw components. "It will help us unravel important scientific questions about this energy storage technology, a hybrid between common lithium rechargeable batteries and supercapacitors that have very high energy density," said lead author, PNNL chemist Venkateshkumar Prabhakaran. Although lithium-ion rechargeable batteries are the go-to technology for small electronic devices, they release their energy slowly. This is why hybrid electric vehicles use gasoline for accelerating and take a long time to recharge, making electric vehicles slower to 'fill' than their gas-powered cousins. One possible solution is a hybrid battery that combines a lithium battery's ability to hold a lot of charge for its size with a fast-charging supercapacitor. PNNL chemists wanted to know if they could make superior hybrid battery materials with ion soft-landing. To find out, Laskin and her colleagues created hybrid electrodes by spraying a chemical known as polyoxometalate (POM) onto supercapacitor electrodes made of carbon nanotubes. Off-the-shelf POM contains both positively- and negatively-charged ions, but only the negative ions are needed in hybrid electrodes. Limited by its design, the conventional preparation technique sprays both positive and negative ions onto the carbon nanotubes. Ion soft-landing, however, is able to separate the charged ions and only set down the negative ions on the electrode surface. The question was whether this ability would yield a better hybrid battery. To test this, the team made centimeter-sized square hybrid batteries comprising a specially-developed ionic liquid membrane sandwiched between POM-carbon nanotube electrodes. "We had to design a membrane that separated the electrodes and also served as the battery's electrolyte, which allows conduction of ions," said Prabhakaran. "Most people know electrolytes as the liquid sloshing around within a car battery. Ours was a solid gel." They made versions of this hybrid battery containing electrodes made with a conventional technique known as electrospray deposition and with ion soft-landing. They then tested how much energy these mini-hybrid batteries could hold and how many cycles of charging and discharging they could handle before petering out. The team found that the POM hybrid electrodes made with soft-landing had a superior energy storage capacity. They could hold a third more energy than the carbon nanotube supercapacitors by themselves, which were included as a minimum performance benchmark. 
And soft-landing hybrids could hold about 27% more energy than conventionally-made electrospray-deposited electrodes. To make sure they were using the optimal amount of POM, the researchers tested the energy storage capacity of hybrid electrodes containing different amounts. Soft-landing produced the highest capacity overall using the lowest amount of POM, indicating that the electrodes used the active material extremely efficiently. In comparison, conventional, sodium-based POM electrodes required twice as much POM material to reach their highest capacity. Although the conventionally-made devices used more POM, the team couldn't count them out yet, because they might have a longer lifespan than electrodes produced by soft-landing. To test that, the team charged and discharged the hybrids 1000 times and measured how long they lasted. As they did in the previous tests, the soft-landing-based devices performed the best, losing only a few percent capacity after 1000 cycles. The naked supercapacitors came in second, and the sodium-based, conventionally-made devices lost about double the capacity of the soft-landing devices. This suggests that the soft-landing method has the potential to double the lifespan of these types of hybrid batteries. The team was surprised that it took so little of the POM material to make such a big difference to the carbon nanotube supercapacitors. By weight, the amount of POM made up just one-fifth of a percent of the amount of carbon nanotube material. "The fact that the capacitance reaches a maximum with so little POM, and then drops off with more, is remarkable," said Laskin. "We didn't expect such a small amount of POM to be making such a large contribution to the capacitance." The researchers decided to examine the structure of the electrodes using powerful microscopes in the Environmental Molecular Sciences Laboratory (EMSL) at PNNL. They compared soft-landing with the conventionally-made, sodium-POM electrodes. This revealed that soft-landing created small discrete clusters of POM dotting the carbon nanotubes. In contrast, the conventional method resulted in larger clumps of POM clusters swamping out the nanotubes; these aggregates were up to 10 times the size of those made by soft-landing. This result suggested to the researchers that removing the positive ions from the POM starting material allowed the negative ions to disperse evenly over the surface. If they weren’t removed, the positive ions appeared to cause the POM and sodium to reform the crystalline material and aggregate on the surface. This prevented much of the POM from doing its job in the battery, thereby reducing capacity. When the team zoomed out a little and viewed the nanotubes from above, the conventionally-made electrodes were covered in large aggregates of POM. The soft-landed electrodes, however, were remarkably indistinguishable from the naked carbon nanotube supercapacitors. In future research, the team wants to explore how to get the carbon materials to accept more POM, which might increase capacity and lifespan even further. This story is adapted from material from the Pacific Northwest National Laboratory, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier. Link to original source.
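To keep the reported comparisons straight, this short sketch normalizes the capacities quoted above to the bare supercapacitor baseline; the baseline value of 1.0 is arbitrary, and only the relative figures come from the article.

```python
# Relative energy-storage comparison assembled from the reported results.
baseline = 1.0                      # bare carbon-nanotube supercapacitor (arbitrary units)
soft_landing = baseline * 4 / 3     # held "a third more energy" than the baseline
electrospray = soft_landing / 1.27  # soft-landing held ~27% more than electrospray

for name, capacity in [("supercapacitor", baseline),
                       ("soft-landing POM", soft_landing),
                       ("electrospray POM", electrospray)]:
    print(f"{name:>17}: {capacity:.2f}")
```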


News Article | December 27, 2016
Site: www.materialstoday.com

Researchers at the US Department of Energy (DOE)'s Pacific Northwest National Laboratory (PNNL) have chemically-modified sawdust to make it exceptionally oil-attracting and buoyant, characteristics that are ideal for cleaning up oil spills in the icy, turbulent waters of the Arctic. The nontoxic material can absorb up to five times its weight in oil and stays afloat for at least four months. "Most of today's oil remediation materials are designed for warm water use," said PNNL microbiologist George Bonheyo, who leads the modified sawdust's development from PNNL's Marine Sciences Laboratory and also holds a joint appointment in bioengineering at Washington State University. "But as ice retreats in the Arctic Sea, fossil fuel developers are looking north, and we need new oil spill response methods that perform well in extreme conditions." "The chance of an oil spill in the Arctic is real," said fellow PNNL microbiologist Robert Jeters, who is also taking part in the project. "We hope materials like our modified sawdust can help if an accident happens." Containing oil spills in cold waters is especially tricky, as bobbing ice chunks can push oil below the water's surface, making it difficult to collect. The same goes for rough waters, where tall, clashing waves disperse oil. The modified sawdust pulls double duty. As well as absorbing oil, it also enhances another approach to combating oil spills – controlled burns. If changing weather or tides move spilled oil towards a sensitive area fast, the oil can be burned before it causes further harm. Called in-situ burning, the practice can significantly reduce the amount of oil in water and minimize its adverse environmental effects. Bonheyo and his team wanted to develop an environmentally-friendly and inexpensive material that can float in rough or freezing waters and support in-situ burning. Not wanting to create more pollution if emergency responders can't retrieve the material from the water, Bonheyo's team considered other natural ingredients like rice hulls and silica. But the best candidate turned out to be a fine dust called wood flour; a woodworking by-product, wood flour is often used to make wood composites. To make the dust into a thirsty oil mop, the researchers chemically attach components of vegetable oil onto the material's surface. These attachments make the modified material oil-grabbing and water-shunning, producing a light, fluffy, bleached powder. The team is also experimenting with adding tiny, oil-eating microbes – fungi and bacteria – to the powder's surface, so that any left-behind material will naturally break down the remaining oil over time. Applying the modified sawdust is simple: sprinkle a thin layer over oil on the water's surface. The material immediately starts soaking up the oil, creating a concentrated and solid slick that stays afloat thanks to the material's natural buoyancy. This oil-soaked material can then either be burned or retrieved. The team is using PNNL's unique Arctic simulation lab in Sequim, Washington, to evaluate the material in icy waters. The facility is a customized shipping container that can be cooled to as low as 5°F, prompting researchers to don snowmobile suits and ski masks while they work. These low temperatures cause ice slush to form on the surface of water circulating inside a 290-gallon raceway pond within the container. Oil is spilled on this slushy surface, followed by a sprinkle of modified sawdust. 
Tests have shown the material's water-repellent nature prevents ice from forming on it, allowing it to soak up oil and remain at the surface. The researchers are also testing how well the material performs in controlled burns. They recently conducted initial burns at the US Coast Guard and Naval Research Laboratory's Joint Maritime Test Facility near Mobile, Alabama, and burn tests continue today at PNNL's Marine Science Laboratory. Early results indicate that a small amount of material allows the burning of both thin and thick layers of spilled oil. In the coming months, PNNL will further evaluate the modified sawdust, which will need additional testing and approval by multiple agencies before it can be used at actual oil spills. PNNL is developing the material for the US Department of the Interior's Bureau of Safety and Environmental Enforcement (BSEE). This is the lead federal agency charged with improving safety and ensuring environmental protection related to the offshore energy industry, primarily oil and natural gas on the US Outer Continental Shelf. This story is adapted from material from PNNL, with editorial changes made by Materials Today. The views expressed in this article do not necessarily represent those of Elsevier. Link to original source.


News Article | October 26, 2016
Site: www.eurekalert.org

For the first time scientists can see how the shells of tiny marine organisms grow atom-by-atom, a new study reports. The advance provides new insights into the mechanisms of biomineralization and will improve our understanding of environmental change in Earth's past. Led by researchers from the University of California, Davis and the University of Washington, with key support from the U.S. Department of Energy's Pacific Northwest National Laboratory, the team examined an organic-mineral interface where the first calcium carbonate crystals start to appear in the shells of foraminifera, a type of plankton. "We've gotten the first glimpse of the biological event horizon," said Howard Spero, a study co-author and UC Davis geochemistry professor. The findings were published Monday, Oct. 24, in the Proceedings of the National Academy of Sciences. The researchers zoomed into shells at the atomic level to better understand how growth processes may influence the levels of trace impurities in shells. The team looked at a key stage -- the interaction between the biological 'template' and the initiation of shell growth. The scientists produced an atom-scale map of the chemistry at this crucial interface in the foraminifera Orbulina universa. This is the first-ever measurement of the chemistry of a calcium carbonate biomineralization template, Spero said. Among the new findings are elevated levels of sodium and magnesium in the organic layer. This is surprising because the two elements are not considered important architects in building shells, said lead study author Oscar Branson, a former postdoctoral researcher at UC Davis who is now at the Australian National University in Canberra. Also, the greater concentrations of magnesium and sodium in the organic template may need to be considered when investigating past climate with foraminifera shells. Most of what we know about past climate (beyond ice core records) comes from chemical analyses of shells made by the tiny, one-celled creatures called foraminifera, or "forams." When forams die, their shells sink and are preserved in seafloor mud. The chemistry preserved in ancient shells chronicles climate change on Earth, an archive that stretches back nearly 200 million years. The calcium carbonate shells incorporate elements from seawater -- such as calcium, magnesium and sodium -- as the shells grow. The amount of trace impurities in a shell depends on both the surrounding environmental conditions and how the shells are made. For example, the more magnesium a shell has, the warmer the ocean was where that shell grew. "Finding out how much magnesium there is in a shell can allow us to find out the temperature of seawater going back up to 150 million years," Branson said. But magnesium levels also vary within a shell, because of nanometer-scale growth bands. Each band is one day's growth (similar to the seasonal variations in tree rings). Branson said considerable gaps persist in understanding what exactly causes the daily bands in the shells. "We know that shell formation processes are important for shell chemistry, but we don't know much about these processes or how they might have changed through time," he said. "This adds considerable uncertainty to climate reconstructions." The researchers used two cutting-edge techniques: Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS) and Laser-Assisted Atom Probe Tomography (APT). 
ToF-SIMS is a two-dimensional chemical mapping technique which shows the elemental composition of the surface of a polished sample. The technique was developed for the elemental analysis of complex polymer materials, and is just starting to be applied to natural samples like shells. APT is an atomic-scale three-dimensional mapping technique, developed for looking at internal structures in advanced alloys, silicon chips and superconductors. The APT imaging was performed at the Environmental Molecular Sciences Laboratory, a U.S. Department of Energy Office of Science User Facility at the Pacific Northwest National Laboratory. Other co-authors are Alex Gagnon and Elisa Bonnin at the University of Washington; Ann Russell and Jennifer Fehrenbacher at the University of California, Davis; Daniel Perea, Zihua Zhu and Maria Winters at the Pacific Northwest National Laboratory; and Bärbel Hönisch at Columbia University. The research was funded by the National Science Foundation and the Department of Energy.
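A note on the paleothermometry that motivates this work: the magnesium proxy rests on empirical exponential calibrations of the general form below, where the constants $a$ and $b$ are species-specific fit parameters; the notation here is illustrative and not taken from the study.

```latex
\mathrm{Mg/Ca} = b\,e^{aT}
\qquad\Longrightarrow\qquad
T = \frac{1}{a}\,\ln\!\left(\frac{\mathrm{Mg/Ca}}{b}\right)
```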


News Article | November 19, 2016
Site: www.sciencedaily.com

In November, the Paris Climate Agreement goes into effect to reduce global carbon emissions. To achieve the set targets, experts say capturing and storing carbon must be part of the solution. Several projects throughout the world are trying to make that happen. Now, a study on one of those endeavors, reported in the ACS journal Environmental Science & Technology Letters, has found that within two years, carbon dioxide (CO2) injected into basalt transformed into solid rock. Lab studies on basalt have shown that the rock, which formed from lava millions of years ago and is found throughout the world, can rapidly convert CO2 into stable carbonate minerals. This evidence suggests that if CO2 could be locked into this solid form, it would be stowed away for good, unable to escape into the atmosphere. But what happens in the lab doesn't always reflect what happens in the field. One field project in Iceland injected CO2 pre-dissolved in water into a basalt formation, where it was successfully stored. And starting in 2009, researchers with Pacific Northwest National Laboratory and the Montana-based Big Sky Carbon Sequestration Partnership undertook a pilot project in eastern Washington to inject 1,000 tons of pressurized liquid CO2 into a basalt formation. After drilling a well in the Columbia River Basalt formation and testing its properties, the team injected CO2 into it in 2013. Core samples were extracted from the well two years later, and Pete McGrail and colleagues confirmed that the CO2 had indeed converted into the carbonate mineral ankerite, as the lab experiments had predicted. And because basalts are widely found in North America and throughout the world, the researchers suggest that the formations could help permanently sequester carbon on a large scale. The authors acknowledge funding from the U.S. Department of Energy; the National Energy Technology Laboratory; the Big Sky Carbon Sequestration Partnership; Shell Exploration & Production Company; Portland General Electric; and Schlumberger Inc.
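The mineralization reported here follows the general carbonation pathway for basalt: CO2 dissolves in pore water as carbonic acid, which releases divalent cations from the rock that then precipitate as carbonates such as ankerite, Ca(Fe,Mg)(CO3)2. A schematic, unbalanced form of the chemistry is:

```latex
\mathrm{CO_2 + H_2O \;\rightarrow\; H_2CO_3 \;\rightarrow\; H^+ + HCO_3^-}
\qquad
\mathrm{(Ca,Mg,Fe)^{2+} + HCO_3^- \;\rightarrow\; (Ca,Mg,Fe)CO_3 + H^+}
```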


News Article | November 18, 2016
Site: www.eurekalert.org

In November, the Paris Climate Agreement goes into effect to reduce global carbon emissions. To achieve the set targets, experts say capturing and storing carbon must be part of the solution. Several projects throughout the world are trying to make that happen. Now, a study on one of those endeavors, reported in the ACS journal Environmental Science & Technology Letters, has found that within two years, carbon dioxide (CO2) injected into basalt transformed into solid rock. Lab studies on basalt have shown that the rock, which formed from lava millions of years ago and is found throughout the world, can rapidly convert CO2 into stable carbonate minerals. This evidence suggests that if CO2 could be locked into this solid form, it would be stowed away for good, unable to escape into the atmosphere. But what happens in the lab doesn't always reflect what happens in the field. One field project in Iceland injected CO2 pre-dissolved in water into a basalt formation, where it was successfully stored. And starting in 2009, researchers with Pacific Northwest National Laboratory and the Montana-based Big Sky Carbon Sequestration Partnership undertook a pilot project in eastern Washington to inject 1,000 tons of pressurized liquid CO2 into a basalt formation. After drilling a well in the Columbia River Basalt formation and testing its properties, the team injected CO2 into it in 2013. Core samples were extracted from the well two years later, and Pete McGrail and colleagues confirmed that the CO2 had indeed converted into the carbonate mineral ankerite, as the lab experiments had predicted. And because basalts are widely found in North America and throughout the world, the researchers suggest that the formations could help permanently sequester carbon on a large scale. The authors acknowledge funding from the U.S. Department of Energy; the National Energy Technology Laboratory; the Big Sky Carbon Sequestration Partnership; Shell Exploration & Production Company; Portland General Electric; and Schlumberger Inc. The paper's abstract will be available on Nov. 18 here: http://pubs. The American Chemical Society is a nonprofit organization chartered by the U.S. Congress. With nearly 157,000 members, ACS is the world's largest scientific society and a global leader in providing access to chemistry-related research through its multiple databases, peer-reviewed journals and scientific conferences. Its main offices are in Washington, D.C., and Columbus, Ohio. To automatically receive news releases from the American Chemical Society, contact newsroom@acs.org.


News Article | December 13, 2016
Site: www.eurekalert.org

TALLAHASSEE, Fla. -- Deep stores of carbon in northern peatlands may be safe from rising temperatures, according to a team of researchers from several U.S.-based institutions. And that is good news for now, the researchers said. Florida State University research scientist Rachel Wilson and University of Oregon graduate student Anya Hopple are the first authors on a new study published today in Nature Communications. The study details experiments suggesting that carbon stored in peat -- a highly organic material found in marsh or damp regions -- may not succumb to the Earth's warming as easily as scientists thought. That means if these northern peatlands -- found in the upper half of the northern hemisphere -- remain flooded, a substantial amount of carbon will not be released into the atmosphere. "We do see some breakdown of peat on the surface, but not below 2 feet deep, where the bulk of the carbon is stored," Wilson said. The study is part of a long-term look at how carbon stored in peat will respond to climate and environmental change. The team of researchers, led by Paul Hanson of the Oak Ridge National Laboratory, includes scientists from FSU, University of Oregon, Georgia Institute of Technology, the U.S. Department of Agriculture-Forest Service, Chapman University, Lawrence Livermore National Laboratory, Pacific Northwest National Laboratory and Oak Ridge National Laboratory. Researchers ran four different temperature simulations -- increasing the temperature of the peat by 2.25 degrees Celsius, 4.5 degrees Celsius, 6.75 degrees Celsius and 9 degrees Celsius -- to see how it would respond to increased heat. They found that the surface peat did emit more methane gas when warmed, but the deep peat did not break down and did not start emitting additional methane or carbon dioxide. "If the release of greenhouse gases is not enhanced by temperature of the deep peat, that's great news because that means that if all other things remain as they are, the deep peat carbon remains in the soil," said Joel Kostka, professor of microbiology at Georgia Institute of Technology. The Earth's soils contain 1,550 billion tons of organic carbon, and 500 billion tons of this carbon is stored in northern peatlands around the world. This quantity is roughly the same amount as carbon in the atmosphere. Scientists have been anxious to learn how these northern peatlands will respond to warming because a tremendous amount of carbon could be released into the atmosphere. Researchers worked at the Oak Ridge National Laboratory's experimental site known as SPRUCE in northern Minnesota to examine both surface peat and peat up to 6 feet deep. The majority of the carbon is stored deeper in the ground. Large environmental chambers were constructed by the Oak Ridge team to enclose portions of the peatlands. Within these chambers, scientists simulated climate change effects such as higher temperatures and elevated carbon dioxide levels. They also took some of the deep peat back to their labs to heat in additional studies. While scientists said they were surprised by the results, they also cautioned that this came only after one year of warming. "There are the necessary caveats that this was only for one year, and the experiment is planned to run for a decade, and other ecosystem feedbacks may become important in greenhouse gas emissions," said Scott Bridgham, director of the Institute of Ecology and Evolution at University of Oregon and Hopple's adviser. 
In the future, scientists also plan to look at how these peatlands respond to heightened carbon dioxide levels combined with the temperature increases. "In the future, we'll be warmer, but we'll also have more carbon dioxide in the atmosphere, so we need to understand how these deep stores of peat, which have all this carbon, respond to these conditions," said Jeff Chanton, professor of oceanography at Florida State University. This work was funded by the U.S. Department of Energy.


Dang L.X.,Pacific Northwest National Laboratory | Wick C.D.,Louisiana Tech University
Journal of Physical Chemistry B | Year: 2011

Molecular dynamics simulations with many-body interactions were carried out to systematically study the effect of anion type, tetrafluoroborate [BF4] or hexafluorophosphate [PF6], paired with the cation 1-butyl-3-methylimidazolium [bmim], on the interfacial absorption of gases in room temperature ionic liquids (RTILs). The potentials of mean force (PMF) of CO2 and H2O at 350 K were calculated across the air-liquid interfaces of [bmim][BF4] and [bmim][PF6]. We found that the PMFs for H2O exhibited no interfacial minima at either interface, while the corresponding PMFs for CO2 had significant free energy minima there. However, the PMFs for H2O showed a much higher interfacial free energy than in the bulk for [bmim][BF4], but only a slightly higher interfacial free energy for [bmim][PF6] than in bulk. This was due to the more hydrophilic nature of the [BF4] anion, and the fact that [BF4] was found to have little propensity for the interface. Our results show that H2O is much more likely to be found at the air-[bmim][PF6] interface than at the air-[bmim][BF4] interface. The free energies of solvation were found to be more negative for [bmim][BF4] than [bmim][PF6] for water and similar for CO2. This observation is consistent with experimental Henry's law coefficients. Our results show that anion type, in addition to affecting the free energy of solvation into RTILs, should also significantly influence the uptake mechanism. © 2011 American Chemical Society.
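The consistency with Henry's law coefficients noted at the end follows from standard solution thermodynamics: a more negative solvation free energy implies a smaller Henry's constant and hence higher gas solubility. Schematically (notation mine, not the paper's):

```latex
k_H \;\propto\; \exp\!\left(\frac{\Delta G_{\mathrm{solv}}}{RT}\right)
```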


Neale R.B.,U.S. National Center for Atmospheric Research | Richter J.,U.S. National Center for Atmospheric Research | Park S.,U.S. National Center for Atmospheric Research | Lauritzen P.H.,U.S. National Center for Atmospheric Research | And 3 more authors.
Journal of Climate | Year: 2013

The Community Atmosphere Model, version 4 (CAM4), was released as part of the Community Climate System Model, version 4 (CCSM4). The finite volume (FV) dynamical core is now the default because of its superior transport and conservation properties. Deep convection parameterization changes include a dilute plume calculation of convective available potential energy (CAPE) and the introduction of convective momentum transport (CMT). An additional cloud fraction calculation is now performed following macrophysical state updates to provide improved thermodynamic consistency. A freeze-drying modification is further made to the cloud fraction calculation in very dry environments (e.g., the Arctic), where cloud fraction and cloud water values were often inconsistent in CAM3. In CAM4 the FV dynamical core further degrades the excessive trade-wind simulation, but reduces zonal stress errors at higher latitudes. Plume dilution alleviates much of the midtropospheric tropical dry biases and reduces the persistent monsoon precipitation biases over the Arabian Peninsula and the southern Indian Ocean. CMT reduces much of the excessive trade-wind biases in eastern ocean basins. CAM4 shows a global reduction in cloud fraction compared to CAM3, primarily as a result of the freeze-drying and improved cloud fraction equilibrium modifications. Regional climate feature improvements include the propagation of stationary waves from the Pacific into midlatitudes and the seasonal frequency of Northern Hemisphere blocking events. A 1° versus 2° horizontal resolution of the FV dynamical core exhibits superior improvements in regional climate features of precipitation and surface stress. Improvements in the fully coupled mean climate between CAM3 and CAM4 are also more substantial than in forced sea surface temperature (SST) simulations. © 2013 American Meteorological Society.


Yoneyama K.,Japan Agency for Marine - Earth Science and Technology | Zhang C.,University of Miami | Long C.N.,Pacific Northwest National Laboratory
Bulletin of the American Meteorological Society | Year: 2013

A field campaign in the Indian Ocean region collected unprecedented observations during October 2011–March 2012 to help advance knowledge of physical processes of the Madden-Julian oscillation (MJO). Studies on the MJO mean seasonal cycle indicated that main MJO initiation activity takes place in the central Indian Ocean from October to March, with the highest occurrence probability near the equator in October-January. The special observing period (SOP) was designed to obtain high-resolution data to capture the diurnal cycle of convective activity with the maximum observing capacity. All other instruments continued to operate after the SOP until the end of the intensive observing period (IOP). Accompanying the sounding arrays and equally essential to the field campaign was a radar network. The 2011-12 MJO field campaign provided observations that are unique in several aspects in comparison to previous tropical field campaigns that aimed at interactions between atmospheric convection and its large-scale environment and between the atmosphere and ocean.


Mather J.H.,Pacific Northwest National Laboratory | Voyles J.W.,Pacific Northwest National Laboratory
Bulletin of the American Meteorological Society | Year: 2013

The Department of Energy (DOE) created the Atmospheric Radiation Measurement (ARM) Facility, which has added instruments to expand its capabilities for the study of clouds, aerosol, and precipitation. The ARM sites provide a broad array of measurements covering cloud properties, aerosol properties, surface properties, atmospheric thermodynamic properties, and radiative fluxes. These measurements are provided continuously with high temporal resolution, although the resolution of some of the remote sensing instruments limits their ability to capture variability in cloud properties. The ARM Facility is managed and operated by nine DOE National Laboratories, located at Argonne, Berkeley, Brookhaven, Livermore, Los Alamos, Oak Ridge, National Renewable Energy, Pacific Northwest, and Sandia. It has developed and deployed its first ARM Mobile Facility (AMF) to extend these measurement capabilities to locations beyond the fixed sites.


Murray C.J.,Pacific Northwest National Laboratory
Journal of contaminant hydrology | Year: 2013

A large set of sediment samples from a 1600 m2 experimental plot within a 2.2 km2 vadose zone and groundwater uranium (VI) plume was subject to physical, chemical, and mineralogic characterization. The plot is being used for field experimentation on U(VI) recharge and transport processes within a persistent groundwater plume that exists in the groundwater-river interaction zone of the Columbia River at the U.S. DOE Hanford site. The samples were obtained during the installation of 35 tightly spaced (10 m separation) groundwater monitoring wells. The characterization measurements for each sample included total contaminant concentrations (U and Cu primarily), bicarbonate extractable U(VI), sequential 238U(VI) contaminant desorption Kd, 233U(VI) adsorption Kd, grain size distribution, surface area, extractable poorly crystalline Fe(III) oxides, and mineralogy. The characterization objective was to inform a conceptual model of coupled processes controlling the anomalous longevity of the plume, and to quantify the spatial heterogeneity of the contaminant inventory and the primary properties affecting reactive transport. Correlations were drawn between chemical, physical, and reaction properties, and Gaussian simulation was used to compute multiple 3-D realizations of extractable U(VI), the 233U(VI) adsorption Kd, and the distribution of the reactive <2 mm fraction. Adsorbed contaminant U(VI) was highest in the vadose zone and the zone of seasonal water table fluctuation lying at its base. Adsorbed U(VI) was measurable, but low, in the groundwater plume region where very high hydraulic conductivities existed. The distribution of adsorbed U(VI) displayed no apparent correlation with sediment physical or chemical properties. Desorption [238U(VI)] and adsorption [233U(VI)] Kd values showed appreciable differences due to mass transfer controlled surface complexation and the effects of long subsurface residence times. The 233U(VI) adsorption Kd, a combined measure of surface complexation strength and site concentration, was relatively uniform throughout the domain, displaying correlation with fines distribution and surface area. The characterization results revealed that U(VI) is supplied to the groundwater plume through spatially heterogeneous recharge from residual contamination in the zone of seasonal water table fluctuation, and that transport of U(VI) is controlled by weak, kinetically-controlled surface complexation in the coarse-textured saturated zone. Geostatistical relationships for the adsorbed contaminant U distribution in the characterization domain allow an extrapolation to inventory at the plume scale, a critical unknown for remedial action. Copyright © 2012 Elsevier B.V. All rights reserved.
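For readers outside contaminant hydrology, the distribution coefficient used throughout is the standard equilibrium ratio of sorbed to aqueous concentration; the units shown are the conventional ones, not specific to this study.

```latex
K_d \;=\; \frac{C_{\mathrm{sorbed}}}{C_{\mathrm{aqueous}}}
\qquad \left[\mathrm{mL\ g^{-1}}\right]
```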


Kwak J.H.,Pacific Northwest National Laboratory | Kovarik L.,Pacific Northwest National Laboratory | Szanyi J.,Pacific Northwest National Laboratory
ACS Catalysis | Year: 2013

The catalytic performance of a series of Ru/Al2O3 catalysts with Ru content in the 0.1-5% range was examined in the reduction of CO2 with H2. At low Ru loadings (≤0.5%) where the active metal phase is highly dispersed (mostly atomically) on the alumina support, CO is formed with high selectivity. With increasing metal loading, the selectivity toward CH4 formation increases, while that for CO production decreases. In the 0.1% Ru/Al2O3 catalyst, Ru is mostly present in atomic dispersion, as scanning transmission electron microscopy (STEM) images obtained from the fresh sample prior to catalytic testing reveal. STEM images recorded from this same sample, following the temperature programmed reaction test, clearly show the agglomeration of small metal particles (and atoms) into 3D clusters. The clustering of the highly dispersed metal phase is responsible for the observed dramatic selectivity change during elevated temperature tests: dramatic decrease in CO and large increase in CH4 selectivity. Apparent activation energies, estimated from the slopes of Arrhenius plots, of 82 and 62 kJ/mol for CO and CH4 formation were determined, respectively, regardless of Ru loading. These results suggest that the formation of CO and CH4 follow different reaction pathways or proceed on active centers of a different nature. Reactions with CO2/H2 and CO/H2 mixtures (under otherwise identical reaction conditions) reveal that the onset temperature of CO2 reduction is about 150 °C lower than that of CO reduction. © 2013 American Chemical Society.
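The apparent activation energies quoted above come from the usual Arrhenius analysis: plotting ln k against 1/T gives a line whose slope is proportional to the activation energy.

```latex
k = A\,e^{-E_a/RT}
\qquad\Longrightarrow\qquad
\ln k = \ln A \;-\; \frac{E_a}{R}\cdot\frac{1}{T}
```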


Isaac G.,Pacific Northwest National Laboratory
Methods in molecular biology (Clifton, N.J.) | Year: 2011

In the past decade, many new strategies for mass spectrometry (MS)-based analyses of lipids have been developed. Lipidomics is one of the most promising research fields to emerge as a result of these advances in MS. Currently, mass spectrometric analysis of lipids involves two complementary approaches: direct infusion (shotgun lipidomics) and liquid chromatography coupled to MS. In this chapter, I will demonstrate the approach of shotgun lipidomics using electrospray ionization tandem MS for the analysis of lipid molecular species directly from crude biological extracts of tissue or fluids.


Zhang X.,Pacific Northwest National Laboratory | Srinivasan R.,Texas A&M University
Environmental Modelling and Software | Year: 2010

Precipitation is one important input variable for land surface hydrologic and ecological models. Next Generation Radar (NEXRAD) can provide precipitation products that cover most of the conterminous United States at high resolution (approximately 4 km × 4 km). There are two major issues concerning the application of NEXRAD data: 1) the lack of a NEXRAD geo-processing and geo-referencing program and 2) bias correction of NEXRAD estimates. However, in the public domain, there is no Geographic Information System (GIS) software that can use geostatistical approaches to calibrate NEXRAD data using raingauge data, and automatically process NEXRAD data for hydrologic and ecological models. In this study, we developed new GIS software for NEXRAD validation and calibration (NEXRAD-VC) using raingauge data. NEXRAD-VC can automatically read in NEXRAD data in NetCDF or XMRG format, transform the projection of NEXRAD data to match with raingauge data, apply different geostatistical approaches to calibrate NEXRAD data using raingauge data, evaluate the performance of different calibration methods using a leave-one-out cross-validation scheme, output spatial precipitation maps in ArcGIS grid format, and calculate spatial average precipitation for each spatial modeling unit used by hydrologic and ecological models. The major functions of NEXRAD-VC are illustrated in the Little River Experimental Watershed (LREW) in Georgia using daily precipitation records of fifteen raingauges and NEXRAD products of five years. The validation results show that NEXRAD has a high success rate for detecting rain and no-rain events: 82.8% and 95.6%, respectively. NEXRAD estimates have high correlation with raingauge observations (correlation coefficient of 0.91), but a relatively large relative mean absolute error of 36%. It is also worth noting that the performance of NEXRAD improves as rainfall variability decreases. Three methods (Bias Adjustment method (BA), Regression Kriging (RK), and Simple Kriging with varying local means (SKlm)) were employed to calibrate NEXRAD using raingauge data. Overall, SKlm performed the best among these methods. Compared with NEXRAD, SKlm improved the correlation coefficient to 0.96 and the relative mean absolute error to 22.8%, respectively. SKlm also increased the success rate of detection of rain and no-rain events to 87.47% and 96.05%, respectively. Further analysis of the performance of the three calibration methods and NEXRAD for daily spatial precipitation estimation shows that no one method can consistently provide better results than the others for each evaluation coefficient on each day. It is suggested that multiple methods be implemented to predict spatial precipitation. The NEXRAD-VC developed in this study can serve as an effective and efficient tool to batch process large amounts of NEXRAD data for hydrologic and ecological modeling. © 2010 Elsevier Ltd.
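Of the three calibration methods compared, the bias-adjustment idea is the simplest to illustrate: scale the radar field by the ratio of mean gauge to mean radar precipitation at collocated points. The sketch below is a minimal, hypothetical version of that idea and is not the NEXRAD-VC implementation.

```python
import numpy as np

def mean_field_bias(gauge: np.ndarray, radar_at_gauges: np.ndarray) -> float:
    """Multiplicative bias factor from collocated gauge/radar pairs."""
    return gauge.sum() / radar_at_gauges.sum()

# Hypothetical daily totals (mm) at five collocated raingauge sites.
gauge = np.array([12.0, 8.5, 20.1, 0.0, 5.3])
radar = np.array([9.8, 7.0, 16.5, 0.2, 4.1])

factor = mean_field_bias(gauge, radar)   # ~1.22 for these toy values
nexrad_grid = np.full((4, 4), 10.0)      # toy stand-in for a NEXRAD field
calibrated_grid = factor * nexrad_grid   # bias-adjusted precipitation field
print(f"bias factor: {factor:.2f}")
```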


Xie S.,Lawrence Livermore National Laboratory | Liu X.,Pacific Northwest National Laboratory | Zhao C.,Lawrence Livermore National Laboratory | Zhang Y.,Lawrence Livermore National Laboratory
Journal of Climate | Year: 2013

Sensitivity of Arctic clouds and radiation in the Community Atmospheric Model, version 5, to the ice nucleation process is examined by testing a new physically based ice nucleation scheme that links the variation of ice nuclei (IN) number concentration to aerosol properties. The default scheme parameterizes the IN concentration simply as a function of ice supersaturation. The new scheme leads to a significant reduction in simulated IN concentration at all latitudes while changes in cloud amounts and properties are mainly seen at high- and midlatitude storm tracks. In the Arctic, there is a considerable increase in midlevel clouds and a decrease in low-level clouds, which result from the complex interaction among the cloud macrophysics, microphysics, and large-scale environment. The smaller IN concentrations result in an increase in liquid water path and a decrease in ice water path caused by the slowdown of the Bergeron-Findeisen process in mixed-phase clouds. Overall, there is an increase in the optical depth of Arctic clouds, which leads to a stronger cloud radiative forcing (net cooling) at the top of the atmosphere. The comparison with satellite data shows that the new scheme slightly improves low-level cloud simulations over most of the Arctic but produces too many midlevel clouds. Considerable improvements are seen in the simulated low-level clouds and their properties when compared with Arctic ground-based measurements. Issues with the observations and the model-observation comparison in the Arctic region are discussed. © 2013 American Meteorological Society.


Wu S.,Pacific Northwest National Laboratory
Methods in molecular biology (Clifton, N.J.) | Year: 2011

Bottom-up and top-down strategies are two commonly used methods for mass spectrometry (MS) based protein identification; each method has its own advantages and disadvantages. In this chapter, we describe an integrated top-down and bottom-up approach facilitated by concurrent liquid chromatography-mass spectrometry (LC-MS) analysis and fraction collection for comprehensive high-throughput intact protein profiling. The approach employs a high resolution reversed phase (RP) LC separation coupled with LC eluent fraction collection and concurrent on-line MS with a high field (12 T) Fourier-transform ion cyclotron resonance (FTICR) mass spectrometer. Protein elution profiles and tentative identifications of modified proteins are made using detected intact protein masses in conjunction with bottom-up protein identifications from the enzymatic digestion and analysis of corresponding LC fractions. Specific proteins of biological interest are incorporated into a target ion list for subsequent off-line gas-phase fragmentation using an aliquot of the original collected LC fraction; another aliquot of the same fraction is used for bottom-up analysis.


Luedtke A.T.,Pacific Northwest National Laboratory | Autrey T.,Pacific Northwest National Laboratory
Inorganic Chemistry | Year: 2010

A series of metal amidoboranes LiNH2BH3 (LAB), NaNH2BH3 (SAB), LiNH(Me)BH3 (LMAB), NaNH(Me)BH3 (SMAB), KNH(Me)BH3 (PMAB), and KNH(tBu)BH3 (PBAB) were synthesized by solution phase methods, and the thermal release of H2 in the solid state was studied. The mechanism of hydrogen release from metal amidoboranes, a metal ion assisted hydride transfer, is very different from the mechanism of hydrogen release from the parent compound ammonia borane (AB). On the basis of the observed trends in reaction rates of H > Me > tBu, K > Na > Li, and the kinetic isotope effect, the mechanism of hydrogen release from MAB compounds was found to proceed through a bimolecular mechanism involving the intermediacy of an MH (M = Li, Na, or K). The non-volatile products formed from MABs are significantly different from the products formed after hydrogen release from AB. The boron containing products resulting from the release of 1 equiv of hydrogen from the metal amidoboranes were characterized by MAS 11B NMR spectroscopy and found to contain both BH3 and sp2 hybridized BH groups, consistent with the general structure MN(R)-BHN(R)MBH3. © 2010 American Chemical Society.


Li D.,Pacific Northwest National Laboratory | Li D.,Aerojet Rocketdyne
JOM | Year: 2014

Structure representation and reconstruction at both the mesoscale and microscale are critical in materials design, advanced manufacturing, and multiscale modeling. Structure reconstruction has been applied in different areas of materials science and technology, including structural materials, energy materials, geology, and hydrology. This review summarizes the descriptors and formulations used to represent structures at the microscale and mesoscale, as well as reconstruction algorithms. In stochastic methods using correlation functions, different optimization approaches have been adapted for objective function minimization. A variety of reconstruction approaches is compared for efficiency and accuracy. © 2013 The Minerals, Metals & Materials Society (Outside the U.S.).
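The correlation-function machinery referenced above is usually built on the two-point correlation of a phase indicator function; stochastic reconstruction then minimizes the mismatch between the target and trial statistics. In one common convention (symbols mine, not the review's):

```latex
I(\mathbf{x}) =
\begin{cases}
1, & \mathbf{x}\ \text{in the phase of interest}\\
0, & \text{otherwise}
\end{cases}
\qquad
S_2(\mathbf{r}) = \big\langle I(\mathbf{x})\,I(\mathbf{x}+\mathbf{r}) \big\rangle
\qquad
E = \sum_{\mathbf{r}} \left[ S_2(\mathbf{r}) - \hat{S}_2(\mathbf{r}) \right]^{2}
```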


Malone E.L.,Pacific Northwest National Laboratory
Weather, Climate, and Society | Year: 2013

Climate change is increasingly recognized as having national security implications, which has prompted dialogue between the climate change and national security communities-with resultant advantages and differences. Climate change research has proven useful to the national security community sponsors in several ways. It has opened security discussions to consider climate as well as political factors in studies of the future. It has encouraged factoring in the stresses placed on societies by climate changes (of any kind) to help assess the potential for state stability. And it has shown that changes such as increased heat, more intense storms, longer periods without rain, and earlier spring onset call for building climate resilience as part of building stability. For the climate change research community, studies from a national security point of view have revealed research lacunae, such as the lack of usable migration studies. This has also pushed the research community to consider second- and third-order impacts of climate change, such as migration and state stability, which broadens discussion of future impacts beyond temperature increases, severe storms, and sea level rise and affirms the importance of governance in responding to these changes. The increasing emphasis in climate change science toward research in vulnerability, resilience, and adaptation also frames what the intelligence and defense communities need to know, including where there are dependencies and weaknesses that may allow climate change impacts to result in security threats and where social and economic interventions can prevent climate change impacts and other stressors from resulting in social and political instability or collapse. © 2013 American Meteorological Society.


Morgan W.F.,Pacific Northwest National Laboratory | Bair W.J.,Pacific Northwest National Laboratory
Radiation Research | Year: 2013

Both natural and man-made sources of ionizing radiation contribute to human exposure and consequently pose a possible risk to human health. Much of this is unavoidable, e.g., natural background radiation, but as the use of radiation increases, so does the potential health risk and the public's concerns. This perspective reflects the authors' view of current issues in low dose radiation biology research, highlights some of the controversies therein, and suggests areas of future research to address both issues in low dose radiation research and the controversies. This is a critical time for the radiation sciences and the implications of future research will have a significant impact on radiation protection, medicine, national security, research and industry. The views expressed here are the authors' own and do not represent any institution, organization or funding body. © 2013 by Radiation Research Society. All rights of reproduction in any form reserved.


Du P.,Pacific Northwest National Laboratory | Lu N.,Pacific Northwest National Laboratory
IEEE Transactions on Smart Grid | Year: 2011

This paper presents a novel appliance commitment algorithm that schedules thermostatically controlled household loads based on price and consumption forecasts considering users' comfort settings to meet an optimization objective such as minimum payment or maximum comfort. The formulation of an appliance commitment problem is described using an electrical water heater load as an example. The thermal dynamics of heating and coasting of the water heater load is modeled by physical models; random hot water consumption is modeled with statistical methods. The models are used to predict the appliance operation over the scheduling time horizon. User comfort is transformed to a set of linear constraints. Then, a novel linear-sequential-optimization-enhanced, multiloop algorithm is used to solve the appliance commitment problem. The simulation results demonstrate that the algorithm is fast, robust, and flexible. The algorithm can be used in home/building energy-management systems to help household owners or building managers to automatically create optimal load operation schedules based on different cost and comfort settings and compare cost/benefits among schedules. © 2011 IEEE.
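The heating-and-coasting dynamics described above are commonly captured with a first-order equivalent-thermal-parameter model; the sketch below shows that generic form with illustrative parameter values, not the paper's calibrated water heater model.

```python
import math

def next_temp(T, ambient, heat_on, dt_h, R=600.0, C=0.17, P=4.5):
    """Advance a first-order water-heater thermal model by one step.

    T        current tank temperature (deg C)
    ambient  surrounding air temperature (deg C)
    heat_on  True if the heating element is energized
    dt_h     time step (hours)
    R, C, P  thermal resistance (deg C/kW), thermal mass (kWh/deg C),
             element power (kW) -- illustrative values, not calibrated
    """
    target = ambient + (R * P if heat_on else 0.0)  # steady-state temperature
    return target + (T - target) * math.exp(-dt_h / (R * C))

# Coast from 55 C in a 20 C room for an hour, then heat for an hour.
# A real controller would switch the element off at the thermostat setpoint.
T = next_temp(55.0, 20.0, heat_on=False, dt_h=1.0)
T = next_temp(T, 20.0, heat_on=True, dt_h=1.0)
print(f"tank temperature after two hours: {T:.1f} C")
```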


Rousseau R.,Pacific Northwest National Laboratory | Dixon D.A.,University of Alabama | Kay B.D.,Pacific Northwest National Laboratory | Dohnalek Z.,Pacific Northwest National Laboratory
Chemical Society Reviews | Year: 2014

Supported early transition metal oxides have important applications in numerous catalytic reactions. In this article, we review the synthesis and activity of well-defined model WO3 and MoO3 catalysts that are prepared via deposition of cyclic gas-phase (WO3)3 and (MoO3)3 clusters generated by sublimation of WO3 and MoO3 powders. Conversion of small aliphatic alcohols to alkenes, aldehydes/ketones, and ethers is employed to probe the structure-activity relationships on model catalysts ranging from unsupported (WO3)3 and (MoO3)3 clusters embedded in alcohol matrices, to (WO3)3 clusters supported on surfaces of other oxides, and epitaxial and nanoporous WO3 films. Detailed theoretical calculations reveal the underlying reaction mechanisms and provide insight into the origin of the differences in the WO3 and MoO3 reactivity. The catalytic activity for a range of interrogated (WO3)3 motifs (from unsupported clusters to nanoporous films) further sheds light onto the role structure and binding of (WO3)3 clusters with the support play in determining their catalytic activity. © the Partner Organisations 2014.


Zacher A.H.,Pacific Northwest National Laboratory | Olarte M.V.,Pacific Northwest National Laboratory | Santosa D.M.,Pacific Northwest National Laboratory | Elliott D.C.,Pacific Northwest National Laboratory | Jones S.B.,Pacific Northwest National Laboratory
Green Chemistry | Year: 2014

The pathway for catalytic hydrodeoxygenation (HDO) of biomass-derived fast pyrolysis oil represents a compelling route for production of liquid transportation fuels. This is a review of the published research and patent literature from the last 6 years on bio-oil HDO performed with actual bio-oils and with an identifiable strategy for producing an infrastructure-compatible liquid transportation fuel. Research is moving towards continuous, industrially relevant processes generating data to inform techno-economic analysis (TEA) and to clarify the nature of the fuels produced. Research gaps identified include: (1) focus on process integration; (2) developing appropriate quality metrics for intermediates; (3) evaluating research by TEA; and (4) meeting fuel functional requirements and comparison to ASTM standards for existing fuels. © 2014 The Royal Society of Chemistry.


Sun J.,Washington State University | Wang Y.,Washington State University | Wang Y.,Pacific Northwest National Laboratory
ACS Catalysis | Year: 2014

With increased availability and decreased cost, ethanol is potentially a promising platform molecule for the production of a variety of value-added chemicals. In this review, we provide a detailed summary of recent advances in catalytic conversion of ethanol to a wide range of chemicals and fuels. We particularly focus on catalyst advances and fundamental understanding of reaction mechanisms involved in ethanol steam reforming (ESR) to produce hydrogen, ethanol conversion to hydrocarbons ranging from light olefins to longer chain alkenes/alkanes and aromatics, and ethanol conversion to other oxygenates including 1-butanol, acetaldehyde, acetone, diethyl ether, and ethyl acetate. © 2014 American Chemical Society.


Wang P.,Pacific Northwest National Laboratory | Tartakovsky A.M.,Pacific Northwest National Laboratory | Tartakovsky D.M.,University of California at San Diego
Physical Review Letters | Year: 2013

Understanding the mesoscopic behavior of dynamical systems described by Langevin equations with colored noise is a fundamental challenge in a variety of fields. We propose a new approach to derive closed-form equations for joint and marginal probability density functions of state variables. This approach is based on a so-called large-eddy-diffusivity closure and can be used to model a wide class of non-Markovian processes described by the noise with an arbitrary correlation function. We demonstrate the accuracy of the proposed probability density function method for several linear and nonlinear Langevin equations. © 2013 American Physical Society.
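
For concreteness, the class of models treated here can be written in a generic form (a standard colored-noise Langevin equation; the exponential correlation shown is just one common choice):

\frac{dx}{dt} = f(x) + g(x)\,\xi(t), \qquad \langle \xi(t) \rangle = 0, \qquad \langle \xi(t)\,\xi(t') \rangle = \sigma^{2} \exp\!\left(-\frac{|t - t'|}{\tau}\right)

In the white-noise limit \tau \to 0 the process becomes Markovian and standard Fokker-Planck methods apply; the closure discussed above is needed precisely because a finite correlation time \tau makes the process non-Markovian.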


Bandara U.C.,Pacific Northwest National Laboratory | Tartakovsky A.M.,Pacific Northwest National Laboratory | Palmer B.J.,Pacific Northwest National Laboratory
International Journal of Greenhouse Gas Control | Year: 2011

Geological sequestration of CO2 gas has emerged as a promising solution for reducing the amount of greenhouse gases in the atmosphere. A number of continuum scale models are available to describe the transport phenomena of CO2 sequestration. These models rely heavily on a phenomenological description of subsurface transport phenomena and the predictions can be highly uncertain. "First-principles" pore-scale models provide a better understanding of fluid displacement processes. In this work we use a Smoothed Particle Hydrodynamics (SPH) model to study pore-scale displacement and capillary trapping mechanisms of super-critical CO2 in the subsurface. Simulations are carried out to investigate the effects of gravitational, viscous, and capillary forces on the amount of trapped CO2 in terms of non-dimensional numbers. We found that the displacement patterns and the amount of trapped CO2 depend mainly on the Capillary and Gravity numbers. For large Gravity numbers, most of the injected CO2 reaches the cap-rock due to gravity separation. A significant portion of CO2 gets trapped by capillary forces when the Gravity number is small. When the Gravity number is moderately high, trapping patterns are heavily dependent on the Capillary number. If the Capillary number is very small, then capillary forces dominate the buoyancy forces and a significant fraction of injected CO2 is trapped by the capillary forces. Conversely, if the Capillary number is high, trapping is relatively small since buoyancy dominates the capillary forces. © 2011 Elsevier Ltd.
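
Definitions of these dimensionless groups vary in the literature, and the authors' exact conventions may differ; one common choice takes the Capillary number as the ratio of viscous to capillary forces and the Bond (gravity) number as the ratio of gravitational to capillary forces:

\mathrm{Ca} = \frac{\mu\, u}{\sigma}, \qquad \mathrm{Bo} = \frac{\Delta\rho\, g\, r^{2}}{\sigma}

with \mu the viscosity of the invading fluid, u the injection velocity, \sigma the interfacial tension, \Delta\rho the density difference between the fluids, g the gravitational acceleration, and r a characteristic pore radius.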


Lin S.S.-Y.,Washington State University | Kim D.H.,Pacific Northwest National Laboratory | Engelhard M.H.,Pacific Northwest National Laboratory | Ha S.Y.,Washington State University
Journal of Catalysis | Year: 2010

The water-induced formation of cobalt oxides by the re-oxidation of metallic cobalt in pre-reduced 10% Co/CeO2-ZrO2 catalyst was verified by in situ temperature-programmed reduction (TPR) and in situ X-ray Photoelectron Spectroscopy (XPS) studies under various ethanol-steam conditions at 450 °C. The formation and transformation of water-induced cobalt oxide species during the reaction were influenced by the pre-reduction conditions as well as the feed stream composition. Our results suggest that the surface composition of the cobalt species (e.g. Co, CoO, and CoOx) in the 10% Co/CeO2-ZrO2 catalyst, initially determined by the catalyst pre-treatment conditions, changed toward an equilibrium state governed by the feed stream composition as the reaction proceeded. In addition, the reducibility of the ceria sites may play a significant role in the formation of such water-induced cobalt oxide species, as the redox process involves both cobalt and ceria sites in the ethanol-steam environment. Finally, the effect of the water-induced cobalt oxides in the 10% Co/CeO2-ZrO2 catalyst was investigated under ethanol and ethanol-steam conditions, particularly for the carbon-carbon (C-C) bond cleavage of ethanol. The water-induced formation of cobalt oxides in the catalyst was not observed to affect the cleavage, in either ethanol-rich (e.g. ethanol decomposition) or steam-rich (e.g. ethanol steam reforming) conditions. This result could be explained by our in situ studies, which revealed the redox exchange between reduced cobalt and oxidized cobalt in a feed stream with a changing water-to-ethanol ratio. © 2010 Elsevier Inc. All rights reserved.


He J.,TU Munich | Zhao C.,TU Munich | Mei D.,Pacific Northwest National Laboratory | Lercher J.A.,TU Munich | Lercher J.A.,Pacific Northwest National Laboratory
Journal of Catalysis | Year: 2014

A route for cleaving the C-O aryl ether bonds of p-substituted H-, CH3-, and OH- diphenyl ethers has been explored over a Ni/SiO2 catalyst under very mild conditions (393 K, 0.6 MPa). The C-O bond of diphenyl ether is cleaved by parallel hydrogenolysis and hydrolysis (hydrogenolysis combined with HO* addition) on Ni. The rates as a function of H2 pressure from 0 to 10 MPa indicate that the rate-determining step is the C-O bond cleavage on the Ni surface. H* atoms compete with the organic reactant for adsorption, leading to a maximum in the rate with increasing H2 pressure. In contrast to diphenyl ether, hydrogenolysis is the exclusive route for cleaving a C-O bond of di-p-tolyl ether to form p-cresol and toluene. 4,4′-Dihydroxydiphenyl ether undergoes sequential surface hydrogenolysis, first to phenol and OC6H4OH* (adsorbed), which is then cleaved to phenol (C6H4OH* with added H*) and H2O (O* with two added H*) in a second step. Density functional theory supports the operation of this pathway. Notably, addition of H* to OC6H4OH* is less favorable than a further hydrogenolytic C-O bond cleavage. The TOFs of the three diaryl ethers with Ni/SiO2 in water follow the order 4,4′-dihydroxydiphenyl ether (69 mol (mol surface Ni)^-1 h^-1) > diphenyl ether (26 mol (mol surface Ni)^-1 h^-1) > di-p-tolyl ether (1.3 mol (mol surface Ni)^-1 h^-1), in line with the increasing apparent activation energies, ranging from 4,4′-dihydroxydiphenyl ether (93 kJ mol-1) < diphenyl ether (98 kJ mol-1) < di-p-tolyl ether (105 kJ mol-1). © 2013 Elsevier Inc. All rights reserved.


Wang X.,Oak Ridge National Laboratory | Lee J.S.,Oak Ridge National Laboratory | Zhu Q.,Oak Ridge National Laboratory | Liu J.,Pacific Northwest National Laboratory | And 2 more authors.
Chemistry of Materials | Year: 2010

Ammonia treatment of ordered mesoporous carbons at high temperature results in the formation of nitrogen-doped carbon materials that exhibit significant activity for the oxygen reduction reaction in acidic media. The new material shows an onset potential only 80 mV below that of a commercial Pt-based catalyst, but better stability. © 2010 American Chemical Society.


Kwak J.H.,Pacific Northwest National Laboratory | Tonkyn R.G.,Pacific Northwest National Laboratory | Kim D.H.,Pacific Northwest National Laboratory | Szanyi J.,Pacific Northwest National Laboratory | Peden C.H.F.,Pacific Northwest National Laboratory
Journal of Catalysis | Year: 2010

Superior activity and selectivity of a Cu ion-exchanged SSZ-13 zeolite in the selective catalytic reduction (SCR) of NOx with NH3 were observed, in comparison with Cu-beta and Cu-ZSM-5 zeolites. Cu-SSZ-13 was not only more active in the NOx SCR reaction over the entire temperature range studied (up to 550 °C), but also more selective toward nitrogen formation, resulting in significantly lower amounts of NOx by-products (i.e., NO2 and N2O) than the other two zeolites. In addition, Cu-SSZ-13 demonstrated the highest activity and N2 formation selectivity in the oxidation of NH3. The results of this study strongly suggest that Cu-SSZ-13 is a promising candidate as a catalyst for NOx SCR with great potential in after-treatment systems for either mobile or stationary sources. © 2010 Elsevier Inc. All rights reserved.


Klimont Z.,International Institute For Applied Systems Analysis | Smith S.J.,Pacific Northwest National Laboratory | Cofala J.,International Institute For Applied Systems Analysis
Environmental Research Letters | Year: 2013

The evolution of global and regional anthropogenic SO2 emissions in the last decade has been estimated through a bottom-up calculation. We estimate that, after increasing until about 2006, emissions followed a declining trend continuing until 2011. However, there is strong spatial variability, with North America and Europe continuing to reduce emissions while Asia and international shipping play an increasing role. China remains a key contributor, but the introduction of stricter emission limits followed by an ambitious program of installing flue gas desulfurization on power plants resulted in a significant decline in emissions from the energy sector and a stabilization of total Chinese SO2 emissions. Comparable mitigation strategies are not yet present in several other Asian countries and in industrial sectors in general, while emissions from international shipping are expected to start declining soon following an international agreement to reduce the sulfur content of fuel oil. The estimated trends in global SO2 emissions are within the range of the representative concentration pathway (RCP) projections and the uncertainty previously estimated for the year 2005. © 2013 IOP Publishing Ltd.


Xiao J.,Pacific Northwest National Laboratory | Chernova N.A.,Binghamton University State University of New York | Stanley Whittingham M.,Binghamton University State University of New York
Chemistry of Materials | Year: 2010

The layered oxide cathode material LiMO2, where M = Ni0.9-yMnyCo0.1 and 0.45 ≤ y ≤ 0.60, was synthesized by a coprecipitation method. X-ray diffraction analysis shows that the maximum manganese content in the stoichiometric material, i.e. with Li:M = 1, cannot exceed 50%; otherwise, a second phase is formed. Rietveld refinement reveals that increasing manganese content suppresses the disorder between the lithium and nickel ions. Magnetic measurements show that part of the Mn4+ ions in the manganese-rich compounds is reduced to Mn3+; this results in a larger hysteresis loop due to the increased magnetic moment of the resulting ferrimagnetically ordered clusters. LiNi0.4Mn0.5Co0.1O2 and LiNi0.45Mn0.45Co0.1O2 show similar electrochemical capacities of around 180 mAh/g (between 2.5 and 4.6 V at 0.5 mA/cm2) for the first discharge. However, subsequent cycling of LiNi0.4Mn0.5Co0.1O2 results in faster capacity loss and poorer rate capability, indicating that manganese-rich compounds, with Li:M = 1:1, are probably not suitable candidates for lithium batteries. © 2009 American Chemical Society.


Lemoine D.,University of Arizona | McJeon H.C.,Pacific Northwest National Laboratory
Environmental Research Letters | Year: 2013

Climate change policies must trade off uncertainties about future warming, about the social and ecological impacts of warming, and about the cost of reducing greenhouse gas emissions. We show that laxer carbon targets produce broader distributions for climate damages, skewed towards severe outcomes. However, if potential low-carbon technologies fill overlapping niches, then more stringent carbon targets produce broader distributions for the cost of reducing emissions, skewed towards high-cost outcomes. We use the technology-rich GCAM integrated assessment model to assess the robustness of 450 and 500 ppm carbon targets to each uncertain factor. The 500 ppm target provides net benefits across a broad range of futures. The 450 ppm target provides net benefits only when impacts are greater than conventionally assumed, when multiple technological breakthroughs lower the cost of abatement, or when evaluated with a low discount rate. Policy evaluations are more sensitive to uncertainty about abatement technology and impacts than to uncertainty about warming. © 2013 IOP Publishing Ltd.


Kwak J.H.,Pacific Northwest National Laboratory | Kovarik L.,Pacific Northwest National Laboratory | Szanyi J.,Pacific Northwest National Laboratory
ACS Catalysis | Year: 2013

Because of their heterogeneous nature, supported metal catalysts always contain metal centers in a rather broad dispersion range, and the presence of even atomically dispersed metals has been reported on oxide supports. The role of the atomically dispersed metal centers in the overall catalytic performances of these supported metal catalysts, however, has not been addressed to date. In this study, temperature programmed reaction and scanning transmission electron microscopy experiments were applied to show the fundamentally different reactivity patterns exhibited by Pd metal in atomically dispersed and traditional 3D clusters in the demanding reaction of CO2 reduction. The requirement for two different catalyst functionalities in the reduction of CO2 with hydrogen on Pd/Al2O3 and Pd/MWCNT catalysts was also substantiated. The results obtained clearly show that the oxide support material, even when it is considered inert like Al2O3, can function as a critical, active component of complex catalyst systems. © 2013 American Chemical Society.


Kathmann S.M.,Pacific Northwest National Laboratory | Kuo I.-F.W.,Lawrence Livermore National Laboratory | Mundy C.J.,Pacific Northwest National Laboratory | Schenter G.K.,Pacific Northwest National Laboratory
Journal of Physical Chemistry B | Year: 2011

We have resolved the inconsistency in quantifying the surface potential at the liquid-vapor interface when using explicit ab initio electronic charge density and effective atomic partial charge models of liquid water. This is related, in part, to the fact that the resulting electric potentials from partial-charge models and ab initio charge distributions are quite different except for those regions of space between the molecules. We show that the electrostatic surface potential from a quantum mechanical charge distribution compares well to high-energy electron diffraction and electron holography measurements, as opposed to the comparison with electrochemical measurements. We suggest that certain regions of space be excluded when comparing computed surface potentials with electrochemical measurements. This work describes a novel interpretation of ab initio computed surface potentials through high-energy electron holography measurements as useful benchmarks toward a better understanding of electrochemistry. © 2011 American Chemical Society.


Johnson G.E.,Pacific Northwest National Laboratory | Hu Q.,Pacific Northwest National Laboratory | Laskin J.,Pacific Northwest National Laboratory
Annual Review of Analytical Chemistry | Year: 2011

Soft and reactive landing of mass-selected ions onto surfaces has become a topic of substantial interest due to its promising potential for the highly controlled preparation of materials. For example, there are possible applications in the production of peptide and protein microarrays for use in high-throughput screening, protein separation and conformational enrichment of peptides, redox protein characterization, thin-film production, and the preparation of catalysts through deposition of clusters and organometallic complexes. Soft landing overcomes many of the limitations associated with conventional thin-film production techniques and offers unprecedented selectivity and specificity of preparation of deposited species. This review discusses the fundamental aspects of soft and reactive landing of mass-selected ions on surfaces that pertain to applications of these techniques in biomaterials, molecular electronics, catalysis, and interfacial chemistry. Copyright © 2011 by Annual Reviews. All rights reserved.


Laskin J.,Pacific Northwest National Laboratory
International Journal of Mass Spectrometry | Year: 2015

In this work, resonant ejection coupled with surface-induced dissociation (SID) in a Fourier transform ion cyclotron resonance mass spectrometer is used to examine fragmentation kinetics of two singly protonated hexapeptides, RYGGFL and KYGGFL, containing the basic arginine residue and less basic lysine residue at the N-terminus. The kinetics of individual reaction channels at different collision energies are probed by applying a short ejection pulse (1 ms) in resonance with the cyclotron frequency of a selected fragment ion and varying the delay time between ion-surface collision and resonant ejection while keeping total reaction delay time constant. Rice-Ramsperger-Kassel-Marcus (RRKM) modeling of the experimental data provides accurate threshold energies and activation entropies of individual reaction channels. Substitution of arginine with less basic lysine has a pronounced effect on the observed fragmentation kinetics of several pathways, including the b2 ion formation, but has little or no effect on formation of the b5+H2O fragment ion. The combination of resonant ejection SID, time- and collision energy-resolved SID, and RRKM modeling of both types of experimental data provides a detailed mechanistic understanding of the primary dissociation pathways of complex gaseous ions. © 2015 Elsevier B.V. All rights reserved.
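
The RRKM modeling referred to here rests on the standard microcanonical rate expression

k(E) = \frac{N^{\ddagger}(E - E_{0})}{h\,\rho(E)}

where N^{\ddagger}(E - E_{0}) is the sum of states of the transition state above the threshold energy E_{0}, \rho(E) is the density of states of the dissociating ion, and h is Planck's constant; extracting threshold energies and activation entropies amounts to fitting the parameters that enter N^{\ddagger} and E_{0} to the time- and energy-resolved data.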


Xiao J.,Pacific Northwest National Laboratory | Choi D.,Pacific Northwest National Laboratory | Cosimbescu L.,Pacific Northwest National Laboratory | Koech P.,Pacific Northwest National Laboratory | And 2 more authors.
Chemistry of Materials | Year: 2010

Disordered nanocomposites composed of polyethylene oxide and exfoliated MoS2 are cycled in the voltage range of 0.01-3.0 V. The nanocomposites showed a significant increase in reversible capacity and cycle stability when compared to exfoliated or bulk MoS2 and may be used as an anode material in lithium ion batteries. © 2010 American Chemical Society.


Autrey T.,Pacific Northwest National Laboratory | Bowden M.,Pacific Northwest National Laboratory | Karkamkar A.,Pacific Northwest National Laboratory
Faraday Discussions | Year: 2011

Molecular complexes of Lewis acid-base pairs can be used to activate molecular hydrogen for applications ranging from hydrogen storage for fuel cells to catalytic hydrogenation reactions. In this paper, we examine the factors that determine the thermodynamics of hydrogen activation of a Lewis acid-base pair using the pedagogical examples of ammonia borane (NH3BH3, AB) and ammonium borohydride ([NH4][BH4], ABH2). At ambient temperatures, ABH2 loses hydrogen to form the Lewis acid-base complex AB, suggesting that free energy drives the reaction to release hydrogen. However, direct measurement of the reaction enthalpy is not straightforward given the complex decomposition pathways leading to the formation of the diammoniate of diborane ([NH3BH2NH3][BH4], DADB). In this work, we compare two approaches for deriving the thermodynamic relationships among AB, DADB, and ABH2. © The Royal Society of Chemistry 2011.


Stoliker D.L.,U.S. Geological Survey | Liu C.,Pacific Northwest National Laboratory | Kent D.B.,U.S. Geological Survey | Zachara J.M.,Pacific Northwest National Laboratory
Water Resources Research | Year: 2013

Rates of U(VI) release from individual dry-sieved size fractions of a field-aggregated, field-contaminated composite sediment from the seasonally saturated lower vadose zone of the Hanford 300-Area were examined in flow-through reactors to maintain quasi-constant chemical conditions. The principal source of variability in equilibrium U(VI) adsorption properties of the various size fractions was the impact of variable chemistry on adsorption. This source of variability was represented using surface complexation models (SCMs) with different stoichiometric coefficients with respect to hydrogen ion and carbonate concentrations for the different size fractions. A reactive transport model incorporating equilibrium expressions for cation exchange and calcite dissolution, along with rate expressions for aerobic respiration and silica dissolution, described the temporal evolution of solute concentrations observed during the flow-through reactor experiments. Kinetic U(VI) desorption was well described using a multirate SCM with an assumed lognormal distribution for the mass-transfer rate coefficients. The estimated mean and standard deviation of the rate coefficients were the same for all <2 mm size fractions but differed for the 2-8 mm size fraction. Micropore volumes, assessed using t-plots to analyze N2 desorption data, were also the same for all dry-sieved <2 mm size fractions, indicating a link between micropore volumes and mass-transfer rate properties. Pore volumes for dry-sieved size fractions exceeded values for the corresponding wet-sieved fractions. We hypothesize that repeated field wetting and drying cycles lead to the formation of aggregates and/or coatings containing (micro)pore networks which provided an additional mass-transfer resistance over that associated with individual particles. The 2-8 mm fraction exhibited a larger average and standard deviation in the distribution of mass-transfer rate coefficients, possibly caused by the abundance of microporous basaltic rock fragments. © 2013. American Geophysical Union. All Rights Reserved.
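
A minimal numerical sketch of a multirate model with lognormally distributed mass-transfer rate coefficients, as described above (all parameter values are illustrative placeholders, not the fitted Hanford values):

# Illustrative multirate first-order desorption: the sorbed mass is split
# across sites whose rate coefficients follow a lognormal distribution.
# mu, sigma, and the site count are hypothetical, not the fitted values.
import numpy as np

mu, sigma, n_sites = -3.0, 1.5, 1000      # ln-space mean/std of k (1/h)
rng = np.random.default_rng(1)
k = rng.lognormal(mean=mu, sigma=sigma, size=n_sites)
w = np.full(n_sites, 1.0 / n_sites)       # equal initial mass per site

t = np.linspace(0.0, 200.0, 50)           # hours
# Fraction of initially sorbed U(VI) remaining: sum_i w_i * exp(-k_i * t)
remaining = np.array([np.sum(w * np.exp(-k * ti)) for ti in t])
print(remaining[:5])

The fitted mean and standard deviation of ln k are the two quantities reported above to coincide across the <2 mm fractions and to differ for the 2-8 mm fraction.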


Runkle R.C.,National Nuclear Security Administration | Runkle R.C.,Pacific Northwest National Laboratory
Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment | Year: 2011

Perhaps the most familiar application of neutron detection technology to nonproliferation resides in materials accounting, where the quantification of plutonium has a rich history. With a changing dynamic in nuclear security, the application of sensor technology to further other nonproliferation objectives has received considerable attention. This fact, amplified by a dwindling supply of 3He, has stimulated considerable interest in neutron detection technology development for applications ranging from interdicting smuggled nuclear material to the verification of stockpile reductions. This manuscript briefly reviews the application of neutron sensors to nonproliferation and examines three specific examples that highlight the constraints placed on field-deployed technology. © 2011 Elsevier B.V.


Zhou Y.,Pacific Northwest National Laboratory | Smith S.J.,Pacific Northwest National Laboratory
Environmental Research Letters | Year: 2013

Wind power, a renewable energy source, can play an important role in electrical energy generation. Information regarding wind energy potential is important both for energy related modeling and for decision-making in the policy community. While wind speed datasets with high spatial and temporal resolution are often ultimately used for detailed planning, simpler assumptions are often used in analysis work. An accurate representation of the wind speed frequency distribution is needed in order to properly characterize wind energy potential. Using a power density method, this study estimated global variation in wind parameters as fitted to a Weibull density function using NCEP Climate Forecast System Reanalysis (CFSR) data over land areas. The Weibull distribution performs well in fitting the time series wind speed data at most locations according to R2, root mean square error, and power density error. The wind speed frequency distribution, as represented by the Weibull k parameter, exhibits a large amount of spatial variation, a regionally varying amount of seasonal variation, and relatively low decadal variation. We also analyzed the potential error in wind power estimation when a commonly assumed Rayleigh distribution (Weibull k = 2) is used. We find that the assumption of the same Weibull parameter across large regions can result in non-negligible errors. While large-scale wind speed data are often presented in the form of mean wind speeds, these results highlight the need to also provide information on the wind speed frequency distribution. © 2013 IOP Publishing Ltd.
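
The bookkeeping behind such a fit can be sketched with standard Weibull relations; the synthetic data below stand in for a reanalysis time series, and the workflow is illustrative rather than the CFSR procedure used in the paper:

# Fit a Weibull distribution to a wind-speed series and compare the mean
# power density implied by the fit with the value computed directly from
# the samples. The synthetic speeds stand in for a reanalysis time series.
import numpy as np
from math import gamma
from scipy import stats

rng = np.random.default_rng(2)
v = rng.weibull(2.2, size=10_000) * 7.0   # synthetic wind speeds, m/s

# Fix the location parameter at zero, as is usual for wind speeds.
k, loc, c = stats.weibull_min.fit(v, floc=0.0)

rho = 1.225                               # air density, kg/m^3
pd_fit = 0.5 * rho * c**3 * gamma(1.0 + 3.0 / k)   # W/m^2 from the fit
pd_data = 0.5 * rho * np.mean(v**3)                # W/m^2 from the samples
print(f"k={k:.2f}  c={c:.2f} m/s  fit={pd_fit:.0f}  data={pd_data:.0f} W/m^2")

The closed-form mean power density, (1/2) rho c^3 Gamma(1 + 3/k), is what makes the Weibull k parameter so consequential: two sites with the same mean speed but different k can differ substantially in available power, which is exactly the error incurred by a blanket Rayleigh (k = 2) assumption.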


Lanekoff I.,Pacific Northwest National Laboratory | Thomas M.,Pacific Northwest National Laboratory | Laskin J.,Pacific Northwest National Laboratory
Analytical Chemistry | Year: 2014

Mass spectrometry imaging (MSI) has been extensively used for determining spatial distributions of molecules in biological samples, and there is increasing interest in using MSI for quantification. Nanospray desorption electrospray ionization (nano-DESI) is an ambient MSI technique where a solvent is used for localized extraction of molecules followed by nanoelectrospray ionization. Doping the nano-DESI solvent with carefully selected standards enables online quantification during MSI experiments. In this proof-of-principle study, we demonstrate that this quantification approach can be extended to provide shotgun-like quantification of phospholipids in thin brain tissue sections. Specifically, two phosphatidylcholine (PC) standards were added to the nano-DESI solvent for simultaneous imaging and quantification of 22 endogenous PC species observed in nano-DESI MSI. Furthermore, by combining the quantitative data obtained in the individual pixels, we demonstrate quantification of these PC species in seven different regions of a rat brain tissue section. © 2014 American Chemical Society.
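
The single-point internal-standard relation underlying this approach can be written generically (the published calibration may include additional correction factors) as

c_{\mathrm{analyte}} \approx \frac{I_{\mathrm{analyte}}}{I_{\mathrm{IS}}}\; c_{\mathrm{IS}}

where the intensities I are measured in each pixel during imaging and c_{\mathrm{IS}} is the known concentration of the standard doped into the nano-DESI solvent.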


Robinson J.,Rutgers University | Johnson T.,Pacific Northwest National Laboratory | Slater L.,Rutgers University
Geophysics | Year: 2013

There is a need to better characterize discrete fractures in contaminated hard rock aquifers to determine the fate of remediation injections away from boreholes and also to evaluate hydraulic fracturing performance. A synthetic cross-borehole electrical resistivity study was conducted assuming a discrete fracture model of an existing contaminated site with known fracture locations. Four boreholes and two discrete fracture zones, assumed to be the dominant electrical and hydraulically conductive pathways, were explicitly modeled within an unstructured tetrahedral mesh. We first evaluated different regularization constraints starting with an uninformed smoothness-constrained inversion, to which a priori information was incrementally added. We found major improvements when (1) smoothness regularization constraints were relaxed (or disconnected) along boreholes and fractures, (2) a homogeneous conductivity was assumed along boreholes, and (3) borehole conductivity constraints that could be determined from a specific conductance log were applied. We also evaluated the effect of including borehole packers on fracture zone model recovery. We found that the fracture zone conductivities with the inclusion of packers were comparable to similar trials excluding the use of packers regardless of electrical potential changes. The misplacement of fracture regularization disconnects (FRDs) can easily be misinterpreted as actual fracture locations. Conductivities within these misplaced disconnects were near the starting model value, and removing smoothing between boreholes and assumed fracture locations helped in identifying incorrectly located FRDs. We found that structural constraints used after careful evaluation of a priori information are critical to improve imaging of fracture electrical conductivities, locations, and orientations. © 2013 Society of Exploration Geophysicists.
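
Smoothness-constrained resistivity inversions of this type typically minimize a Tikhonov-style objective (a generic form; the paper's implementation details may differ):

\Phi(\mathbf{m}) = \left\lVert \mathbf{W}_{d}\left(\mathbf{d} - f(\mathbf{m})\right)\right\rVert^{2} + \lambda \left\lVert \mathbf{W}_{m}\,\mathbf{m} \right\rVert^{2}

where \mathbf{d} are the measured data, f(\mathbf{m}) is the forward response of conductivity model \mathbf{m}, \mathbf{W}_{d} is a data-weighting matrix, and \mathbf{W}_{m} is a roughness operator. A fracture regularization disconnect amounts to deleting the rows of \mathbf{W}_{m} that penalize conductivity jumps across the assumed fracture or borehole, which is why a misplaced disconnect can masquerade as a fracture.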


Kerisit S.,Pacific Northwest National Laboratory | Bylaska E.J.,Pacific Northwest National Laboratory | Felmy A.R.,Pacific Northwest National Laboratory
Chemical Geology | Year: 2013

Divalent cation-rich silicate minerals such as olivines have a high potential for carbonation reaction and thus their reactivity with water/carbon dioxide mixed fluids is critical to understanding chemical interactions relevant to carbon dioxide capture and sequestration. Therefore, plane-wave density functional theory (DFT) calculations were performed to simulate water and carbon dioxide adsorption at the (010) surface of five olivine minerals, namely, forsterite (Mg2SiO4), calcio-olivine (Ca2SiO4), tephroite (Mn2SiO4), fayalite (Fe2SiO4), and Co-olivine (Co2SiO4). Adsorption energies per water molecule obtained from energy minimizations varied from -100 kJ mol-1 for tephroite to -148 kJ mol-1 for calcio-olivine at sub-monolayer coverage and became less exothermic as coverage increased. In contrast, carbon dioxide adsorption energies at sub-monolayer coverage ranged from -44 kJ mol-1 for fayalite to -83 kJ mol-1 for calcio-olivine. Therefore, the DFT calculations show a strong driving force for carbon dioxide displacement by water at the surface of all olivine minerals in a competitive adsorption scenario. Additionally, adsorption energies for both water and carbon dioxide were found to be more exothermic for the alkaline-earth (AE) olivines than for the transition-metal (TM) olivines and to not correlate with the solvation enthalpies of the corresponding divalent cations. However, a correlation was obtained with the charge of the surface divalent cation indicating that the more ionic character of the AE cations in the olivine structure relative to the TM cations leads to greater interactions with adsorbed water and carbon dioxide molecules at the surface and thus more exothermic adsorption energies for the AE olivines. For calcio-olivine, which exhibits the highest divalent cation charge of the five olivines, ab initio molecular dynamics simulations showed that this effect leads both water and carbon dioxide to react with the surface and form hydroxyl groups and a carbonate-like species, respectively. © 2013 Elsevier B.V.
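
The adsorption energies quoted follow the usual slab convention (per adsorbed molecule, with negative values meaning exothermic adsorption):

E_{\mathrm{ads}} = \frac{1}{n}\left(E_{\mathrm{slab}+n\,\mathrm{mol}} - E_{\mathrm{slab}} - n\,E_{\mathrm{mol}}\right)

so the statement that water adsorption is more exothermic than carbon dioxide adsorption on every olivine corresponds to E_ads(H2O) < E_ads(CO2) at comparable coverage.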


Ghan S.J.,Pacific Northwest National Laboratory
Atmospheric Chemistry and Physics | Year: 2013

Estimating anthropogenic aerosol effects on the planetary energy balance through the aerosol influence on clouds using the difference in cloud radiative forcing from simulations with and without anthropogenic emissions produces estimates that are positively biased. A more representative method is suggested using the difference in cloud radiative forcing calculated as a diagnostic with aerosol scattering and absorption neglected. The method also yields an aerosol radiative forcing decomposition that includes a term quantifying the impact of changes in surface albedo. The method requires only two additional diagnostic calculations: the whole-sky and clear-sky top-of-atmosphere radiative flux with aerosol scattering and absorption neglected. © Author(s) 2013.
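
Written out, the decomposition described here uses two extra diagnostic fluxes: F_clean, the top-of-atmosphere flux computed with aerosol scattering and absorption neglected, and F_clear,clean, which additionally neglects clouds. Taking the anthropogenic change \Delta of each term gives (notation here is schematic)

\Delta F = \Delta\left(F - F_{\mathrm{clean}}\right) + \Delta\left(F_{\mathrm{clean}} - F_{\mathrm{clear,clean}}\right) + \Delta F_{\mathrm{clear,clean}}

with the three terms quantifying the direct aerosol effect, the aerosol effect on clouds, and the surface albedo contribution, respectively.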


Kalenderski S.,King Abdullah University of Science and Technology | Stenchikov G.,King Abdullah University of Science and Technology | Zhao C.,Pacific Northwest National Laboratory
Atmospheric Chemistry and Physics | Year: 2013

We used WRF-Chem, a regional meteorological model coupled with an aerosol-chemistry component, to simulate various aspects of the dust phenomena over the Arabian Peninsula and Red Sea during a typical winter-time dust event that occurred in January 2009. The model predicted that the total amount of emitted dust was 18.3 Tg for the entire dust outburst period and that the two maximum daily rates were ∼2.4 Tg day-1 and ∼1.5 Tg day-1, corresponding to two periods with the highest aerosol optical depth that were well captured by ground- and satellite-based observations. The model predicted that the dust plume was thick, extensive, and mixed in a deep boundary layer at an altitude of 3-4 km. Its spatial distribution was modeled to be consistent with typical spatial patterns of dust emissions. We utilized MODIS-Aqua and Solar Village AERONET measurements of the aerosol optical depth (AOD) to evaluate the radiative impact of aerosols. Our results clearly indicated that the presence of dust particles in the atmosphere caused a significant reduction in the amount of solar radiation reaching the surface during the dust event. We also found that dust aerosols have significant impact on the energy and nutrient balances of the Red Sea. Our results showed that the simulated cooling under the dust plume reached 100 W m-2, which could have profound effects on both the sea surface temperature and circulation. Further analysis of dust generation and its spatial and temporal variability is extremely important for future projections and for better understanding of the climate and ecological history of the Red Sea. © 2013 Author(s).


Taylor R.C.,Pacific Northwest National Laboratory
BMC Bioinformatics | Year: 2010

Background: Bioinformatics researchers are now confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. Description: An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employ Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date. Conclusions: Hadoop and the MapReduce programming paradigm already have a substantial base in the bioinformatics community, especially in the field of next-generation sequencing analysis, and such use is increasing. This is due to the cost-effectiveness of Hadoop-based analysis on commodity Linux clusters, and in the cloud via data upload to cloud vendors who have implemented Hadoop/HBase; and due to the effectiveness and ease-of-use of the MapReduce method in parallelization of many data analysis algorithms. © 2010 Taylor; licensee BioMed Central Ltd.
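
A Hadoop Streaming-style mapper/reducer pair in Python illustrates the programming model on a bioinformatics-flavored task, counting k-mers in sequencing reads (a generic sketch, not any of the specific tools surveyed):

# Hadoop Streaming-style k-mer counting: the mapper emits (kmer, 1) pairs
# from each read on stdin; the reducer sums counts over key-sorted input.
# Illustrative only; real pipelines add format parsing, quality checks, etc.
import sys
from itertools import groupby

K = 8  # k-mer length (assumed)

def mapper(lines):
    for read in lines:
        read = read.strip()
        for i in range(len(read) - K + 1):
            print(f"{read[i:i+K]}\t1")

def reducer(lines):
    # Hadoop delivers reducer input sorted by key, so groupby suffices.
    pairs = (line.strip().split("\t") for line in lines)
    for kmer, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{kmer}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "map"
    (mapper if role == "map" else reducer)(sys.stdin)

Run locally, "cat reads.txt | python kmer.py map | sort | python kmer.py reduce" mimics one map-shuffle-reduce pass; Hadoop performs the same steps fault-tolerantly across a cluster.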


Hammond G.E.,Pacific Northwest National Laboratory | Lichtner P.C.,Los Alamos National Laboratory
Water Resources Research | Year: 2010

High-resolution, three-dimensional, reactive flow and transport simulations are carried out to describe the migration of hexavalent uranium [U(VI)] at the Hanford 300 Area bordering the Columbia River and to better understand the persistence of the uranium plume at the site. The computer code PFLOTRAN developed under a DOE SciDAC-2 project is employed in the simulations that are executed on ORNL's Cray XT4/XT5 supercomputer Jaguar. The conceptual model used in the simulations is based on the recognition of three distinct phases or time periods in the evolution of the U(VI) plume. These correspond to (1) initial waste emplacement; (2) initial presence of both labile and nonlabile U(VI) with an evolved U(VI) plume extending from the source region to the river boundary, representing present-day conditions; and (3) the complete removal of all nonlabile U(VI) and labile U(VI) in the vadose zone. This work focuses primarily on modeling Phase II using equilibrium and multirate sorption models for labile U(VI) and a continuous source release of nonlabile U(VI) in the South Process Pond through dissolution of metatorbernite as a surrogate mineral. For this case, rapid fluctuations in the Columbia River stage combined with the slow release of nonlabile U(VI) from contaminated sediment are found to play a predominant role in determining the migration behavior of U(VI) with sorption only a second-order effect. Nevertheless, a multirate model was essential in explaining breakthrough curves obtained from laboratory column experiments using the same sediment and is demonstrated to be important in Phase III. The calculations demonstrate that U(VI) is discharged to the river at a highly fluctuating rate in a ratchet-like behavior as the river stage rises and falls. The high-frequency fluctuations must be resolved in the model to calculate the flux of U(VI) at the river boundary. By time averaging the instantaneous flux to average out noise superimposed on the river stage fluctuations, the cumulative U(VI) flux to the river is found to increase approximately linearly with time. The flow rate and U(VI) flux are highly sensitive to the conductance boundary condition that describes the river-sediment interface. By adjusting the conductance coefficient to give a better match to the measured piezometric head, good agreement was obtained with field studies for both the mean flux of water of 109 kg/yr and U(VI) of 25 kg/yr at the river-aquifer boundary for a computational domain encompassing the South Process Pond. Finally, it is demonstrated that, through global mass conservation, the U(VI) leach rate from the source region is related to the U(VI) flux at the river boundary. Copyright 2010 by the American Geophysical Union.
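
The conductance boundary condition referred to couples the aquifer to the river through a flux proportional to the head difference across the river-sediment interface; schematically (notation ours, not necessarily PFLOTRAN's),

q = C\,\left(h_{\mathrm{river}} - h_{\mathrm{aquifer}}\right)

where q is the water flux across the interface, C is the conductance coefficient that was adjusted to match the measured piezometric head, and the h terms are the river stage and the aquifer head.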


Wogman N.A.,Pacific Northwest National Laboratory
Journal of Radioanalytical and Nuclear Chemistry | Year: 2013

The International Atomic Energy Agency (IAEA) is committed to strengthening and streamlining the overall effectiveness of the IAEA safeguards system within the context of the Non-Proliferation Treaty (NPT). The IAEA has investigated the use of environmental monitoring techniques and a variety of techniques were studied as part of extensive field trials. The efficacy of long-range monitoring depends on the availability of mobile signature isotopes or compounds and on the ability to distinguish the nuclear signatures from background signals and attribute them to a source. The Comprehensive Nuclear Test Ban Treaty (CTBT) also requires a variety of environmental sampling and analysis techniques. This paper serves as a scientific basis to start discussions of environmental sampling techniques that could be considered for wide-area monitoring for the detection of undeclared nuclear activities within the NPT or for the possible future use within the CTBT. © Akadémiai Kiadó, Budapest, Hungary 2012.


Dooley J.J.,Pacific Northwest National Laboratory
International Journal of Greenhouse Gas Control | Year: 2011

By their very nature, early national and basin scale assessments of geologic carbon dioxide (CO2) storage capacity must rely on simplifying assumptions and generalizations across a broad range of deep geologic structures. Key aspects of the technical literature and much of the public policy dialogue surrounding these assessments tend to emphasize the lack of detailed data and uncertainties at these scales. However, looking beyond the imperfections of data and methodology, the results of such assessments offer significant value in helping us to understand the potential for carbon dioxide capture and storage (CCS) technologies to deploy across various regions of the world. © 2010.


He J.,TU Munich | Zhao C.,TU Munich | Lercher J.A.,TU Munich | Lercher J.A.,Pacific Northwest National Laboratory
Journal of Catalysis | Year: 2014

Impacts of water, methanol, and hexadecane solvents on the individual steps of phenol hydrodeoxygenation are investigated over Pd/C and HZSM-5 catalyst components at 473 K in the presence of H2. Hydrodeoxygenation of phenol to cyclohexane comprises four individual steps: phenol hydrogenation to cyclohexanone on Pd/C, cyclohexanone hydrogenation to cyclohexanol on Pd/C, cyclohexanol dehydration to cyclohexene on HZSM-5, and cyclohexene hydrogenation to cyclohexane on Pd/C. The individual phenol and cyclohexanone hydrogenation rates are much lower in methanol and hexadecane than in water, while the rates of cyclohexanol dehydration and cyclohexene hydrogenation are similar in the three solvents. The slow rate in methanol is due to the strong solvation of reactants and the adsorption of methanol on Pd, as well as to the reaction between methanol and the cyclohexanone intermediate. The low solubility of phenol and the strong interaction of hexadecane with Pd lead to the slow rate in hexadecane. The apparent activation energies for hydrogenation follow the order Ea(phenol) > Ea(cyclohexanone) > Ea(cyclohexene), and the sequence of the individual reaction rates is the reverse in the three solvents. The dehydration rates (1.1-1.8×10^3 mol (mol BAS)^-1 h^-1) and apparent activation energies (115-124 kJ mol-1) are comparable in the three solvents. In situ liquid-phase IR spectroscopy shows rates consistent with the kinetics derived from chromatographic evidence in the aqueous phase and verifies that hydrogenation of phenol and cyclohexanone follows reaction orders of 1.0 and 0.55 over Pd/C, respectively. Conversion of cyclohexanol with HZSM-5 shows first-order dependence in approaching the dehydration-hydration equilibrium in the aqueous phase. © Published by Elsevier Inc.


Smith J.L.,Washington State University | Collins H.P.,U.S. Department of Agriculture | Bailey V.L.,Pacific Northwest National Laboratory
Soil Biology and Biochemistry | Year: 2010

The low temperature pyrolysis of organic material produces biochar, a charcoal-like substance. Biochar is being promoted as a soil amendment to enhance soil quality; it is also seen as a mechanism for long-term sequestration of carbon. Our experiments tested the hypothesis that biochar is inert in soil. However, we measured an increase in CO2 production from soils after biochar amendment, which increased with increasing rates of biochar. The δ13C signature of the CO2 evolved in the first several days of the incubation was the same as the δ13C signature of the biochar, confirming that biochar contributed to the CO2 flux. This effect diminished by day 6 of the incubation, suggesting that most of the biochar C is slowly decomposing. Thus, aside from this short-term mineralization, increasing soil C with young biochar may indeed be a long-term C storage mechanism. © 2010.
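
The attribution of evolved CO2 to biochar follows from a standard two-source isotope mixing calculation (the paper's exact bookkeeping may differ):

f_{\mathrm{biochar}} = \frac{\delta^{13}\mathrm{C}_{\mathrm{sample}} - \delta^{13}\mathrm{C}_{\mathrm{soil}}}{\delta^{13}\mathrm{C}_{\mathrm{biochar}} - \delta^{13}\mathrm{C}_{\mathrm{soil}}}

where f_biochar is the fraction of the CO2 flux derived from biochar and the δ13C values are the signatures of the evolved CO2, the unamended soil, and the biochar, respectively.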


Zhang S.,Harbin Institute of Technology | Zhang S.,Pacific Northwest National Laboratory | Shao Y.,Pacific Northwest National Laboratory | Yin G.,Harbin Institute of Technology | Lin Y.,Pacific Northwest National Laboratory
Angewandte Chemie - International Edition | Year: 2010

"Chemical Equation Presented" Opposites attract: A Pt-around-Au nanocomposite is synthesized by electrostatic self-assembly (see picture; PDDA = poly (diallyldimethylammonium chloride)). This catalyst shows significantly improved activity towards formic acid oxidation versus pure Pt catalysts. The possible reason is the efficient spillover of HCOO from Au to the surrounding Pt nanoparticles, where HCOO is further oxidized to CO2. © 2010 Wiley-VCH Verlag GmbH & Co. KGaA.


Yoon K.J.,Pacific Northwest National Laboratory | Coyle C.A.,Pacific Northwest National Laboratory | Marina O.A.,Pacific Northwest National Laboratory
Electrochemistry Communications | Year: 2011

A new high-performance ceramic solid oxide fuel cell (SOFC) anode, a Ca- and Co-doped yttrium chromite (YCCC)-samaria-doped ceria (SDC) composite, is developed that resists deactivation by sulfur and does not show degradation during multiple reduction-oxidation cycles. The electrocatalytic activity of the YCCC-SDC anodes in yttria-stabilized zirconia (YSZ) electrolyte-supported cells toward hydrogen oxidation is comparable to that of the Ni/YSZ anode. YCCC-SDC exhibits superior sulfur tolerance, showing a less than 10% increase in electrode resistance, fully reversible, upon exposure to 20 ppm H2S. The excellent redox tolerance is attributed to the dimensional and chemical stability of the YCCC, which exhibits minimal isothermal "chemical" expansion. © 2011 Elsevier B.V. All rights reserved.


Baer M.D.,Pacific Northwest National Laboratory | Kuo I.-F.W.,Lawrence Livermore National Laboratory | Tobias D.J.,University of California at Irvine | Mundy C.J.,Pacific Northwest National Laboratory
Journal of Physical Chemistry B | Year: 2014

The propensities of the water self-ions, H3O+ and OH-, for the air-water interface have implications for interfacial acid-base chemistry. Despite numerous experimental and computational studies, no consensus has been reached on the question of whether or not H3O+ and/or OH- prefer to be at the water surface or in the bulk. Here we report a molecular dynamics simulation study of the bulk vs interfacial behavior of H3O+ and OH- that employs forces derived from density functional theory with a generalized gradient approximation exchange-correlation functional (specifically, BLYP) and empirical dispersion corrections. We computed the potential of mean force (PMF) for H3O+ as a function of the position of the ion in the vicinity of an air-water interface. The PMF suggests that H3O+ has equal propensity for the interface and the bulk. We compare the PMF for H3O+ to our previously computed PMF for OH- adsorption, which contains a shallow minimum at the interface, and we explore how differences in solvation of each ion at the interface vs in the bulk are connected with interfacial propensity. We find that the solvation shell of H3O+ is only slightly dependent on its position in the water slab, while OH- partially desolvates as it approaches the interface, and we examine how this difference in solvation behavior is manifested in the electronic structure and chemistry of the two ions. © 2014 American Chemical Society.
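
The PMF along the surface normal z is related to the ion's equilibrium density profile by the standard relation

W(z) = -k_{\mathrm{B}} T \,\ln\!\left[\frac{\rho(z)}{\rho_{\mathrm{bulk}}}\right]

so a PMF that is flat from the bulk to the interface (as found here for H3O+) corresponds to equal equilibrium populations at the surface and in the bulk, while the shallow interfacial minimum in the OH- PMF corresponds to a modest surface enhancement.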


Majzoub E.H.,University of Missouri-St. Louis | Ronnebro E.C.E.,Pacific Northwest National Laboratory
Materials Science and Engineering R: Reports | Year: 2012

We present a review of the experimental and theoretical methods used in the discovery of new metal-hydrogen materials systems for hydrogen storage applications. Rather than a comprehensive review of all new materials and methods used in the metal hydride community, we focus on a specific subset of successful methods utilizing theoretical crystal structure prediction methods, computational approaches for screening large numbers of compound classes, and medium-throughput experimental methods for the preparation of such materials. Monte Carlo techniques paired with a simplified empirical Hamiltonian provide crystal structure candidates that are refined using density functional theory. First-principles methods using high-quality structural candidates are further screened for an estimate of reaction energetics, decomposition enthalpies, and determination of reaction pathways. Experimental synthesis utilizes a compacted-pellet sintering technique under high-pressure hydrogen at elevated temperatures. Crystal structure determination follows from a combination of Rietveld refinements of diffraction patterns and first-principles computation of total energies and dynamical stability of competing structures. The methods presented here are general and applicable to a wide class of materials for energy storage. © 2012 Published by Elsevier B.V.


Henderson M.A.,Pacific Northwest National Laboratory
Surface Science | Year: 2012

Nitric oxide chemistry and photochemistry on the Cr-terminated surface of α-Cr2O3(0001) were examined using temperature programmed desorption (TPD), sticking coefficient measurements and photodesorption. NO exposed to α-Cr2O3(0001) at 100 K binds at surface Cr cation sites forming a strongly bound surface species that thermally desorbs at 320-340 K, depending on coverage. No thermal decomposition was detected in TPD in agreement with previous results in the literature. Sticking probability measurements at 100 K indicated near unity sticking for NO up to coverages of ∼1.3 ML, with additional adsorption with higher exposures at decreased sticking probability. These results suggest that some Cr cation sites on the α-Cr2O3(0001) surface were capable of binding more than one NO molecule, although it is unclear whether this was as separate NO molecules or as dimers. Photodesorption of adsorbed NO was examined for surface coverages below the 1 ML point. Both visible and UV light were shown to photodesorb NO without detectable NO photodecomposition. Visible light photodesorption of NO occurred with a greater cross section than estimated using UV light. The visible light photodesorption event was not associated with bandgap excitation in α-Cr2O3(0001), but instead was linked to excitation of a surface Cr3+-NO- charge transfer complex. These results illustrate that localized photoabsorption events at surface sites with unique optical properties (relative to the bulk) can result in unexpected surface photochemistry. © 2011 Elsevier B.V. All rights reserved.


Gephart R.E.,Pacific Northwest National Laboratory
Physics and Chemistry of the Earth | Year: 2010

The world's first full-scale nuclear reactors and chemical reprocessing plants built at the Hanford Site in the desert of southeastern Washington State produced two-thirds of the plutonium generated in the United States for nuclear weapons. Operating these facilities also created large volumes of radioactive and chemical waste, some of which was released into the environment exposing people who lived downwind and downstream. Hanford now contains the largest accumulation of nuclear waste in the Western Hemisphere. Hanford's last reactor shut down in 1987 followed by closure of the last reprocessing plant in 1990. Today, Hanford's only mission is cleanup. Most onsite radioactive waste and nuclear material lingers inside underground tanks or storage facilities. About half of the chemical waste remains in tanks while the rest persists in the soil, groundwater, and burial grounds. Six million dollars each day, or nearly two billion dollars each year, are spent on waste management and cleanup activities. There is significant uncertainty in how long cleanup will take, how much it will cost, and what risks will remain for future generations. This paper summarizes portions of the waste management history of the Hanford Site published in the book "Hanford: A Conversation about Nuclear Waste and Cleanup" (Gephart, 2003). © 2010 Elsevier Ltd.


Baik S.-I.,Northwestern University | Olszta M.J.,Pacific Northwest National Laboratory | Bruemmer S.M.,Pacific Northwest National Laboratory | Seidman D.N.,Northwestern University
Scripta Materialia | Year: 2012

Atom-probe tomography (APT) is utilized to obtain three-dimensional chemical information concerning grain boundary (GB) segregation in a Ni-Cr-Fe alloy 600 with atomic spatial resolution. Detailed crystallography of GBs is determined using an approach that combines electron backscatter diffraction and focused ion-beam microscopy to establish a GB's five macroscopic degrees of freedom, followed by an APT GB composition analysis. Characterizations of GB microstructure and microchemistry are performed to improve our understanding of mechanisms controlling intergranular attack and stress-corrosion cracking. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.


Jana S.,Pacific Northwest National Laboratory | Hovanski Y.,Pacific Northwest National Laboratory
Science and Technology of Welding and Joining | Year: 2012

A short study has been conducted to assess the performance of friction stir welded Mg/steel lap joints under dynamic loads. The major mode of failure was found to be fracture of the top Mg sheet. Crack initiation is noted to have taken place at the Mg/steel interface. The fatigue life of the joints is found to be significantly different from the fatigue data for the Mg alloy reported in the literature. The reasons behind such a difference have been examined in this work. © 2012 Institute of Materials, Minerals and Mining.


Zeng N.,The Interdisciplinary Center | Zhao F.,The Interdisciplinary Center | Collatz G.J.,NASA | Kalnay E.,The Interdisciplinary Center | And 3 more authors.
Nature | Year: 2014

The atmospheric carbon dioxide (CO2) record displays a prominent seasonal cycle that arises mainly from changes in vegetation growth and the corresponding CO2 uptake during the boreal spring and summer growing seasons and CO2 release during the autumn and winter seasons. The CO2 seasonal amplitude has increased over the past five decades, suggesting an increase in Northern Hemisphere biospheric activity. It has been proposed that vegetation growth may have been stimulated by higher concentrations of CO2 as well as by warming in recent decades, but such mechanisms have been unable to explain the full range and magnitude of the observed increase in CO2 seasonal amplitude. Here we suggest that the intensification of agriculture (the Green Revolution, in which much greater crop yield per unit area was achieved by hybridization, irrigation and fertilization) during the past five decades is a driver of changes in the seasonal characteristics of the global carbon cycle. Our analysis of CO2 data and atmospheric inversions shows a robust 15 per cent long-term increase in CO2 seasonal amplitude from 1961 to 2010, punctuated by large decadal and interannual variations. Using a terrestrial carbon cycle model that takes into account high-yield cultivars, fertilizer use and irrigation, we find that the long-term increase in CO2 seasonal amplitude arises from two major regions: the mid-latitude cropland between 25° N and 60° N and the high-latitude natural vegetation between 50° N and 70° N. The long-term trend of seasonal amplitude increase is 0.311 ± 0.027 per cent per year, of which sensitivity experiments attribute 45, 29 and 26 per cent to land-use change, climate variability and change, and increased productivity due to CO2 fertilization, respectively. Vegetation growth was earlier by one to two weeks, as measured by the mid-point of vegetation carbon uptake, and took up 0.5 petagrams more carbon in July, the height of the growing season, during 2001-2010 than in 1961-1970, suggesting that human land use and management contribute to seasonal changes in the CO2 exchange between the biosphere and the atmosphere.


Salathe Jr. E.P.,University of Washington | Leung L.R.,Pacific Northwest National Laboratory | Qian Y.,Pacific Northwest National Laboratory | Zhang Y.,U.S. National Center for Atmospheric Research
Climatic Change | Year: 2010

Global climate models do not have sufficient spatial resolution to represent the atmospheric and land surface processes that determine the unique regional climate of the State of Washington. Regional climate models explicitly simulate the interactions between the large-scale weather patterns simulated by a global model and the local terrain. We have performed two 100-year regional climate simulations using the Weather Research and Forecasting (WRF) model developed at the National Center for Atmospheric Research (NCAR). One simulation is forced by the NCAR Community Climate System Model version 3 (CCSM3) and the second is forced by a simulation of the Max Planck Institute (Hamburg) global model (ECHAM5). The mesoscale simulations produce regional changes in snow cover, cloudiness, and circulation patterns associated with interactions between the large-scale climate change and the regional topography and land-water contrasts. These changes substantially alter the temperature and precipitation trends over the region relative to the global model result or statistical downscaling. To illustrate this effect, we analyze the changes from the current climate (1970-1999) to the mid twenty-first century (2030-2059). Changes in seasonal-mean temperature, precipitation, and snowpack are presented. Several climatological indices of extreme daily weather are also presented: precipitation intensity, fraction of precipitation occurring in extreme daily events, heat wave frequency, growing season length, and frequency of warm nights. Despite somewhat different changes in seasonal precipitation and temperature from the two regional simulations, consistent results for changes in snowpack and extreme precipitation are found in both simulations. © 2010 Springer Science+Business Media B.V.


Aypar U.,University of Maryland Baltimore County | Morgan W.F.,Pacific Northwest National Laboratory | Baulch J.E.,University of Maryland Baltimore County
International Journal of Radiation Biology | Year: 2011

Purpose: This review examines the evidence for the hypothesis that epigenetic mechanisms are involved in the initiation and perpetuation of radiation-induced genomic instability (RIGI). Conclusion: In addition to the extensively studied targeted effects of radiation, it is now apparent that non-targeted delayed effects such as RIGI are also important post-irradiation outcomes. In RIGI, unirradiated progeny cells display phenotypic changes at delayed times after irradiation of the parental cell. RIGI is thought to be important in the process of carcinogenesis; however, the mechanism by which this occurs remains to be elucidated. In the genomically unstable clones developed by Morgan and colleagues, radiation-induced mutations, double-strand breaks, or changes in messenger RNA (mRNA) levels alone could not account for the initiation or perpetuation of RIGI. Since changes in the DNA sequence could not fully explain the mechanism of RIGI, inherited epigenetic changes may be involved. Epigenetic mechanisms are known to play an important role in many cellular processes, and epigenetic aberrations can lead to carcinogenesis. Recent studies in the field of radiation biology suggest that changes in methylation patterns may be involved in RIGI. Together these clues have led us to hypothesise that epigenetics may be the missing link in understanding the mechanism behind RIGI. © 2011 Informa UK, Ltd.


Setyawan W.,Pacific Northwest National Laboratory | Kurtz R.J.,Pacific Northwest National Laboratory
Scripta Materialia | Year: 2012

The cohesion effect of transition metals on W grain boundaries (GBs) was studied via first-principles calculations. Systematic d-band filling for a given atomic environment results in trends of cleavage energy as a function of the solute's electronic valence across different GB sites. Various solutes were found to strengthen the GB. Furthermore, lower- and higher-valence elements enhance cohesion at different positions. This suggests that employing both lower-valence (Hf, Ta, Nb) and higher-valence solutes (Re, Os, Ru) may further increase intergranular cohesion. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.


Petrik N.G.,Pacific Northwest National Laboratory | Kimmel G.A.,Pacific Northwest National Laboratory
Journal of Physical Chemistry C | Year: 2011

The ultraviolet (UV) photon-stimulated reactions in oxygen adsorbed on reduced TiO2(110) at low temperatures (<100 K) are studied. When a single O2 is chemisorbed in each bridging oxygen vacancy, only ∼14% of the O2 desorbs after prolonged UV irradiation. Of the O2 remaining on the surface after irradiation, about one-half dissociates, and the other one-half is left in a nondissociated state that is inactive for hole-mediated photodesorption. For the maximum coverage of chemisorbed oxygen, the fraction of O2 that photodesorbs increases substantially, but is still only ∼40%. However, when physisorbed oxygen is also present, ∼70% of the initially chemisorbed O2 photodesorbs. On the basis of the experimental results, we propose that both hole- and electron-mediated reactions with O2 chemisorbed on TiO2(110) are important. Hole-mediated reactions lead to O2 photodesorption, while electron-mediated reactions lead to O2 dissociation. The electron-mediated reactions explain the low total photodesorption yield when no physisorbed O2 is present. For a fixed amount of chemisorbed 18O2, its photon-stimulated desorption (PSD) yield increases substantially if 16O2 is subsequently chemisorbed, indicating that the hole-mediated O2 photodesorption probability depends on the charge state of the chemisorbed O2; it decreases for more negatively charged O2. Because the charge state of the chemisorbed O2 depends on the total oxygen coverage, the coverage influences the photodesorption process. A simple model based on the oxygen coverage and the charge of the chemisorbed oxygen, which accounts for the observations, is presented. In the model, O2 chemisorbs as either O2^- or O2^2- depending on the oxygen coverage. O2^- (O2^2-) reacting with a hole leads to desorption of neutral O2 with a high (low) probability. O2^2- plus an electron typically leads to O2 dissociation, while O2^- plus an electron does not. © 2010 American Chemical Society.


Shen M.,Pacific Northwest National Laboratory | Henderson M.A.,Pacific Northwest National Laboratory
Journal of Physical Chemistry C | Year: 2011

The effects of water and methanol ice overlayers on the photodecomposition of acetone on rutile TiO2(110) were evaluated in ultrahigh vacuum (UHV) using photon-stimulated desorption (PSD) and temperature-programmed desorption (TPD). In the absence of ice overlayers, acetone photodecomposed on TiO2(110) at 95 K by ejection of a methyl radical into the gas phase and formation of acetate on the surface. With ice overlayers, the methyl radicals are trapped at the interface between TiO2(110) and the ice. When water ice was present, the ejected methyl radicals reacted either with each other to form ethane or with other molecules in the ice (e.g., water or displaced acetone) to form methane (CH4), ethane (CH3CH3), and other products (e.g., methanol), with all of these products trapped in the ice. The new products were free to revisit the surface or to depart during desorption of the ice. Using isotopic labeling, we show that a significant portion (50%) of the methane formed resulted from reactions of methyl radicals with water in the ice. Because methane formation from the reaction of a methyl radical with water is highly endothermic, the ejected methyl radicals must be emitted hyperthermally, with the reaction occurring during the initial collision of the radical with a neighboring water molecule. Formation of ethane (and other products) likely comes as a consequence of unfavorable methyl radical-water collisions in the ice. Similar results were obtained using methanol ice (instead of water), except that the methane and ethane products slowly leaked through the methanol ice overlayers into vacuum at 95 K but not through the water ice overlayers. These results provide new insights into the product formation routes and solution-phase radical formation mechanisms that are important in heterogeneous photocatalysis. © 2011 American Chemical Society.


Deskins N.A.,Worcester Polytechnic Institute | Rousseau R.,Pacific Northwest National Laboratory | Dupuis M.,Pacific Northwest National Laboratory
Journal of Physical Chemistry C | Year: 2011

We describe a DFT + U study of the (110) rutile surface with oxygen vacancies (Ov's). Oxygen vacancies leave behind two excess unpaired electrons per Ov, leading formally to the formation of two Ti3+ ions. We investigate the location of the Ti3+ ions within the first three surface layers. In total, we obtained 49 unique solutions of possible Ti3+ pairs, to examine the stability of all Ti types (e.g., five-coordinated surface Ti, six-coordinated surface Ti, subsurface sites, etc.). Our results show that subsurface sites are preferred but that many configurations are close in energy, within 0.3-0.4 eV of each other. In contrast to findings in previous work, we show that sites directly adjacent to the Ov's are unstable. Analysis of our results shows that the two Ti3+ ions within a pair behave independently of each other, as there is little electronic interaction between the excess electrons associated with these sites. We also examined the migration of Ti3+ sites from the surface into the bulk and find the surface locations to be preferred by ∼0.5 eV relative to the bulk. Our systematic results provide a comprehensive picture of excess electrons, indicating that they are not trapped or localized at specific sites but are distributed across several sites due to nearly degenerate Ti3+ states. © 2011 American Chemical Society.


Wang Y.,Tsinghua University | Wang Y.,Pacific Northwest National Laboratory | Li Z.,Pacific Northwest National Laboratory | Wang J.,Pacific Northwest National Laboratory | And 2 more authors.
Trends in Biotechnology | Year: 2011

Graphene is the basic building block of 0D fullerenes, 1D carbon nanotubes, and 3D graphite. Graphene has a unique planar structure, as well as novel electronic properties, which have attracted great interest from scientists. This review selectively analyzes current advances in the field of graphene bioapplications. In particular, the biofunctionalization of graphene for biological applications, fluorescence-resonance-energy-transfer-based biosensor development using graphene or graphene-based nanomaterials, and the investigation of graphene or graphene-based nanomaterials for living cell studies are summarized in more detail. Future perspectives and possible challenges in this rapidly developing area are also discussed. © 2011 Elsevier Ltd.


Yang Z.,Pacific Northwest National Laboratory | Wang T.,Pacific Northwest National Laboratory
Continental Shelf Research | Year: 2015

Intertidal flats in estuaries are coastal wetlands that provide critical marine habitats supporting a wide range of marine species. Over the last century many estuarine systems have experienced significant loss of intertidal flats due to anthropogenic impacts. This paper presents a modeling study conducted to investigate the responses of estuarine hydrodynamics to the loss of intertidal flats in Whidbey Basin of Puget Sound on the northwest coast of North America. Changes in salinity intrusion limits in the estuaries, salinity stratification, and circulation in intertidal flats and estuaries were evaluated by comparing model results under the existing baseline condition and the no-flat condition. Model results showed that loss of intertidal flats results in an increase in salinity intrusion, stronger mixing, and a phase shift in the salinity and velocity fields in the bay-front areas. Model results also indicated that loss of intertidal flats enhances two-layer circulation, especially the bottom water intrusion. Loss of intertidal flats increases the mean salinity but reduces the salinity range in the subtidal flats over a tidal cycle because of increased mixing. Salinity intrusion limits extend upstream in all three major rivers discharging into Whidbey Basin when no intertidal flats are present. Changes in salinity intrusion and estuarine circulation patterns due to loss of intertidal flats affect the nearshore habitat and water quality in estuaries and potentially increase the risk of coastal hazards, such as storm surge and coastal flooding. Lastly, model results suggested the importance of including intertidal flats and the wetting-and-drying process in hydrodynamic simulations when intertidal flats are present in the model domain. © 2015 Elsevier Ltd.


Yoon H.,Oregon Health And Science University | Ansong C.,Pacific Northwest National Laboratory | Adkins J.N.,Pacific Northwest National Laboratory | Heffron F.,Oregon Health And Science University
Infection and Immunity | Year: 2011

Salmonella enterica serovar Typhimurium, an intracellular pathogen and leading cause of food-borne illness, encodes a plethora of virulence effectors. Salmonella virulence factors are translocated into host cells and manipulate host cellular activities, providing a more hospitable environment for bacterial proliferation. In this study, we report a new set of virulence factors that is translocated into the host cytoplasm via bacterial outer membrane vesicles (OMV). PagK (or PagK1), PagJ, and STM2585A (or PagK2) are small proteins composed of ~70 amino acids with high sequence homology to each other (>85% identity). Salmonella lacking all three homologues was attenuated for virulence in a mouse infection model, suggesting at least partial functional redundancy among the homologues. While each homologue was translocated into the macrophage cytoplasm, their translocation was independent of all three Salmonella-encoded type III secretion systems (T3SSs): the Salmonella pathogenicity island 1 (SPI-1) T3SS, the SPI-2 T3SS, and the flagellar system. Selected methods, including direct microscopy, demonstrated that the PagK-homologous proteins were secreted through OMV, which were enriched with lipopolysaccharide (LPS) and outer membrane proteins. Vesicles produced by intracellular bacteria also contained lysosome-associated membrane protein 1 (LAMP1), suggesting the possibility of OMV convergence with host cellular components during intracellular trafficking. This study identified novel Salmonella virulence factors secreted via OMV and demonstrated that OMV can function as a vehicle to transfer virulence determinants to the cytoplasm of the infected host cell. © 2011, American Society for Microbiology.


Xu W.,Pacific Northwest National Laboratory | Wang J.,Pacific Northwest National Laboratory | Wang J.,Shanghai JiaoTong University | Ding F.,Tianjin Institute of Power Sources | And 5 more authors.
Energy and Environmental Science | Year: 2014

Lithium (Li) metal is an ideal anode material for rechargeable batteries due to its extremely high theoretical specific capacity (3860 mA h g-1), low density (0.59 g cm-3) and the most negative electrochemical potential (-3.040 V vs. the standard hydrogen electrode). Unfortunately, uncontrollable dendritic Li growth and the limited Coulombic efficiency during Li deposition/stripping inherent in these batteries have prevented their practical application over the past 40 years. With the emergence of post-Li-ion batteries, safe and efficient operation of Li metal anodes has become an enabling technology which may determine the fate of several promising candidates for the next generation of energy storage systems, including rechargeable Li-air batteries, Li-S batteries, and Li metal batteries which utilize intercalation compounds as cathodes. In this paper, various factors that affect the morphology and Coulombic efficiency of Li metal anodes are analyzed. Technologies utilized to characterize the morphology of Li deposition and the results obtained by modelling of Li dendrite growth are also reviewed. Finally, recent developments and urgent needs in this field are discussed. © The Royal Society of Chemistry.
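The quoted theoretical capacity follows from a standard Faraday-law calculation (our check, assuming n = 1 electron per Li and M = 6.94 g mol-1):

```latex
Q_{\mathrm{th}} \;=\; \frac{nF}{3.6\,M}
  \;=\; \frac{1 \times 96485\ \mathrm{C\ mol^{-1}}}{3.6 \times 6.94\ \mathrm{g\ mol^{-1}}}
  \;\approx\; 3861\ \mathrm{mA\,h\ g^{-1}}
```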


Nune S.K.,Pacific Northwest National Laboratory | Thallapally P.K.,Pacific Northwest National Laboratory | McGrail B.P.,Pacific Northwest National Laboratory
Journal of Materials Chemistry | Year: 2010

Metal-organic gels (MOGs) with excellent thermal stability and porosity have been synthesized at room temperature. The strength of the MOGs obtained depends on the raw materials used, reaction time, temperature, concentration of reactants and the processing conditions. MOGs with higher internal surface areas were obtained using near-supercritical processing conditions. Measurement of the CO2 sorption isotherm of MOG-1a at high pressure (30 bar) indicates an uptake of 33 wt% (7.5 mmol g-1) of CO2 with reversible uptake and release. To our knowledge this is the first study on the utilization of MOGs for CO2 capture applications. Significant uptake of CO2 at high pressure (∼30 bar) clearly reveals the significant potential of these materials for application as solid sorbents. © 2010 The Royal Society of Chemistry.
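The two quoted uptake figures are mutually consistent; converting the molar uptake using the CO2 molar mass of 44.01 g mol-1 (our check, not a calculation from the paper):

```latex
7.5\ \mathrm{mmol\ g^{-1}} \times 44.01\ \mathrm{g\ mol^{-1}}
  \;=\; 330\ \mathrm{mg\ g^{-1}} \;\equiv\; 33\ \mathrm{wt\%}
```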


Pitcher H.M.,Pacific Northwest National Laboratory
Environmental Economics and Policy Studies | Year: 2014

A sustainable world has often been assumed to be a world in which climate change will be dealt with automatically. But even in a future which emphasizes sustainability, there will still be limited resources and the need to balance competing demands on these resources. As the list of objectives for a sustainable world is quite long, climate objectives will need to compete with many other policy goals for resources. So it is of relevance to understand what the climatic outcomes of a sustainable world might be, what additional options to control emissions might be in place in this world, and how the costs might vary as additional policies are implemented, or assumptions about critical determinants of emissions vary. In this analysis, we focus on how a generic carbon tax varies as alternative assumptions about energy demand, the effectiveness of carbon sequestration, and climate goals are made. The result that emerges is that uncertainty is pervasive in the climate mitigation problem, and that despite the political difficulties of revisiting decisions on a regular basis, there is no way out of the need for adaptive policies which require constant reassessment of the goals and the effectiveness of the policies in place to reach these goals. © 2000, Springer Japan.


Shao Y.,Pacific Northwest National Laboratory | Wang J.,Pacific Northwest National Laboratory | Engelhard M.,Pacific Northwest National Laboratory | Wang C.,Pacific Northwest National Laboratory | Lin Y.,Pacific Northwest National Laboratory
Journal of Materials Chemistry | Year: 2010

Graphene oxide is electrochemically reduced to yield electrochemically reduced graphene oxide (ER-G). ER-G is characterized with scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy, and X-ray diffraction. The oxygen content is significantly decreased and the sp2 carbon is restored after electrochemical reduction. ER-G exhibits much higher electrochemical capacitance and cycling durability than carbon nanotubes (CNTs) and chemically reduced graphene; the specific capacitance measured with cyclic voltammetry (20 mV s-1) is ∼165, ∼86, and ∼100 F g-1 for ER-G, CNTs, and chemically reduced graphene, respectively. The electrochemical reduction of oxygen and hydrogen peroxide is greatly enhanced on ER-G electrodes compared with CNTs. ER-G has shown promising features for applications in energy storage, biosensors, and electrocatalysis. © 2010 The Royal Society of Chemistry.


Ronnebro E.,Pacific Northwest National Laboratory
Current Opinion in Solid State and Materials Science | Year: 2011

The group II alkaline-earth metal borohydrides, Mg(BH4)2 and Ca(BH4)2, are among the most promising materials for light-weight, high-capacity hydrogen storage. Five years ago, little was known about the potential of these materials for reversible hydrogen storage beyond their high hydrogen contents of 14.9 wt% and 11.6 wt%, respectively. Theory predicted nearly ideal thermodynamics but also competing decomposition pathways that form very stable phases, which limit cycle life. Solid-state synthesis routes have been developed, crystal structures and decomposition products have been identified, and methods to improve hydrogen sorption performance, including catalysis and nanoscience, have been explored. Reversibility was demonstrated for both materials at high pressures and temperatures. Here we review recent progress and discuss challenges and future pathways towards applications. © 2010 Elsevier Ltd. All rights reserved.


Bowden M.,Pacific Northwest National Laboratory | Autrey T.,Pacific Northwest National Laboratory
Current Opinion in Solid State and Materials Science | Year: 2011

In this review we cover the recent developments providing insight into the chemical and physical properties of a series of hydrogen-rich nitrogen-boron-hydrogen materials that are of interest as energy storage media for fuel cell power applications. These materials, ammonium borohydride (ABH2, [NH4][BH4]; 240 g H2/kg; 165 g/l), ammonia borane (AB, [NH3BH3]; 196 g H2/kg; 147 g/l) and the diammoniate of diborane (DADB, [NH3BH2NH3][BH4]; 196 g H2/kg; ca. 151 g/l), release hydrogen by a series of moderately exothermic reaction pathways. The advantage of these materials is that hydrogen release is kinetically controlled and occurs at relatively low temperatures and moderate pressures. The challenges are devising economical pathways to regenerate the fully charged hydrogen storage materials off board and understanding and controlling the formation of volatile impurities that decrease the purity of the hydrogen available for polymer electrolyte membrane fuel cell applications. The focus of this review is on the solid-phase ABH2, AB and DADB materials to complement the coverage of AB in other recent review articles (Stephens et al. [1]; Marder [2]). Additional discussion is given on the decomposition products of these materials, polyaminoborane (PAB, [NH2BH2]n) and polyiminoborane (PIB, [NHBH]n). The article is organized into three sections: (i) synthesis and structural characterization; (ii) kinetics and thermodynamics of hydrogen release; and (iii) outstanding challenges for breakthroughs. © 2010 Elsevier Ltd. All rights reserved.
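The quoted gravimetric capacities follow directly from the formula weights (our check; ammonia borane matches exactly, and ammonium borohydride comes out slightly above the quoted 240 g H2/kg, presumably because not all hydrogen is counted as releasable):

```latex
w_{\mathrm{H}}(\mathrm{NH_3BH_3}) = \frac{6 \times 1.008}{30.87} \approx 0.196
  \;\Rightarrow\; 196\ \mathrm{g\ H_2/kg};
\qquad
w_{\mathrm{H}}(\mathrm{NH_4BH_4}) = \frac{8 \times 1.008}{32.88} \approx 0.245
  \;\Rightarrow\; {\sim}240\ \mathrm{g\ H_2/kg}
```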


Matthiesen J.,Pacific Northwest National Laboratory | Smith R.S.,Pacific Northwest National Laboratory | Kay B.D.,Pacific Northwest National Laboratory
Journal of Physical Chemistry Letters | Year: 2011

How do liquid mixtures cooled to temperatures below their freezing point behave? We address this question using nanoscale films of methanol and ethanol supercooled liquid solutions of varying composition (0-100% methanol) at temperatures near their glass transition, Tg. The permeation of Kr through these films is used to determine the diffusivities of the supercooled liquid mixtures. We find that the temperature-dependent diffusivities of the mixtures are well fit by a Vogel-Fulcher-Tammann equation, indicating that the mixtures exhibit fragile behavior at temperatures just above their Tg. Further, for a given temperature, the composition-dependent diffusivity is well fit by a Vignes-type equation; that is, the diffusivity of any mixture can be predicted using an exponential weighting of the pure methanol and ethanol diffusivities. These results show that deeply supercooled liquid mixtures can be used to provide valuable insight into the properties of normal liquid mixtures. © 2011 American Chemical Society.
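For reference, the standard forms of the two named equations (the symbols D0, B, and T0 are the usual fit parameters, and x is the methanol mole fraction; the fitted values are not given in this abstract):

```latex
% Vogel--Fulcher--Tammann temperature dependence (fragile, super-Arrhenius):
D(T) = D_0 \exp\!\left(\frac{-B}{T - T_0}\right),
\qquad
% Vignes-type composition rule for the binary mixture:
D_{\mathrm{mix}}(x,T) = D_{\mathrm{MeOH}}(T)^{\,x}\; D_{\mathrm{EtOH}}(T)^{\,1-x}
```

The Vignes rule is the "exponential weighting" of the abstract: ln D is linear in composition between the two pure-component values.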


Baer M.D.,Pacific Northwest National Laboratory | Mundy C.J.,Pacific Northwest National Laboratory
Journal of Physical Chemistry Letters | Year: 2011

Although it is now accepted that some anions adsorb at the air-water interface following a reverse Hofmeister series [Kunz, W.; Henle, J.; Ninham, B. W. Curr. Opin. Colloid Interface Sci. 2004, 9, 19-37; Tobias, D. J.; Hemminger, J. C. Science 2008, 319, 1197-1198], the nature of the microscopic interactions driving ions to interfaces is currently the subject of active research. We use extensive density functional theory (DFT)-based interaction potentials to study the free energy of transfer of iodide from the interior to the surface in both a cluster and the extended air-water interface. Our results support the picture that empirical polarizable interaction potentials may overestimate surface adsorption for iodide. These results, in conjunction with previous theoretical and experimental studies on iodide solvation, have implications for the interactions that give rise to the specific ion effect at the air-water interface. © 2011 American Chemical Society.


Petrik N.G.,Pacific Northwest National Laboratory | Kimmel G.A.,Pacific Northwest National Laboratory
Journal of Physical Chemistry Letters | Year: 2011

The photochemistry of oxygen adsorbed on TiO2(110) at 30 K and annealed up to 600 K is investigated. UV irradiation results in exchange of atoms between chemisorbed and physisorbed oxygen. Annealing chemisorbed oxygen to 350 K maximizes these exchange reactions, while such reactions are not observed for oxygen that is dissociatively adsorbed on TiO2(110) at 300 K. For oxygen annealed to 350 K, the exchange products photodesorb in the plane perpendicular to the bridge-bonded oxygen rows at an angle of 45°. In contrast, chemisorbed O2 photodesorbs normal to the surface. Remarkably, the chemisorbed species is stable under multiple cycles of UV irradiation. Atoms in the chemisorbed species can be changed from 18O to 16O and then back to 18O via the exchange reactions. The results show that annealing oxygen on TiO2(110) to ∼350 K produces a stable chemical species with novel photochemical properties. Possible forms for the photoactive species include O2 adsorbed in a bridging oxygen vacancy or tetraoxygen. © 2011 American Chemical Society.


Shen M.,Pacific Northwest National Laboratory | Henderson M.A.,Pacific Northwest National Laboratory
Journal of Physical Chemistry Letters | Year: 2011

Molecular and dissociative forms of adsorbed methanol were prepared on the rutile TiO2(110) surface to study their relative photocatalytic activity for hole-mediated oxidation. Molecular methanol is the dominant surface species on the vacuum-annealed TiO2(110) surface in ultrahigh vacuum (UHV). Coadsorption of methanol with oxygen results in ∼20% of the adsorbed methanol decomposing to methoxy and OH. Subsequent heating of the surface to ∼350 K desorbs unreacted methanol and OH (as water), leaving a surface with only adsorbed methoxy groups. Using temperature-programmed desorption (TPD), we show that adsorbed methoxy is at least an order of magnitude more reactive than molecularly adsorbed methanol for hole-mediated photooxidation. Methoxy photodecomposes through cleavage of a C-H bond, forming adsorbed formaldehyde and a surface OH group. These results suggest that methoxy, and not molecular methanol, is the effective hole scavenger in photochemical reactions of methanol on TiO2. © 2011 American Chemical Society.


Guo J.,Pacific Northwest National Laboratory | Gaffrey M.J.,Pacific Northwest National Laboratory | Su D.,Pacific Northwest National Laboratory | Su D.,Genentech | And 4 more authors.
Nature Protocols | Year: 2014

Reversible modifications of cysteine thiols have a key role in redox signaling and regulation. A number of reversible redox modifications, including disulfide formation, S-nitrosylation (SNO) and S-glutathionylation (SSG), have been recognized for their significance in various physiological and pathological processes. Here we describe a procedure for the enrichment of peptides containing reversible cysteine modifications. Starting with tissue or cell lysate samples, all of the unmodified free thiols are blocked using N-ethylmaleimide (NEM). This is followed by the selective reduction of those cysteines bearing the reversible modification(s) of interest. The reduction is achieved by using different reducing reagents that react specifically with each type of cysteine modification (e.g., ascorbate for SNO). This protocol serves as a general approach for enrichment of thiol-containing proteins or peptides derived from reversibly modified proteins. The approach uses a commercially available thiol-affinity resin (thiopropyl Sepharose 6B) to directly capture free thiol-containing proteins through a disulfide exchange reaction, followed by on-resin protein digestion and multiplexed isobaric labeling to facilitate liquid chromatography-tandem mass spectrometry (LC-MS/MS)-based quantitative site-specific analysis of cysteine-based reversible modifications. The overall approach requires a simpler workflow with increased specificity compared with the commonly used biotinylation-based assays. The procedure for selective enrichment and analysis of SNO and the level of total reversible cysteine modifications (or total oxidation) is presented to demonstrate the utility of this general strategy. The entire protocol requires ∼3 d for sample processing, with an additional day for LC-MS/MS and data analysis. © 2013 Nature America, Inc. All rights reserved.


Du D.,Central China Normal University | Liu J.,Central China Normal University | Zhang X.,Fudan University | Cui X.,Fudan University | Lin Y.,Pacific Northwest National Laboratory
Journal of Materials Chemistry | Year: 2011

This paper describes the preparation, characterization, and electrochemical properties of a graphene-ZrO2 nanocomposite (GZN) and its application for both the enrichment and detection of methyl parathion (MP). GZN was fabricated using electrochemical deposition and characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS), which showed the successful formation of the nanocomposite. Owing to the strong affinity of GZN for the phosphate group and its fast electron-transfer kinetics, both the extraction and the electrochemical detection of organophosphorus (OP) agents were possible at the same GZN-modified electrochemical sensor. The combination of solid-phase extraction and stripping voltammetric analysis allowed fast, sensitive, and selective determination of MP in garlic samples. The stripping response was highly linear over MP concentrations ranging from 0.5 ng mL-1 to 100 ng mL-1, with a detection limit of 0.1 ng mL-1. This new nanocomposite-based electrochemical sensor provides an opportunity to develop a field-deployable, sensitive, and quantitative method for monitoring exposure to OPs. © 2011 The Royal Society of Chemistry.
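A minimal sketch of how such a stripping-voltammetry calibration is typically evaluated: a least-squares line over the reported linear range and a conventional 3σ(blank)/slope detection limit. The peak currents and blank noise below are invented placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical calibration points spanning the reported linear range
# (0.5-100 ng/mL); the peak currents are illustrative placeholders.
conc = np.array([0.5, 1.0, 5.0, 10.0, 50.0, 100.0])    # MP, ng/mL
peak = np.array([0.11, 0.21, 1.05, 2.10, 10.3, 20.6])  # current, uA

# Least-squares calibration line over the linear range.
slope, intercept = np.polyfit(conc, peak, 1)

# Conventional detection limit: 3 x SD of blank / calibration slope.
sd_blank = 0.007  # placeholder SD of repeated blank measurements, uA
lod = 3.0 * sd_blank / slope

print(f"sensitivity = {slope:.3f} uA/(ng/mL), LOD ~ {lod:.2f} ng/mL")
```

With these placeholder numbers the LOD comes out near 0.1 ng/mL, i.e., the same order as the paper's reported detection limit.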


Fu N.,Pacific Northwest National Laboratory | Xiong Y.,Pacific Northwest National Laboratory | Squier T.C.,Pacific Northwest National Laboratory
Bioconjugate Chemistry | Year: 2013

To optimize cellular delivery and specific labeling of tagged cytosolic proteins by biarsenical fluorescent probes built around a cyanine dye (Cy3) scaffold, we have systematically varied the polarity of the N-alkyl chain (i.e., 4-5 methylene groups terminated by a sulfonate or methoxy ester moiety) and the arsenic capping reagent (ethanedithiol versus benzenedithiol). Optimal live-cell labeling and visualization of tagged cytosolic proteins is reported using an ethanedithiol capping reagent with the uncharged methoxy ester functionalized N-alkyl chains. These measurements demonstrate the general utility of this new class of photostable and highly fluorescent biarsenical probes based on the cyanine dye scaffold for in vivo labeling of tagged cellular proteins in live-cell imaging measurements of protein dynamics. © 2013 American Chemical Society.


Bond L.J.,Pacific Northwest National Laboratory
Journal of Pressure Vessel Technology, Transactions of the ASME | Year: 2012

There is growing interest in life extensions to enable longer term operation (LTO) for both existing nuclear power plants (NPPs) and proposed new NPPs. In order to justify an initial license extension for the 40-60 yr period, new nondestructive examination (NDE) approaches have been developed and deployed by NPP operators in their aging management programs. However, to achieve the goals of even longer term operation, and specifically for the United States in looking at methodologies to support subsequent license renewal periods (i.e., 60-80 yr and beyond), it is necessary to understand the capabilities of current NDE methods to detect, monitor, and trend degradation and hence enable timely implementation of appropriate mitigation and corrective actions. This paper discusses insights from past experience, the state-of-the-art, and current activities in the move toward providing a capacity for proactive management of materials degradation to support NPP LTO.


Ryan E.M.,Boston University | Sanquist T.F.,Pacific Northwest National Laboratory
Energy and Buildings | Year: 2012

Building energy models provide valuable insight into energy use in commercial and residential buildings based on architecture, materials and thermal loads. They are used in the design of new buildings and in retrofitting to increase the efficiency of older buildings. The accuracy of these models is crucial to reducing energy use in the US and building a sustainable energy future. In addition to the architecture and thermal loads, building energy models must also account for the effects of the building's occupants on energy use. Traditionally, simple schedule-based methods have been used to account for the effects of occupants. However, newer research has shown that these methods often result in large differences between the modeled and actual energy use of buildings. In this paper we discuss building energy models and their accuracy in predicting energy use. In particular we focus on the different types of validation methods which have been used to investigate the accuracy of building energy models and on how they do (or do not) account for the effects of occupants. We also review newer work on stochastic methods for estimating the effects of occupants on energy use and discuss the improvements necessary to increase the accuracy of building energy models. © 2012 Elsevier B.V.
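To make the contrast with fixed schedules concrete, here is a minimal sketch of the kind of stochastic occupancy model the review refers to: a two-state Markov chain stepped hourly. The transition probabilities are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-state (0 = absent, 1 = present) hourly Markov chain; the
# transition probabilities are illustrative placeholders.
P_ARRIVE = 0.3  # P(absent -> present) in one hour
P_LEAVE = 0.2   # P(present -> absent) in one hour

def simulate_occupancy(hours: int = 24 * 365) -> np.ndarray:
    state = 0
    trace = np.empty(hours, dtype=int)
    for t in range(hours):
        if state == 0 and rng.random() < P_ARRIVE:
            state = 1
        elif state == 1 and rng.random() < P_LEAVE:
            state = 0
        trace[t] = state
    return trace

occ = simulate_occupancy()
# Long-run mean approaches P_ARRIVE / (P_ARRIVE + P_LEAVE) = 0.6.
print(f"simulated mean occupancy = {occ.mean():.2f}")
```

Unlike a fixed schedule, repeated draws from such a model yield a distribution of occupancy (and hence of plug loads, lighting, and setpoint overrides), which is what makes occupant-driven variability in energy use visible to the model.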


Wang M.,University of Michigan | Wang M.,Pacific Northwest National Laboratory | Penner J.E.,University of Michigan
Atmospheric Chemistry and Physics | Year: 2010

A statistical cirrus cloud scheme that accounts for mesoscale temperature perturbations is implemented in a coupled aerosol and atmospheric circulation model to better represent both subgrid-scale supersaturation and cloud formation. This new scheme treats the effects of aerosol on cloud formation and ice freezing in an improved manner, and both homogeneous freezing and heterogeneous freezing are included. The scheme is able to better simulate the observed probability distribution of relative humidity compared to the scheme that was implemented in an older version of the model. Heterogeneous ice nuclei (IN) are shown to decrease the frequency of occurrence of supersaturation and improve the comparison with observations at 192 hPa. Homogeneous freezing alone cannot reproduce observed ice crystal number concentrations at low temperatures (<205 K), but the addition of heterogeneous IN improves the comparison somewhat. Increases in heterogeneous IN affect both high-level cirrus clouds and low-level liquid clouds. Increases in cirrus clouds lead to a more cloudy and moist lower troposphere with less precipitation, effects which we associate with decreased convective activity. The change in the net cloud forcing is not very sensitive to the change in ice crystal concentrations, but the change in the net radiative flux at the top of the atmosphere is still large because of changes in water vapor. Changes in the magnitude of the assumed mesoscale temperature perturbations by 25% alter the ice crystal number concentrations and the net radiative fluxes by an amount comparable to that from a factor of 10 change in the heterogeneous IN number concentrations. Further improvements in the representation of mesoscale temperature perturbations, heterogeneous IN, and the competition between homogeneous and heterogeneous freezing are needed. © 2010 Author(s).


Qian Y.,Pacific Northwest National Laboratory | Gustafson Jr. W.I.,Pacific Northwest National Laboratory | Fast J.D.,Pacific Northwest National Laboratory
Atmospheric Chemistry and Physics | Year: 2010

One fundamental property and limitation of grid-based models is their inability to identify spatial details smaller than the grid cell size. While decades of work have gone into developing sub-grid treatments for clouds and land surface processes in climate models, the quantitative understanding of sub-grid processes and variability for aerosols and their precursors is much poorer. In this study, WRF-Chem is used to simulate the trace gases and aerosols over central Mexico during the 2006 MILAGRO field campaign, with multiple spatial resolutions and emission/terrain scenarios. Our analysis focuses on quantifying the sub-grid variability (SGV) of trace gases and aerosols within a typical global climate model grid cell, i.e. 75×75 km2.

Our results suggest that a simulation with 3-km horizontal grid spacing adequately reproduces the overall transport and mixing of trace gases and aerosols downwind of Mexico City, while 75-km horizontal grid spacing is insufficient to represent local emission and terrain-induced flows along the mountain ridge, subsequently affecting the transport and mixing of plumes from nearby sources. Therefore, the coarse model grid cell average may not correctly represent aerosol properties measured over polluted areas. Probability density functions (PDFs) for trace gases and aerosols show that secondary trace gases and aerosols, such as O3, sulfate, ammonium, and nitrate, are more likely to have a relatively uniform probability distribution (i.e. smaller SGV) over a narrow range of concentration values. Mostly inert and long-lived trace gases and aerosols, such as CO and BC, are more likely to have broad and skewed distributions (i.e. larger SGV) over polluted regions. Over remote areas, all trace gases and aerosols are more uniformly distributed than over polluted areas. Both CO and O3 SGV vertical profiles are nearly constant within the PBL during daytime, indicating that trace gases are very efficiently transported and mixed vertically by turbulence. However, simulated horizontal variability indicates that trace gases and aerosols are not well mixed horizontally in the PBL. During nighttime the SGV for trace gases is maximum at the surface and quickly decreases with height. Unlike the trace gases, the SGV of BC and secondary aerosols reaches a maximum at the PBL top during the day. The SGV decreases with distance from the polluted urban area, decreases more rapidly for long-lived trace gases and aerosols than for secondary ones, and is greater during daytime than nighttime.

The SGV of trace gases and aerosols is generally larger than that of meteorological quantities. Emissions can account for up to 50% of the SGV over urban areas such as Mexico City during daytime for less-reactive trace gases and aerosols, such as CO and BC. The impact of emission spatial variability on SGV decays with altitude in the PBL and is insignificant in the free troposphere. The emission variability affects SGV more significantly during daytime (rather than nighttime) and over urban (rather than rural or remote) areas. The terrain, through its impact on meteorological fields such as wind and the PBL structure, affects the dispersion and transport of trace gases and aerosols and their SGV. © Author(s) 2010.
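The study's central diagnostic — variability that a coarse grid cell cannot resolve — can be illustrated by coarse-graining a fine-grid field. A schematic sketch on a synthetic tracer field (the lognormal field and grid sizes are our illustrative choices; 25 fine cells of 3 km make one 75-km coarse cell, matching the resolutions discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-km tracer field (e.g., CO) on a 300 x 300 km domain:
# 100 x 100 fine cells, lognormal to mimic a skewed pollutant PDF.
fine = rng.lognormal(mean=0.0, sigma=0.5, size=(100, 100))

def subgrid_variability(field: np.ndarray, block: int = 25) -> np.ndarray:
    """Standard deviation inside each (block x block) coarse cell,
    i.e., the variability a 75-km GCM cell cannot resolve when the
    underlying grid spacing is 3 km (25 x 3 km = 75 km)."""
    ny, nx = field.shape
    assert ny % block == 0 and nx % block == 0
    blocks = field.reshape(ny // block, block, nx // block, block)
    return blocks.std(axis=(1, 3))

sgv = subgrid_variability(fine)
print(f"coarse cells: {sgv.shape}, mean SGV = {sgv.mean():.3f}")
```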


Skomurski F.N.,Pacific Northwest National Laboratory | Kerisit S.,Pacific Northwest National Laboratory | Rosso K.M.,Pacific Northwest National Laboratory
Geochimica et Cosmochimica Acta | Year: 2010

For the purpose of improving fundamental understanding of the redox reactivity of magnetite, quantum-mechanical calculations were applied to predict Fe2+ availability and electron hopping rates at magnetite (100) surfaces, with and without the presence of adsorbed water. Using a low free energy surface reconstruction (1/2 of the Fe(tet) layer relaxed into the Fe(oct) (100) plane), the relaxed outermost layer of both the hydrated and vacuum-terminated surfaces was found to be predominantly enriched in Fe2+ within the octahedral sublattice, irrespective of the presence of adsorbed water. At room temperature, mobile electrons move through the octahedral sublattice by Fe2+-Fe3+ valence-interchange small polaron hopping, calculated at 10^10-10^12 hops/s for bulk and bulk-like (i.e., near-surface) environments. This process is envisioned to control sustainable overall rates of interfacial redox reactions. These rates decrease by up to three orders of magnitude (to ~10^9 hops/s) at the (100) surface, and no significant difference is observed for vacuum-terminated versus hydrated cases. Slower hopping rates at the surface appear to arise primarily from larger reorganization energies associated with octahedral Fe2+-Fe3+ valence interchange in relaxed surface configurations, and secondarily from local charge distribution patterns surrounding Fe2+-Fe3+ valence-interchange pairs. These results suggest that, insofar as the rate and extent of surface redox reactions depend on Fe2+ availability and its replenishment rate, bulk electron-hopping mobility is an upper limit for magnetite, and slower surface rates may need to be considered as potentially rate-limiting. They also suggest that the slower hopping mobilities calculated for surface environments may be amenable to Fe2+-Fe3+ site discrimination by conventional spectroscopic probes. © 2010 Elsevier Ltd.
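In the small-polaron (Marcus-type) picture underlying these rates, a thermally activated hop between equivalent Fe2+/Fe3+ sites has an activation energy of roughly a quarter of the reorganization energy λ. These are the standard textbook forms, not expressions quoted from the paper; ν0 is an attempt frequency:

```latex
k_{\mathrm{hop}} \;\simeq\; \nu_0 \exp\!\left(-\frac{\Delta G^{*}}{k_B T}\right),
\qquad
\Delta G^{*} \;\approx\; \frac{\lambda}{4}
\quad (\text{symmetric hop, } \Delta G^{0} = 0)
```

Because λ enters the exponent, a modestly larger surface reorganization energy is enough to depress the hop rate by the three orders of magnitude reported above.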


Sheen D.,Pacific Northwest National Laboratory | McMakin D.,Pacific Northwest National Laboratory | Hall T.,Pacific Northwest National Laboratory
Applied Optics | Year: 2010

Three-dimensional radio frequency imaging techniques have been developed for a variety of near-field applications, including radar cross-section imaging, concealed weapon detection, ground penetrating radar imaging, through-barrier imaging, and nondestructive evaluation. These methods employ active radar transceivers operating at frequencies from less than 100 MHz to in excess of 350 GHz, with the frequency range customized for each application. Computational wavefront reconstruction imaging techniques have been developed that optimize the resolution and illumination quality of the images. In this paper, rectilinear and cylindrical three-dimensional imaging techniques are described along with several application results. © 2010 Optical Society of America.
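One standard relation behind tailoring the band to the application (a general wideband-radar result, not a figure from this paper): down-range resolution is set by the swept bandwidth B.

```latex
\delta_r = \frac{c}{2B}
\qquad\text{e.g. } B = 10\ \mathrm{GHz} \;\Rightarrow\; \delta_r \approx 1.5\ \mathrm{cm}
```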


Riihimaki L.D.,Pacific Northwest National Laboratory | McFarlane S.A.,Pacific Northwest National Laboratory
Journal of Geophysical Research: Atmospheres | Year: 2010

Tropical tropopause layer cirrus (TTLC) profiles identified from CALIPSO lidar measurements are grouped into cloud objects and classified according to whether or not they are directly connected to deep convection. TTLC objects directly connected to deep convection are optically and physically thicker than isolated objects, consistent with what would be expected if convective objects were formed from convective detrainment and isolated objects formed in situ. The frequency of occurrence of these different classifications of TTLC varies with height and region. Thirty-six percent of TTLC profiles (within ±20° latitude) contain convective TTLC when TTLC is defined with a cloud base height greater than 14 km. This estimate of convective TTLC is thought to be a lower limit on the percentage of TTLC formed by convective detrainment, because aged anvil that has persisted longer than the thick regions of the convective cloud would not be classified as convective in our classification. Regions with higher occurrence of deep convection also have higher occurrence of TTLC, and a greater percentage of those TTLC are classified as convective. Definitions of TTLC using higher cloud base height thresholds contain a smaller percentage of TTLC classified as convective. Cloud top heights of both isolated and convective TTLC are distributed similarly with respect to the height of the cold point tropopause. Cloud optical depths were found to be higher in TTLC over thick clouds than in TTLC over clear sky during the day but not at night. It is not clear whether this has a physical cause or is due to daytime measurement uncertainties. Both cloud base and top heights are higher over deep convection than over clear sky. © 2010 by the American Geophysical Union.


Spichtinger P.,ETH Zurich | Cziczo D.J.,ETH Zurich | Cziczo D.J.,Pacific Northwest National Laboratory
Journal of Geophysical Research: Atmospheres | Year: 2010

The influence of initial heterogeneous nucleation on subsequent homogeneous nucleation events in cirrus clouds is investigated using a box model which includes the explicit impact of aerosols on the nucleation of ice crystals and a new sedimentation scheme. Different effects are discussed, namely the impact of external versus internal mixtures of heterogeneous ice nuclei and the influence of size-dependent freezing thresholds. Several idealized experiments are carried out, which show that the treatment of external mixtures of ice nuclei can strongly change subsequent homogeneous nucleation events (i.e., the ice crystal number densities) over a large variety of environmental conditions relevant for the upper troposphere. In most cases a strong reduction in ice crystal number concentrations formed in subsequent homogeneous freezing events is observed. The use of size-dependent freezing thresholds can change cloud properties when compared to more simple parameterizations and is most important at largest ice nuclei concentrations. Copyright 2010 by the American Geophysical Union.


Jaisi D.P.,Yale University | Blake R.E.,Yale University | Kukkadapu R.K.,Pacific Northwest National Laboratory
Geochimica et Cosmochimica Acta | Year: 2010

Iron(III) oxides are ubiquitous in near-surface soils and sediments and interact strongly with dissolved phosphates via sorption, co-precipitation, mineral transformation and redox-cycling reactions. Iron oxide phases are thus an important reservoir for dissolved phosphate, and phosphate bound to iron oxides may reflect dissolved phosphate sources as well as carry a history of the biogeochemical cycling of phosphorus (P). It has recently been demonstrated that dissolved inorganic phosphate (DIP) in rivers, lakes, estuaries and the open ocean can be used to distinguish different P sources and biological reaction pathways via the ratio of 18O/16O (δ18OP) in PO4^3-. Here we present results of experimental studies aimed at determining whether non-biological interactions between dissolved inorganic phosphate and solid iron oxides involve fractionation of oxygen isotopes in PO4. Determination of such fractionations is critical to any interpretation of δ18OP values of modern (e.g., hydrothermal iron oxide deposits, marine sediments, soils, groundwater systems) to ancient and extraterrestrial samples (e.g., BIFs, Martian soils). Batch sorption experiments were performed using varied concentrations of synthetic ferrihydrite and isotopically labeled dissolved ortho-phosphate at temperatures ranging from 4 to 95 °C. Mineral transformations and morphological changes were determined by X-ray diffraction, Mössbauer spectroscopy and SEM image analyses. Our results show that isotopic fractionation between sorbed and aqueous phosphate occurs during the early phase of sorption, with isotopically light phosphate (P16O4) preferentially incorporated into sorbed/solid phases. This fractionation showed negligible temperature dependence and gradually decreased as a result of O-isotope exchange between sorbed and aqueous-phase phosphate, becoming insignificant beyond ∼100 h of reaction. In the high-temperature experiments, this exchange was very rapid, resulting in negligible fractionation between sorbed and aqueous-phase phosphate at much shorter reaction times. Mineral transformation resulted in initial preferential desorption/loss of light phosphate (P16O4) to solution. However, the continual exchange between sorbed and aqueous PO4, concomitant with this mineralogical transformation, again resulted in negligible fractionation between aqueous and sorbed PO4 at long reaction times (>2000 h). This finding is consistent with results obtained from natural marine samples. Therefore, δ18O values of dissolved phosphate (DIP) in seawater may be preserved during sorption to iron-oxide minerals such as hydrothermal plume particles, making marine iron oxides a potential new proxy for dissolved phosphate in the oceans. © 2009 Elsevier Ltd.


Tallent N.R.,Pacific Northwest National Laboratory | Hoisie A.,Pacific Northwest National Laboratory
Proceedings of the International Conference on Supercomputing | Year: 2014

Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm (Performance and Architecture Lab Modeling tool), a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. First, Palm provides a source code modeling annotation language for abstracting or expressing complexity. Second, Palm generates hierarchical models according to well-defined rules. Since a model's hierarchy is defined by static and dynamic source code structure, there is a link between a program's organization and its model. By coordinating models and source code, Palm's models are 'first-class' and reproducible. Third, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications. © 2014 ACM.


Liu W.,Pacific Northwest National Laboratory | Canfield N.,Pacific Northwest National Laboratory
Journal of Membrane Science | Year: 2012

A reactive material processing method is developed in this work for the fabrication of thin (25-200 μm) porous metal sheets with sub-micrometer pore sizes. A green sheet is first prepared by tape-casting of a metal oxide-loaded slurry. The green sheet is then converted into a porous metallic structure through a high-temperature reduction process. Detailed preparation processes and characterization results are delineated with Ni materials as examples. The resulting 50 μm-thick Ni sheet of about 50% porosity looks like a metal foil; it is strong enough to be self-supporting and flexible enough to bend. Depending on the sheet thickness and preparation conditions, the gas permeance ranges from 5.0×10^-5 to 4.0×10^-4 mol/(m^2 Pa s) and the permeability from 1.5×10^7 to 8.5×10^7 Barrer. Such porous metal sheets may be used directly as a micro-filtration membrane or serve as an inorganic membrane support platform. This innovation makes it possible to develop thin flat-sheet inorganic membranes with a surface area packing density comparable to that of polymeric membrane sheets. © 2012.


Bond-Lamberty B.,Pacific Northwest National Laboratory | Thomson A.,Pacific Northwest National Laboratory
Biogeosciences | Year: 2010

Soil respiration - Rs, the flux of CO2 from the soil to the atmosphere - is probably the least well constrained component of the terrestrial carbon cycle. Here we introduce the SRDB database, a near-universal compendium of published Rs data, and make it available to the scientific community both as a traditional static archive and as a dynamic community database that may be updated over time by interested users. The database encompasses all published studies that report one of the following data measured in the field (not laboratory): annual Rs, mean seasonal Rs, a seasonal or annual partitioning of Rs into its source fluxes, Rs temperature response (Q10), or Rs at 10 °C. Its orientation is thus to seasonal and annual fluxes, not shorter-term or chamber-specific measurements. To date, data from 818 studies have been entered into the database, constituting 3379 records. The data span the measurement years 1961-2007 and are dominated by temperate, well-drained forests. We briefly examine some aspects of the SRDB data - its climate space coverage, mean annual Rs fluxes and their correlation with other carbon fluxes, Rs variability, temperature sensitivities, and the partitioning of Rs source flux - and suggest some potential lines of research that could be explored using these data. The SRDB database is available online in a permanent archive as well as via a project-hosting repository; the latter source leverages open-source software technologies to encourage wider participation in the database's future development. Ultimately, we hope that the updating of, and corrections to, the SRDB will become a shared project, managed by the users of these data in the scientific community. © Author(s) 2010.
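A sketch of how such an archive might be queried once downloaded as a flat file. The filename and column names ("Rs_annual", "Biome") are assumptions about the database layout, not guaranteed by the paper, so check the headers of the release you download.

```python
import pandas as pd

# Load a local copy of the SRDB flat file (hypothetical filename).
srdb = pd.read_csv("srdb-data.csv")

# Keep records that report an annual soil-respiration flux
# (assumed column name; units would be g C m-2 yr-1).
annual = srdb.dropna(subset=["Rs_annual"])

print(f"{len(annual)} records with annual Rs")
# Summarize mean annual Rs by biome (assumed grouping column).
print(annual.groupby("Biome")["Rs_annual"].agg(["mean", "count"]))
```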


Burd S.D.,The University of Tampa | Ma S.,The University of Tampa | Perman J.A.,The University of Tampa | Sikora B.J.,Northwestern University | And 5 more authors.
Journal of the American Chemical Society | Year: 2012

A previously known class of porous coordination polymer (PCP) of formula [Cu(bpy-n)2(SiF6)] (bpy-1 = 4,4′-bipyridine; bpy-2 = 1,2-bis(4-pyridyl)ethene) has been studied to assess its selectivity toward CO2, CH4, N2, and H2O. Gas sorption measurements reveal that [Cu(bpy-1)2(SiF6)] exhibits the highest uptake for CO2 yet seen at 298 K and 1 atm by a PCP that does not contain open metal sites. Significantly, [Cu(bpy-1)2(SiF6)] does not exhibit particularly high uptake under the same conditions for CH4, N2, and H2O, presumably because of its lack of open metal sites. Consequently, at 298 K and 1 atm [Cu(bpy-1)2(SiF6)] exhibits a relative uptake of CO2 over CH4 of ca. 10.5:1, the highest value experimentally observed in a compound without open metal sites. [Cu(bpy-2)2(SiF6)] exhibits larger pores and surface area than [Cu(bpy-1)2(SiF6)] but retains a high CO2/CH4 relative uptake of ca. 8:1. © 2012 American Chemical Society.


Baer D.R.,Pacific Northwest National Laboratory | Engelhard M.H.,Pacific Northwest National Laboratory
Journal of Electron Spectroscopy and Related Phenomena | Year: 2010

This paper examines the types of information that XPS can provide about a variety of nanostructured materials. Although it is sometimes not considered a "nanoscale analysis method," XPS can provide a great deal of information about elemental distributions, layer or coating structure and thicknesses, surface functionality, and even particle sizes on the 1-20 nm scale for sample types that may not be readily analyzed by other methods. This information is important for both synthetic nanostructured or nanosized materials and a variety of natural materials with nanostructure. Although the links between nanostructured materials and biological systems may not at first be obvious, many biological molecules and some organisms are the sizes of nanoparticles. The nanostructure of cells and microbes plays a significant role in how they interact with their environment. The interaction of biomolecules with nanoparticles is important for medical and toxicity studies, and biomolecule interactions are important for sensor function, since many nanomaterials are now the active elements in sensors. This paper first discusses how nanostructure influences XPS data as a part of understanding how simple models of sample structure and data analysis can be used to extract information about the physical and chemical structures of the materials being analyzed. Equally important, aspects of sample and analysis limitations and challenges associated with understanding nanostructured materials are indicated. Examples of the application of XPS to nanostructured and biological systems and materials are provided. © 2009.


Lv D.,Pennsylvania State University | Gordin M.L.,Pennsylvania State University | Yi R.,Pennsylvania State University | Xu T.,Pennsylvania State University | And 4 more authors.
Advanced Functional Materials | Year: 2014

A self-assembled GeOx/reduced graphene oxide (GeOx/RGO) composite, where GeOx nanoparticles are grown directly on reduced graphene oxide sheets, is synthesized via a facile one-step reduction approach and studied by X-ray diffraction, transmission electron microscopy, energy dispersive X-ray spectroscopy, electron energy loss spectroscopy elemental mapping, and other techniques. Electrochemical evaluation indicates that incorporation of reduced graphene oxide enhances both the rate capability and reversible capacity of GeOx, with the latter being due to the RGO enabling reversible utilization of Li2O. The composite delivers a high reversible capacity of 1600 mAh g-1 at a current density of 100 mA g-1, and still maintains a capacity of 410 mAh g-1 at a high current density of 20 A g-1. Owing to the flexible reduced graphene oxide sheets enwrapping the GeOx particles, the cycling stability of the composite is also improved significantly. To further demonstrate its feasibility in practical applications, the synthesized GeOx/RGO composite anode is successfully paired with a high-voltage LiNi0.5Mn1.5O4 cathode to form a full cell, which shows good cycling and rate performance. A GeOx/reduced graphene oxide (RGO) composite, where amorphous GeOx (1.01 < x < 1.07) nanoparticles are in intimate contact with well-dispersed RGO sheets, is successfully synthesized via a facile one-step reduction approach. The introduction of conductive RGO sheets into germanium oxide is a simple but effective strategy for enhancing the reversible utilization of Li2O and improving rate capability for Li+ storage in germanium oxides. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.


Lichtner P.C.,Los Alamos National Laboratory | Hammond G.E.,Pacific Northwest National Laboratory
Vadose Zone Journal | Year: 2012

Evolution of a hexavalent uranium [U(VI)] plume at the Hanford 300 Area bordering the Columbia River was investigated to evaluate the roles of labile and nonlabile forms of U(VI) in the longevity of the plume. A high-fidelity, three-dimensional, field-scale reactive flow and transport model was used to represent the system. Richards' equation, coupled to multicomponent reactive transport equations, was solved for times up to 100 yr, taking into account rapid fluctuations in the Columbia River stage that result in pulse releases of U(VI) into the river. The petascale computer code PFLOTRAN, developed under a Department of Energy Scientific Discovery through Advanced Computing (SciDAC-2) project, was used in the simulations and executed on Oak Ridge National Laboratory's Jaguar XT5 Cray supercomputer. Labile U(VI) was represented in the model through surface complexation reactions and its nonlabile form through dissolution of metatorbernite used as a surrogate mineral. Initial conditions were constructed corresponding to the U(VI) plume already in place, to avoid uncertainties associated with the lack of historical data for the waste stream. The cumulative U(VI) flux into the river was compared for cases of equilibrium and multirate sorption models and for no sorption, and its sensitivity to the initial plume configuration was investigated. The presence of nonlabile U(VI) was found to be essential in explaining the longevity of the U(VI) plume and the prolonged high U(VI) concentrations at the site exceeding the USEPA maximum contaminant level for U(VI). © Soil Science Society of America.
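For reference, the common mixed form of Richards' equation for variably saturated flow (standard textbook notation — θ is volumetric water content, h pressure head, K(h) unsaturated hydraulic conductivity, z elevation — not necessarily PFLOTRAN's exact internal formulation):

```latex
\frac{\partial \theta(h)}{\partial t}
  \;=\; \nabla \cdot \bigl[\, K(h)\, \nabla (h + z) \,\bigr]
```

Coupling comes from solving this flow equation alongside advection-dispersion-reaction equations for each aqueous component, with the river-stage fluctuations entering through the time-dependent boundary condition on h.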


Huang M.,Pacific Northwest National Laboratory | Huang M.,Stanford University | Asner G.P.,Stanford University
Global Biogeochemical Cycles | Year: 2010

Amazon deforestation contributes significantly to global carbon (C) emissions. In comparison, the contribution from selective logging to atmospheric CO2 emissions, and its impact on regional C dynamics, is highly uncertain. Using a new geographically based modeling approach in combination with high resolution remote sensing data from 1999 to 2002, we estimate that C emissions were 0.04-0.05 Pg C yr-1 due to selective logging from a ∼2,664,960 km2 region of the Brazilian Amazon. Selective logging was responsible for 15-19% higher carbon emissions than reported from deforestation (clear-cutting) alone. Our simulations indicated that forest carbon losses from selective logging persist for two to three decades following harvest, and that the original live biomass takes up to a century to recover, if the forests are not subsequently cleared. The two- to three-decade loss of carbon results from the biomass damaged by logging activities, including leaves, wood, and roots, estimated to be 89.1 Tg C yr-1 from 1999 to 2002 over the study region, leaving 70.0 Tg C yr-1 and 7.9 Tg C yr-1 to accumulate as coarse woody debris and soil C, respectively. While avoided deforestation is central to crediting rain forest nations for reduced carbon emissions, the extent and intensity of selective logging are also critical to determining carbon emissions in the context of Reduced Emissions from Deforestation and Forest Degradation (REDD). We show that a combination of automated high-resolution satellite monitoring and detailed forest C modeling can yield spatially explicit estimates of harvest-related C losses and subsequent recovery in support of REDD and other international carbon market mechanisms. © 2010 by the American Geophysical Union.


Henderson M.A.,Pacific Northwest National Laboratory
Journal of Physical Chemistry C | Year: 2011

The chemistry of trimethyl phosphate (TMP) was examined on the (012) crystallographic face of hematite (α-Fe2O3) using temperature-programmed desorption (TPD), high resolution electron energy loss spectroscopy (HREELS), static secondary ion mass spectrometry (SSIMS), and Auger electron spectroscopy (AES). TMP adsorbed at Fe3+ sites on the clean α-Fe2O3(012) surface through lone pair electrons on the P=O oxygen atom. A small portion of adsorbed TMP desorbed without decomposition; however, the majority of adsorbed TMP decomposed on the clean surface in a two-step process. The first step, occurring at or below room temperature, involved displacement of one methoxy group of TMP to form a surface methoxy and adsorbed dimethyl phosphate (DMP). In the second step, adsorbed DMP decomposed above 500 K to a 1:1 ratio of gaseous methanol and formaldehyde, leaving phosphate on the surface. The phosphate was stable on the α-Fe2O3(012) surface to 950 K. Identification of these steps was assisted by using the chemistry of methanol on the clean surface. Coadsorption of TMP and water led to a small degree of hydrolysis between these two molecules in the multilayer but no significant changes in the chemistry of TMP molecules adsorbed on the surface. © 2011 American Chemical Society.


Geelhood K.,Pacific Northwest National Laboratory
Nuclear Engineering and Technology | Year: 2011

FRAPCON-3.4a and FRAPTRAN 1.4 are the most recent versions of the U.S. Nuclear Regulatory Commission (NRC) steady-state and transient fuel performance codes, respectively. These codes have been assessed against separate effects data and integral assessment data and have been determined to provide a best-estimate calculation of fuel performance. Recent updates included in FRAPCON-3.4a include updated material properties models, models for new fuel and cladding types, cladding finite element analysis capability, and the capability to perform uncertainty analyses and calculate upper tolerance limits for important outputs. Recent updates included in FRAPTRAN 1.4 include material properties models that are consistent with FRAPCON-3.4a, cladding failure models that are applicable for loss-of-coolant-accident and reactivity-initiated-accident modeling, and updated heat transfer models. This paper briefly describes these code updates and data assessments, highlighting the particularly important improvements and data assessments. This paper also discusses areas of improvement that will be addressed in upcoming code versions.


Fang M.,Pacific Northwest National Laboratory | Engelhard M.H.,Pacific Northwest National Laboratory | Zhu Z.,Pacific Northwest National Laboratory | Helm M.L.,Pacific Northwest National Laboratory | Roberts J.A.S.,Pacific Northwest National Laboratory
ACS Catalysis | Year: 2014

Films electrodeposited onto glassy carbon electrodes from acidic acetonitrile solutions of [Bu4N][Ni(bdt)2] (bdt = 1,2-benzenedithiolate) are active toward electrocatalytic hydrogen production at potentials 0.2-0.4 V positive of untreated electrodes. This activity is preserved when the electrode is rinsed and transferred to a fresh acid solution. X-ray photoelectron spectra indicate that the deposited material contains Ni and S, and time-of-flight secondary ion mass spectrometry shows that electrodeposition decomposes the Ni(bdt)2 assembly. Correlations between voltammetric and spectroscopic results indicate that the deposited material is active, i.e., that catalysis is heterogeneous rather than homogeneous. Control experiments establish that obtaining the observed catalytic response requires both Ni and the 1,2-benzenedithiolate ligand to be present during deposition. © 2013 American Chemical Society.


Li Z.,Pacific Northwest National Laboratory | Smith R.S.,Pacific Northwest National Laboratory | Kay B.D.,Pacific Northwest National Laboratory | Dohnalek Z.,Pacific Northwest National Laboratory
Journal of Physical Chemistry C | Year: 2011

The absolute coverages of water and small aliphatic alcohols (C1-C4) were determined on TiO2(110) using a combination of temperature-programmed desorption and liquid nitrogen cooled quartz crystal microbalance measurements. The absolute saturation coverages of water on Ti4+ and bridging oxygen, Ob, sites are found to be equal to 1 monolayer with respect to the number of Ti4+ and/or Ob sites (1 ML ≡ 5.2 × 10^14 cm-2), in good agreement with prior studies. The saturation coverages of primary alcohols on Ti4+ sites are found to be approximately constant and equal to 0.77 ML. This indicates that the increasing length of the alkyl chains does not contribute to added steric hindrance. Additional steric hindrance is observed with increased branching, as shown for secondary alcohols and tertiary t-butanol, where the saturation coverage decreases to 0.62 and 0.44 ML, respectively. On Ob rows, a monotonic decrease of the alcohol coverages is observed for both increasing alkyl chain length and increasing number of alkyl chains. © 2011 American Chemical Society.
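
The conversion from a measured QCM frequency shift to an absolute coverage is, at its core, the Sauerbrey relation plus a normalization to the surface site density. The sketch below illustrates only that arithmetic; the 5 MHz crystal, the -5 Hz shift, and the methanol adsorbate are illustrative assumptions, and the cryogenic calibration used in the actual experiments may differ.

```python
import numpy as np

NA = 6.02214076e23  # Avogadro's number

def coverage_from_qcm(df_hz, f0=5e6, rho_q=2.648, mu_q=2.947e11, M=32.04):
    """Convert a QCM frequency shift (Hz) into absolute coverage via the
    Sauerbrey relation  df = -2 f0^2 dm / sqrt(rho_q mu_q)  (cgs quartz
    constants; M = adsorbate molar mass in g/mol, methanol here).
    Returns molecules per cm^2 and monolayers, with 1 ML = 5.2e14 cm^-2
    as quoted in the abstract."""
    dm = -df_hz * np.sqrt(rho_q * mu_q) / (2.0 * f0**2)  # areal mass, g/cm^2
    n = dm * NA / M                                      # molecules/cm^2
    return n, n / 5.2e14

n, ml = coverage_from_qcm(-5.0)  # hypothetical -5 Hz shift
print(f"{n:.2e} molecules cm^-2 = {ml:.2f} ML")
```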


Masica D.L.,Johns Hopkins University | Gray J.J.,Johns Hopkins University | Shaw W.J.,Pacific Northwest National Laboratory
Journal of Physical Chemistry C | Year: 2011

The formation of biogenic materials requires the interaction of organic molecules with the mineral phase. In forming enamel, the amelogenin proteins contribute to the mineralization of hydroxyapatite (HAp). Leucine-rich amelogenin protein (LRAP) is a naturally occurring splice variant of amelogenin that comprises amelogenin's predicted HAp-binding domains. We determined the partial structure of phosphorylated and non-phosphorylated LRAP variants bound to HAp using combined solid-state NMR (ssNMR) and ssNMR-biased computational structure prediction. New ssNMR measurements in the N-terminus indicate a largely extended structure for both variants, though some measurements are consistent with a partially helical N-terminal segment. The N-terminus of the phosphorylated variant is found to be consistently closer to the HAp surface than that of the non-phosphorylated variant. Structure prediction was biased using 21 ssNMR measurements in the N- and C-termini at five HAp crystal faces. The predicted fold of LRAP is similar at all HAp faces studied, regardless of phosphorylation. Largely consistent with experimental observations, LRAP's predicted structure is relatively extended, with a helix-turn-helix motif in the N-terminal domain and some helix in the C-terminal domain, and the N-terminal domain of the phosphorylated variant binds HAp more closely than the N-terminal domain of the non-phosphorylated variant. Both predictions and experiments are consistent with structural heterogeneity of the surface-adsorbed protein. Predictions for both variants show some potential binding specificity for the {010} HAp crystal face, providing further support that amelogenins block crystal growth on the a and b faces to allow elongated crystals along the c-axis. © 2011 American Chemical Society.


Kwak J.H.,Pacific Northwest National Laboratory | Peden C.H.F.,Pacific Northwest National Laboratory | Szanyi J.,Pacific Northwest National Laboratory
Journal of Physical Chemistry C | Year: 2011

In this work, we investigated the phase transformation of γ-Al2O3 to θ-Al2O3 by ethanol TPD and XRD. Ethanol TPD showed remarkable sensitivity toward the surface structures of the aluminas studied. Maximum desorption rates for the primary product of ethanol adsorption, ethylene, were observed at 225, 245, and 320 °C over γ-, θ-, and α-Al2O3, respectively. Ethanol TPD over a γ-Al2O3 sample calcined at 800 °C clearly shows that the surface of the resulting material possesses θ-alumina characteristics, even though only the γ-alumina phase was detected by XRD. These results strongly suggest that the γ-to-θ phase transformation of alumina initiates at oxide particle surfaces. The results obtained are also consistent with our previous finding that the presence of penta-coordinated Al3+ sites, formed on the (100) facets of the alumina surface, is strongly correlated with the thermal stability of γ-alumina. © 2011 American Chemical Society.


Xu Y.,Oak Ridge National Laboratory | Shelton W.A.,Oak Ridge National Laboratory | Shelton W.A.,Pacific Northwest National Laboratory
Journal of the Electrochemical Society | Year: 2011

Li-air batteries have attracted substantial interest for their high theoretical specific energies, but the oxygen reduction reaction by Li (Li-ORR) that occurs at the carbon cathode remains poorly understood. Periodic density functional theory calculations have been performed to examine the Li-ORR on several model carbon structures, including the graphite(0001) basal plane, the (8,0) single-wall nanotube, the armchair-type edge, and a di-vacancy in the basal plane. The inertness of the basal plane limits the reversible potential of O2 reduction to 1.1 V, which rises slightly to 1.2 V on the curved nanotube. The armchair edge and di-vacancy are highly reactive and significantly oxidized at ambient conditions to various COx groups, which are reduced by Li via redox mechanisms at 1.2-1.4 V. These COx groups can also catalyze O2 reduction at up to 2.3 V (an overpotential of 0.4 V vs. the calculated equilibrium potential for bulk Li2O2 formation) by chelating and stabilizing the LiO2 intermediate. The Li-ORR on graphitic carbon, if via concerted Li/e- transfer and involving carbon, lithium, and oxygen only, is therefore expected to initiate with the smallest overpotential at under-coordinated carbon centers that are oxidized at ambient conditions. © 2011 The Electrochemical Society.


Yoon J.-H.,Pacific Northwest National Laboratory
Atmospheric Science Letters | Year: 2016

Amazon rainfall is subject to year-to-year fluctuations that result in droughts and floods of varying intensity. A major climatic driver of the interannual variation of Amazon rainfall is the El Niño/Southern Oscillation. The sea surface temperature over the Atlantic Ocean has also been identified as an important climatic driver of the Amazon water cycle. Previously, observational data sets were used to support the Atlantic influence on Amazon rainfall. Here, it is found that multiple global climate models reproduce the Atlantic-Amazon link robustly. However, there are differences in the rainfall response, which depend primarily on the climatological rainfall amount. © 2016 Royal Meteorological Society.


Miliordos E.,Pacific Northwest National Laboratory | Xantheas S.S.,Pacific Northwest National Laboratory
Journal of the American Chemical Society | Year: 2014

We investigate the bonding mechanism in ozone (O3) and its sulfur-substituted analogues, SO2, OS2, and S3. By analyzing their ground-state multireference configuration interaction wave functions, we demonstrate that the bonding in these systems can be represented as a mixture of a closed-shell structure with one and a half bonds between the central and terminal atoms and an open-shell structure with a single bond and two lone electrons on each terminal atom (biradical). The biradical character (β) further emerges as a simple measure of the relative contribution of those two classical Lewis structures emanating from the interpretation of the respective wave functions. Our analysis yields a biradical character of 3.5% for OSO, 4.4% for SSO, 11% for S3, 18% for O3, 26% for SOO, and 35% for SOS. The size/electronegativity of the end atoms relative to the central one is the prevalent factor for determining the magnitude of β: smaller and more electronegative central atoms better accommodate a pair of electrons facilitating the localization of the remaining two lone π-electrons on each of the end atoms, therefore increasing the weight of the second picture in the mixed bonding scenario (larger β). The proposed mixture of these two bonding scenarios allows for the definition of the bond order of the covalent bonds being (3-β)/2, and this accounts for the different O-O, S-S, or S-O bond lengths in the triatomic series. The biradical character was furthermore found to be a useful concept for explaining several structural and energetic trends in the series: larger values of β mark a smaller singlet-triplet splitting, closer bond lengths in the ground 1A′ and the first excited 3A′ states, and larger bond dissociation and atomization energies in the ground state. The latter explains the relative energy difference between the OSS/SOS and OOS/OSO isomers due to their different β values. © 2014 American Chemical Society.
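
The relation between biradical character and bond order quoted above is directly computable; the snippet below simply applies bond order = (3 - β)/2 to the reported β values.

```python
# Biradical characters (beta) reported in the abstract, and the implied
# bond order (3 - beta)/2 from mixing a closed-shell structure
# (1.5 bonds per linkage) with a biradical structure (1 bond).
beta = {"OSO": 0.035, "SSO": 0.044, "S3": 0.11,
        "O3": 0.18, "SOO": 0.26, "SOS": 0.35}

for molecule, b in beta.items():
    print(f"{molecule}: beta = {b:.1%}, bond order = {(3 - b) / 2:.3f}")
```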


McClain C.R.,National Evolutionary Synthesis Center | Stegen J.C.,University of North Carolina at Chapel Hill | Stegen J.C.,Pacific Northwest National Laboratory | Hurlbert A.H.,University of North Carolina at Chapel Hill
Proceedings of the Royal Society B: Biological Sciences | Year: 2012

Patterns of beta-diversity or distance decay at oceanic scales are completely unknown for deep-sea communities. Even when appropriate data exist, methodological problems have made it difficult to discern the relative roles of environmental filtering and dispersal limitation for generating faunal turnover patterns. Here, we combine a spatially extensive dataset on deep-sea bivalves with a model incorporating ecological dynamics and shared evolutionary history to quantify the effects of environmental filtering and dispersal limitation. Both the model and empirical data are used to relate functional, taxonomic and phylogenetic similarity between communities to environmental and spatial distances separating them for 270 sites across the Atlantic Ocean. This study represents the first ocean-wide analysis examining distance decay as a function of a broad suite of explanatory variables. We find that both strong environmental filtering and dispersal limitation drive turnover in taxonomic, functional and phylogenetic composition in deep-sea bivalves, explaining 26 per cent, 34 per cent and 9 per cent of the variation, respectively. This contrasts with previous suggestions that dispersal is not limiting in broad-scale biogeographic and biodiversity patterning in marine systems. However, rates of decay in similarity with environmental distance were eightfold to 44-fold steeper than with spatial distance. Energy availability is the most influential environmental variable evaluated, accounting for 3.9 per cent, 9.4 per cent and 22.3 per cent of the variation in functional, phylogenetic and taxonomic similarity, respectively. Comparing empirical patterns with process-based theoretical predictions provided quantitative estimates of dispersal limitation and niche breadth, indicating that 95 per cent of deep-sea bivalve propagules will be able to persist in environments that deviate from their optimum by up to 2.1 g m-2 yr-1 and typically disperse 749 km from their natal site. © 2011 The Royal Society.
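
Distance decay itself is usually summarized by fitting similarity as an exponential function of separation. The following is a generic sketch of that fit, not the authors' process-based model; the data points are hypothetical.

```python
import numpy as np

def decay_rate(distance, similarity):
    """Fit S = S0 * exp(-k d) by log-linear least squares and return the
    decay rate k (per unit distance)."""
    slope, _ = np.polyfit(distance, np.log(similarity), 1)
    return -slope

# Hypothetical community pairs: separation (km) and compositional similarity
d = np.array([100.0, 500.0, 1000.0, 2000.0, 4000.0])
s = np.array([0.62, 0.48, 0.37, 0.22, 0.08])
k = decay_rate(d, s)
print(f"similarity halves every {np.log(2) / k:.0f} km")
```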


Chaturvedi V.,The Water Council | Kim S.H.,Pacific Northwest National Laboratory
Energy Policy | Year: 2015

With high reliance on light-duty vehicles in the present, the future of the global transportation system is also geared towards private modes, which has significant energy and emission implications. Public transportation has been argued for as an alternative strategy for meeting the rising transportation demands of the growing world, especially the poor, in a sustainable and energy efficient way. The present study analyzes an important yet under-researched question - what are the long-term energy and emission implications of an electric rail based passenger transportation system for meeting both long- and short-distance passenger transportation needs? We analyze a suite of electric rail share scenarios with and without climate policy. In the reference scenario, the transportation system evolves towards dominance of fossil-based light-duty vehicles. We find that an electric rail policy is more successful than an economy-wide climate policy in reducing transport sector energy demand and emissions. Economy-wide emissions, however, can only be reduced through a broader climate policy, the cost of which can be reduced by hundreds of billions of dollars across the century when implemented in combination with the transport-sector-focused electric rail policy. Moreover, a higher share of electric rail enhances energy security for oil-importing nations and reduces vehicular congestion and road infrastructure requirements as well. © 2014 Elsevier Ltd.


Hurlbert A.H.,University of North Carolina at Chapel Hill | Stegen J.C.,Pacific Northwest National Laboratory
Frontiers in Genetics | Year: 2014

We use a simulation model to examine four of the most common hypotheses for the latitudinal richness gradient and identify patterns that might be diagnostic of those four hypotheses. The hypotheses examined include (1) tropical niche conservatism, or the idea that the tropics are more diverse because a tropical clade origin has allowed more time for diversification in the tropics and has resulted in few species adapted to extra-tropical climates. (2) The ecological limits hypothesis suggests that species richness is limited by the amount of biologically available energy in a region. (3) The speciation rates hypothesis suggests that the latitudinal gradient arises from a gradient in speciation rates. (4) Finally, the tropical stability hypothesis argues that climatic fluctuations and glacial cycles in extratropical regions have led to greater extinction rates and less opportunity for specialization relative to the tropics. We found that tropical niche conservatism can be distinguished from the other three scenarios by phylogenies which are more balanced than expected, no relationship between mean root distance (MRD) and richness across regions, and a homogeneous rate of speciation across clades and through time. The energy gradient, speciation gradient, and disturbance gradient scenarios all produced phylogenies which were more imbalanced than expected, showed a negative relationship between MRD and richness, and diversity-dependence of speciation rate estimates through time. We found that the relationship between speciation rates and latitude could distinguish among these three scenarios, with no relation expected under the ecological limits hypothesis, a negative relationship expected under the speciation rates hypothesis, and a positive relationship expected under the tropical stability hypothesis. We emphasize the importance of considering multiple hypotheses and focusing on diagnostic predictions instead of predictions that are consistent with multiple hypotheses. © 2014 Hurlbert and Stegen.


Dutta A.,Pacific Northwest National Laboratory | Lense S.,Pacific Northwest National Laboratory | Hou J.,Pacific Northwest National Laboratory | Engelhard M.H.,Pacific Northwest National Laboratory | And 2 more authors.
Journal of the American Chemical Society | Year: 2013

Hydrogenase enzymes use first-row transition metals to interconvert H2 with protons and electrons, reactions that are important for the storage and recovery of energy from intermittent sources such as solar, hydroelectric, and wind. Here we present Ni(PCy2NGly2)2, a water-soluble molecular electrocatalyst with the amino acid glycine built into the diphosphine ligand framework. Proton transfer between the outer coordination sphere carboxylates and the second coordination sphere pendant amines is rapid, as observed by cyclic voltammetry and FTIR spectroscopy, indicating that the carboxylate groups may participate in proton transfer during catalysis. This complex oxidizes H2 (1-33 s-1) at low overpotentials (150-365 mV) over a range of pH values (0.1-9.0) and produces H2 under identical solution conditions (>2400 s-1 at pH 0.5). Enzymes employ proton channels for the controlled movement of protons over long distances - the results presented here demonstrate the effects of a simple two-component proton channel in a synthetic molecular electrocatalyst. © 2013 American Chemical Society.


Singhal S.C.,Pacific Northwest National Laboratory
Wiley Interdisciplinary Reviews: Energy and Environment | Year: 2014

Solid oxide fuel cells (SOFCs), based on an oxide ion conducting electrolyte, offer a clean, low-pollution technology to electrochemically generate electricity at high efficiencies. These fuel cells provide many advantages over traditional energy conversion systems, including high efficiency, reliability, modularity, fuel adaptability, and very low levels of SOx and NOx emissions. Quiet, vibration-free operation of SOFCs also eliminates the noise usually associated with conventional power generation systems. Furthermore, because of their high operating temperature (600-1000°C), some hydrocarbon fuels such as natural gas can be reformed within the cell stack, eliminating the need for an expensive external reformer. In spite of these advantages, the degree and extent of their market penetration depend on the ability to reduce the cost of SOFC-based power systems while ensuring their long-term durability. This article reviews the cell and stack materials, cell designs, and present commercial status of power systems built using SOFCs. © 2013 John Wiley & Sons, Ltd.


Merkley E.D.,Pacific Northwest National Laboratory | Metz T.O.,Pacific Northwest National Laboratory | Smith R.D.,Pacific Northwest National Laboratory | Baynes J.W.,University of South Carolina | Frizzell N.,University of South Carolina
Mass Spectrometry Reviews | Year: 2014

The post-translational modifications (PTMs) of cysteine residues include oxidation, S-glutathionylation, S-nitrosylation, and succination, all of which modify protein function or turnover in response to a changing intracellular redox environment. Succination is a chemical modification of cysteine in proteins by the Krebs cycle intermediate, fumarate, yielding S-(2-succino)cysteine (2SC). Intracellular fumarate concentration and succination of proteins are increased by hyperpolarization of the inner mitochondrial membrane, in concert with mitochondrial, endoplasmic reticulum (ER) and oxidative stress in 3T3 adipocytes grown in high glucose medium and in adipose tissue in obesity and diabetes in mice. Increased succination of proteins is also detected in the kidney of a fumarase-deficient conditional knock-out mouse which develops renal cysts. A wide range of proteins are subject to succination, including enzymes, adipokines, cytoskeletal proteins, and ER chaperones with functional cysteine residues. There is also some overlap between succinated and glutathionylated proteins, suggesting that the same low-pKa thiols are targeted by both. Succination of adipocyte proteins in diabetes increases as a result of mitochondrial stress derived from nutrient excess, and this is inhibited by uncouplers, which discharge the mitochondrial membrane potential (ΔΨm) and relieve the electron transport chain. 2SC therefore serves as a biomarker of mitochondrial stress or dysfunction in chronic diseases, such as obesity, diabetes, and cancer, and recent studies suggest that succination is a mechanistic link between mitochondrial dysfunction, oxidative and ER stress, and cellular progression toward apoptosis. In this article, we review the history of the succinated proteome and the challenges associated with measuring this non-enzymatic PTM of proteins by proteomics approaches. © 2013 Wiley Periodicals, Inc.


Lopata K.,Pacific Northwest National Laboratory | Govind N.,Pacific Northwest National Laboratory
Journal of Chemical Theory and Computation | Year: 2011

The response of matter to external fields forms the basis for a vast wealth of fundamental physical processes ranging from light harvesting to nanoscale electron transport. Accurately modeling ultrafast electron dynamics in excited systems thus offers unparalleled insight but requires an inherently nonlinear time-resolved approach. To this end, an efficient and massively parallel real-time, real-space time-dependent density functional theory (RT-TDDFT) implementation in NWChem is presented. The implementation is first validated against linear-response TDDFT and experimental results for a series of molecules subjected to small electric field perturbations. Second, nonlinear excitation of green fluorescent protein is studied, which shows a blue-shift in the spectrum with increasing perturbation, as well as a saturation in absorption. Next, the charge dynamics of optically excited zinc porphyrin is presented in real time and real space, with relevance to charge injection in photovoltaic devices. Finally, intermolecular excitation in an adenine-thymine base pair is studied using the BNL range-separated functional [Baer, R.; Neuhauser, D. Phys. Rev. Lett. 2005, 94, 043002], demonstrating the utility of a real-time approach in capturing charge transfer processes. © 2011 American Chemical Society.
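
The real-time workflow - perturb with a delta-function "kick", propagate, and Fourier-transform the induced dipole to obtain the spectrum - can be illustrated on a toy two-level system. This sketch mimics only the workflow; NWChem propagates the full Kohn-Sham density matrix in a Gaussian basis, and the 0.25 hartree gap below is an arbitrary assumption.

```python
import numpy as np
from scipy.linalg import expm

E = 0.25                                   # excitation energy (hartree), hypothetical
H0 = np.diag([0.0, E])                     # field-free Hamiltonian
mu = np.array([[0.0, 1.0], [1.0, 0.0]])    # transition dipole operator

kick = 1e-3                                # weak delta-kick field strength
psi = expm(1j * kick * mu) @ np.array([1.0, 0.0], dtype=complex)

dt, nsteps = 0.1, 20000
U = expm(-1j * H0 * dt)                    # exact short-time propagator
dipole = np.empty(nsteps)
for i in range(nsteps):
    dipole[i] = np.real(psi.conj() @ mu @ psi)
    psi = U @ psi

# The dipole power spectrum peaks at the excitation energy.
omega = 2.0 * np.pi * np.fft.rfftfreq(nsteps, d=dt)
spectrum = np.abs(np.fft.rfft(dipole - dipole.mean()))
print(f"peak at {omega[spectrum.argmax()]:.3f} hartree (expected {E})")
```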


Ma W.,Ohio State University | Krishnamoorthy S.,Pacific Northwest National Laboratory | Villa O.,Pacific Northwest National Laboratory | Kowalski K.,Pacific Northwest National Laboratory
Journal of Chemical Theory and Computation | Year: 2011

The details of the graphical processing unit (GPU) implementation of the most computationally intensive (T)-part of the recently introduced regularized CCSD(T) (Reg-CCSD(T)) method [Kowalski, K.; Valiev, M. J. Chem. Phys. 2009, 131, 234107] for calculating electronic energies of strongly correlated systems are discussed. Parallel tests performed for several molecular systems show very good scalability of the triples part of the Reg-CCSD(T) approach. We also discuss the performance of the Reg-CCSD(T) GPU implementation as a function of the parameters defining the partitioning of the spinorbital domain (tiling structure). The accuracy of the Reg-CCSD(T) method is illustrated on three examples: the methylfluoride molecule, dissociation of dodecane, and the open-shell Spiro cation (5,5′(4H,4H′)-spirobi[cyclopenta[c]pyrrole] 2,2′,6,6′-tetrahydro cation), which is a frequently used model to study electron transfer processes. It is demonstrated that a simple regularization of the cluster amplitudes used in the noniterative corrections accounting for the effect of triply excited configurations significantly improves the accuracies of ground-state energies in the presence of strong quasidegeneracy effects. For methylfluoride, we compare the Reg-CCSD(T) results with the CR-CC(2,3) and CCSDT energies, whereas for the Spiro cation we compare Reg-CCSD(T) results with the energies obtained with the completely renormalized CCSD(T) method. Performance tests for the Spiro, dodecane, and uracil molecules are also discussed. © 2011 American Chemical Society.
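
The tiling idea - partitioning the index space into blocks so that each block contraction becomes an independent unit of work for a GPU kernel - can be sketched with an ordinary matrix product. This is a schematic of the concept only; the actual (T) kernels contract high-dimensional amplitude tensors with tile sizes tuned to the device.

```python
import numpy as np

def tiled_matmul(A, B, tile=64):
    """Accumulate C = A @ B one tile triple at a time. Each (i, j, k)
    tile is an independent work unit, analogous to the tensor tiles
    dispatched to GPU kernels in a tiled coupled-cluster code."""
    m, n = A.shape[0], B.shape[1]
    C = np.zeros((m, n))
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for k in range(0, A.shape[1], tile):
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
                )
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 300))
B = rng.standard_normal((300, 150))
assert np.allclose(tiled_matmul(A, B), A @ B)  # same result, to rounding
```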


Velarde L.,Pacific Northwest National Laboratory | Wang H.-F.,Pacific Northwest National Laboratory
Chemical Physics Letters | Year: 2013

The relative phase and amplitude ratio between the ssp and ppp polarization combinations of the vibrational sum-frequency generation (SFG) response can be uniquely and accurately determined by the polarization null angle (PNA) method for orientational analysis. With PNA measurement we show that the -CN group in the 4-n-pentyl-4′-cyanoterphenyl (5CT) Langmuir monolayer is tilted around 25° ± 2° from the interface normal, while that in the 4-n-octyl-4′-cyanobiphenyl (8CB) is tilted around 57° ± 2°, consistent with the significant differences in the phase diagrams and hydrogen bonding SFG spectra of the two Langmuir monolayers reported in the literature. © 2013 Elsevier B.V. All rights reserved.


Aquino F.,State University of New York at Buffalo | Govind N.,Pacific Northwest National Laboratory | Autschbach J.,State University of New York at Buffalo
Journal of Chemical Theory and Computation | Year: 2011

Density functional theory (DFT) calculations of NMR chemical shifts and molecular g tensors with Gaussian-type orbitals are implemented via second-order energy derivatives within the scalar relativistic zeroth order regular approximation (ZORA) framework. Nonhybrid functionals, standard (global) hybrids, and range-separated (Coulomb-attenuated, long-range corrected) hybrid functionals are tested. Origin invariance of the results is ensured by use of gauge-including atomic orbital (GIAO) basis functions. The new implementation in the NWChem quantum chemistry package is verified by calculations of nuclear shielding constants for the heavy atoms in HX (X = F, Cl, Br, I, At) and H2X (X = O, S, Se, Te, Po) and 125Te chemical shifts in a number of tellurium compounds. The basis set and functional dependence of g-shifts is investigated for 14 radicals with light and heavy atoms. The problem of accurately predicting 19F NMR shielding in UF6-nCln, n = 1-6, is revisited. The results are sensitive to approximations in the density functionals, indicating a delicate balance of DFT self-interaction vs correlation. For the uranium halides, the range-separated functionals are not clearly superior to global hybrids. © 2011 American Chemical Society.


Schneider K.P.,Pacific Northwest National Laboratory | Weaver T.F.,American Electric Power
IEEE Transactions on Smart Grid | Year: 2014

In a regulated business environment, a utility must be able to validate that deployed technologies provide quantifiable benefits to the end-use customers. While there are well-established procedures for determining the benefits derived from the deployment of traditional technologies, the same procedures do not exist for many emerging technologies. Volt-VAR Optimization is an example of an emerging technology that is being deployed across the nation without a standardized method for determining system performance and benefits. This paper presents a method for the evaluation and quantification of benefits for field deployments of Volt-VAR Optimization technologies. In addition to presenting the methodology, the paper presents a summary of results and observations from two separate Volt-VAR Optimization field evaluations using the presented method. © 2014 IEEE.


Liu T.,Pacific Northwest National Laboratory | Dubois M.R.,Pacific Northwest National Laboratory | Dubois D.L.,Pacific Northwest National Laboratory | Bullock R.M.,Pacific Northwest National Laboratory
Energy and Environmental Science | Year: 2014

Two Ru hydride complexes, Cp*Ru(PPh2NBn2)H (1-H) and Cp*Ru(PtBu2NBn2)H (2-H), supported by cyclic PR2NR'2 ligands (Cp* = η5-C5Me5; PR2NBn2 = 1,5-dibenzyl-3,7-R-1,5-diaza-3,7-diphosphacyclooctane, where R = Ph or tBu), have been developed as electrocatalysts for oxidation of H2 (1.0 atm, 22 °C). The turnover frequency of 2-H is 1.2 s-1 at 22 °C (1.0 atm H2) with an overpotential at Ecat/2 of 0.5 V in the presence of the exogenous base DBU (1,8-diazabicyclo[5.4.0]undec-7-ene), while catalysis by 1-H has a turnover frequency of 0.6 s-1 and an overpotential of 0.6 V at Ecat/2. Addition of H2O facilitates oxidation of H2 by 2-H and increases its turnover frequency to 1.9 s-1, while H2O slows down the catalysis by 1-H. In addition, studies of Cp*Ru(dmpm)H (where dmpm = bis(dimethylphosphino)methane), a control complex lacking pendant amines in its diphosphine ligand, confirm the critical role of the pendant amines of the P2N2 ligands as proton relays in the oxidation of H2. © 2014 the Partner Organisations.


Du P.,Pacific Northwest National Laboratory | Makarov Y.,Pacific Northwest National Laboratory
IEEE Transactions on Power Systems | Year: 2014

Insufficient frequency response can lead to balancing and control performance issues, and even to reliability problems. This letter proposes a new method to evaluate the primary frequency response of large power system interconnections. The approach is evaluated using actual disturbance data from the Western Electricity Coordinating Council. © 2013 IEEE.


Wiedner E.S.,Pacific Northwest National Laboratory | Bullock R.M.,Pacific Northwest National Laboratory
Journal of the American Chemical Society | Year: 2016

A large variety of molecular cobalt complexes are used as electrocatalysts for H2 production, but the key cobalt hydride intermediates are frequently difficult to detect and characterize due to their high reactivity. We report that a combination of variable scan rate cyclic voltammetry and foot-of-the-wave analysis (FOWA) can be used to detect transient CoIIIH and CoIIH intermediates of electrocatalytic H2 production by [CoII(PtBu2NPh2)(CH3CN)3]2+ and CoII(dmgBF2)2(CH3CN)2. In both cases, reduction of a transient catalytic intermediate occurs at a potential that coincides with the CoII/I couple. Each reduction displays quasireversible electron-transfer kinetics, consistent with reduction of a CoIIIH intermediate to CoIIH, which is then protonated by acid to generate H2. A bridge-protonated CoI species was ruled out as a catalytic intermediate for CoII(dmgBF2)2(CH3CN)2 from voltammograms recorded at 1000 psi of H2. Density functional theory was used to calculate CoIII-H and CoII-H bond strengths for both catalysts. Despite having very different ligands, the cobalt hydrides of both catalysts possess nearly identical heterolytic and homolytic Co-H bond strengths for the CoIIIH and CoIIH intermediates. © 2016 American Chemical Society.
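
For readers unfamiliar with FOWA: the method linearizes the foot of a catalytic voltammogram, where substrate depletion and other side phenomena are negligible, to extract an observed rate constant. The sketch below uses the one-electron Costentin-Savéant form; the prefactor changes with catalytic stoichiometry, and the expressions used in the paper may differ in detail.

```python
import numpy as np

F, R, T = 96485.0, 8.314, 298.15
f = F / (R * T)

def fowa_kobs(E, i, ip, E0, v, prefactor=2.24):
    """Estimate k_obs (s^-1) from the foot of a catalytic wave using
      i/ip = prefactor * sqrt(R T k_obs / (F v)) / (1 + exp(f (E - E0)))
    E: potential array (V), i: catalytic current, ip: one-electron peak
    current of the catalyst couple, E0: catalyst standard potential (V),
    v: scan rate (V/s). Fit only data taken from the foot of the wave."""
    x = 1.0 / (1.0 + np.exp(f * (E - E0)))
    slope, _ = np.polyfit(x, i / ip, 1)
    return (slope / prefactor) ** 2 * F * v / (R * T)
```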


Gao Y.,Pacific Northwest National Laboratory | Lu J.,Pacific Northwest National Laboratory | Leung L.R.,Pacific Northwest National Laboratory
Journal of Climate | Year: 2016

This study investigates the North Atlantic atmospheric rivers (ARs) making landfall over western Europe in the present and future climate from the multimodel ensemble of phase 5 of the Coupled Model Intercomparison Project (CMIP5). Overall, CMIP5 captures the seasonal and spatial variations of historical landfalling AR days, with the large intermodel variability strongly correlated with the intermodel spread of historical near-surface westerly jet position. Under representative concentration pathway 8.5 (RCP8.5), AR frequency is projected to increase significantly by the end of this century, with 127%-275% increase at peak AR frequency regions (45°-55°N). While thermodynamics plays a dominant role in the future increase of ARs, wind changes associated with the midlatitude jet shifts also significantly contribute to AR changes, resulting in dipole change patterns in all seasons. In the North Atlantic, the model-projected jet shifts are strongly correlated with the simulated historical jet position. As models exhibit predominantly equatorward biases in the historical jet position, the large poleward jet shifts reduce AR days south of the historical mean jet position through the dynamical connections between the jet positions and AR days. When the observed historical jet position is used as an emergent constraint, dynamical effects further increase future AR days over the equatorward flank, beyond the increases from thermodynamic effects. Compared to the present, both total and extreme precipitation induced by ARs in the future contribute more to the seasonal mean and extreme precipitation, primarily because of the increase in AR frequency. While AR precipitation intensity generally increases more relative to the increase in integrated vapor transport, AR extreme precipitation intensity increases much less. © 2016 American Meteorological Society.


Murdachaew G.,Pacific Northwest National Laboratory | Mundy C.J.,Pacific Northwest National Laboratory | Schenter G.K.,Pacific Northwest National Laboratory
Journal of Chemical Physics | Year: 2010

We applied the self-consistent polarization density functional theory (SCP-DFT) to water. SCP-DFT requires only minimal parametrization, self-consistently includes the dispersion interaction neglected by standard DFT functionals, and has a cost similar to standard DFT despite its improved performance. Compared to the DFT functionals BLYP and BLYP-D (where the latter contains a simple dispersion correction), SCP-DFT yields interaction energies per molecule and harmonic frequencies of clusters in better agreement with experiment, with errors in the former of only a few tenths of a kcal/mol. BLYP and BLYP-D underbind and overbind the clusters, respectively, by up to about 1 kcal/mol. For liquid water, both BLYP and SCP-DFT predict radial distribution functions that are similar and overstructured compared to experiment. However, SCP-DFT improves over BLYP in predicting the experimental enthalpy of vaporization. A decomposition of the dimer interaction energy attempts to rationalize the performance of SCP-DFT. The SCP-DFT approach holds promise as an efficient and accurate method for describing large hydrogen-bonded systems, and has the potential to model complex systems with minimal parametrization. © 2010 American Institute of Physics.


Cain J.P.,University of Southern California | Gassman P.L.,Pacific Northwest National Laboratory | Wang H.,University of Southern California | Laskin A.,Pacific Northwest National Laboratory
Physical Chemistry Chemical Physics | Year: 2010

Previous studies suggest that soot formed in premixed flat flames can contain a substantial amount of aliphatic compounds. The presence of these compounds may affect the kinetics of soot mass growth and oxidation in a way that is currently not understood. Using an infrared spectrometer coupled to a microscope (micro-FTIR), we examined the composition of soot sampled from a set of ethylene-argon-oxygen flames recently characterized (A. D. Abid, et al. Combust. Flame, 2008, 154, 775-788), all with an equivalence ratio of 2.07 but varying in maximum flame temperature. Soot was sampled at three distances above the burner surface using a probe sampling technique and deposited on silicon nitride thin film substrates using a cascade impactor. Spectra were taken and analyses performed for samples collected on the lowest five impactor stages, with cut-off sizes of D50 = 10, 18, 32, 56 and 100 nm. The micro-FTIR spectra revealed the presence of aliphatic C-H, aromatic C-H and various oxygenated functional groups, including carbonyl (C=O), C-O-C and C-OH groups. Spectral analyses were made to examine variations of these functional groups with flame temperature, sampling position and particle size. Results indicate that increases in flame temperature lead to higher contents of non-aromatic functionalities. Functional group concentrations were found to be ordered as follows: [C=O] < [C-O] < [aliphatic C-H]. Aliphatic C-H was found to exist in significant quantities, with very few oxygenated groups present. The ratio of these chemical functionalities to aromatic C-H remains constant for particle sizes spanning 10-100 nm. The results confirm a previous experimental finding: a significant amount of aliphatic compounds is present in nascent soot formed in the flames studied, especially towards larger distances above the burner surface. © 2010 The Owner Societies.


El-Khoury P.Z.,Pacific Northwest National Laboratory | Hess W.P.,Pacific Northwest National Laboratory
Chemical Physics Letters | Year: 2013

Using tools of density functional theory, we compute the Raman spectra of 1,3-propanedithiol (PDT) isolated in the gas phase, solvated in methanol, tethered either to the face or vertex of a tetrahedral Ag20 cluster, and bridging two Ag20 clusters. The derived molecular polarizability derivative tensors are used to simulate molecular orientation-dependent Raman scattering, achieved by rotating the polarizability tensors relative to vector components of the incident/scattered radiation fields. Our framework is weighed against SERS experiments which probe the optical response at a hotspot formed by an Ag surface coated with PDT and a single 60-nm Ag nanosphere. © 2013 Elsevier B.V. All rights reserved.
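
The tensor-rotation step described above amounts to transforming the molecular-frame polarizability derivative into the lab frame and projecting it onto the field polarizations. A minimal sketch follows, with a hypothetical axially symmetric mode tensor standing in for the DFT-derived one:

```python
import numpy as np

def rot_y(theta):
    """Rotation tilting the molecular z-axis by theta within the xz-plane."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def raman_intensity(alpha, R, e_in, e_out):
    """Orientation-dependent Raman intensity of one normal mode:
    I ~ |e_out . (R alpha R^T) . e_in|^2, where alpha is the molecular-frame
    polarizability derivative tensor and R the molecule-to-lab rotation."""
    return np.abs(e_out @ (R @ alpha @ R.T) @ e_in) ** 2

alpha = np.diag([1.0, 1.0, 4.0])   # hypothetical mode tensor
z = np.array([0.0, 0.0, 1.0])      # z-polarized incident and scattered fields
for deg in (0, 30, 60, 90):
    I = raman_intensity(alpha, rot_y(np.radians(deg)), z, z)
    print(f"tilt {deg:2d} deg: relative intensity {I:.2f}")
```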


Rucker D.F.,HydroGEOPHYSICS Inc. | Noonan G.E.,HydroGEOPHYSICS Inc. | Greenwood W.J.,Pacific Northwest National Laboratory
Engineering Geology | Year: 2011

Dredging and widening of the Panama Canal is currently being conducted to allow larger vessels to transit to and from the Americas, Asia, and Europe. Dredging efficiency relies heavily on knowledge of the types and volumes of sediments and rocks beneath the waterway to ensure the right equipment is used for their removal. To aid this process, a waterborne streaming electrical resistivity survey was conducted along the entire length of the canal to provide information on its geology. Within the confines of the canal, a total of 663 line-kilometers of electrical resistivity data were acquired using the dipole-dipole array. The support of the survey data for dredging activities was realized by calibrating and qualitatively correlating the resistivity data with information obtained from nearby logged boreholes and geological maps. The continuity of specific strata was determined in the resistivity sections by evaluating the continuity of similar ranges of resistivity values between boreholes. It was evident that differing geological units and successions can have similar ranges of resistivity values. For example, Quaternary sandy and gravelly alluvial fill from the former river channel of the Chagres River had similar resistivity ranges (generally from 40 to 250 Ω m) to those characteristic of late Miocene basalt dikes (from 100 to 400 Ω m), but for quite different reasons. Similarly, competent marine-based sedimentary rocks of the Caimito Formation were similar in resistivity values (ranging from 0.7 to 10 Ω m) to sandstone conglomerate of the Bohio Formation. Consequently, it would be difficult to use the resistivity data alone to extrapolate more complex geotechnical parameters, such as the hardness or strength of the substrate. Such analyses require detailed, objective information regarding the specific context from which the geotechnical parameters were derived. If these data from cored boreholes and detailed geological surveys are taken into account, however, then waterborne streaming resistivity surveying can be a powerful tool. In this case, it provided inexpensive and highly resolved quantitative information on the potential volume of loose suctionable material along the Gamboa Sub-reach, which could enable large cost savings to be made on a major engineering project involving modification of one of the most important navigable waterways in the world. © 2010 Elsevier B.V.


Kerisit S.,Pacific Northwest National Laboratory | Schwenzer B.,Pacific Northwest National Laboratory | Vijayakumar M.,Pacific Northwest National Laboratory
Journal of Physical Chemistry Letters | Year: 2014

Molecular dynamics (MD) simulations of the interface between graphene and the ionic liquid 1-butyl-3-methylimidazolium trifluoromethanesulfonate (BMIM OTf) were carried out to gain molecular-level insights into the performance of graphene-based supercapacitors and, in particular, determine the effects of the presence of oxygen-containing defects at the graphene surface on their integral capacitance. The MD simulations predict that increasing the surface coverage of hydroxyl groups negatively affects the integral capacitance, whereas the effect of the presence of epoxy groups is much less significant. The calculated variations in capacitance are found to be directly correlated to the interfacial structure. Indeed, hydrogen bonding between hydroxyl groups and SO3 moieties prevents BMIM+ and OTf- ions from interacting favorably in the interfacial layer and restrains the orientation and mobility of OTf- ions, thereby reducing the interfacial permittivity of the ionic liquid. The results of the simulations can facilitate the rational design of electrode materials for supercapacitors. © 2014 American Chemical Society.
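
The integral capacitance referred to above is the accumulated electrode charge per unit area divided by the interfacial potential drop relative to the potential of zero charge. A minimal sketch of how such a number is extracted from an MD charge-density profile, with all inputs hypothetical:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def potential_profile(z, rho):
    """Integrate Poisson's equation twice across the slab:
    E(z) = (1/eps0) * integral(rho dz),  psi(z) = -integral(E dz).
    z in m, rho in C/m^3; all charges appear explicitly in rho."""
    dz = z[1] - z[0]
    field = np.cumsum(rho) * dz / EPS0
    psi = -np.cumsum(field) * dz
    return psi - psi[0]

def integral_capacitance(sigma, dpsi, dpsi_pzc=0.0):
    """C_int = sigma / (dpsi - dpsi_pzc): electrode charge per area (C/m^2)
    over the interfacial potential drop (V) relative to the PZC value."""
    return sigma / (dpsi - dpsi_pzc)

# Hypothetical numbers: 0.08 C/m^2 at a 1.6 V drop, PZC drop of 0.4 V
print(f"{integral_capacitance(0.08, 1.6, 0.4) * 100:.1f} uF/cm^2")
```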


Zehr R.T.,Pacific Northwest National Laboratory | Henderson M.A.,Pacific Northwest National Laboratory
Physical Chemistry Chemical Physics | Year: 2010

The ultraviolet (UV) photon-induced decomposition of hexafluoroacetone ((CF3)2CO; HFA) adsorbed on the rutile TiO2(110) surface was investigated using photon stimulated desorption (PSD) and temperature programmed desorption (TPD). HFA adsorbs both molecularly and dissociatively on the reduced TiO2(110) surface. The initial ∼0.2 ML (where 1 ML equates to the cation site density of the ideal surface) coverage of HFA thermally decomposes resulting in the formation of adsorbed trifluoroacetate groups, with further HFA exposure resulting in molecular adsorption. No evidence was found for HFA photochemistry on the reduced surface. HFA adsorbed and desorbed molecularly on a pre-oxidized TiO2(110) surface with only a minor amount (∼1%) of thermal decomposition in TPD. A new adsorption state at 350 K was assigned to the reversible formation of a photoactive HFA-diolate species [(CF3)2COO]. UV irradiation depleted the 350 K state, resulting in evolution of CF3, CO, and CO2 in the gas phase and formation of surface bound trifluoroacetate groups. 18O isotope scrambling experiments showed that the ejected CO2 was from photodecomposition of the HFA-diolate species while the CO photoproduct was not. These results are in contrast to the photochemical behavior of acetone, butanone and acetaldehyde on TiO2(110), where UV irradiation resulted in the gas phase ejection of one of the carbonyl substituent groups as well as a stoichiometric amount of carboxylate left on the surface. We conclude that fluorination alters the electronic structure of adsorbed carbonyls on TiO2(110) in such a way as to promote complete fragmentation of the adsorbed carbonyl complex to form gas phase CO2 as well as to open up additional photodissociation pathways leading to CO production. © the Owner Societies 2010.


Adams S.M.,Oak Ridge National Laboratory | Ham K.D.,Pacific Northwest National Laboratory
Environmental Management | Year: 2011

Recovery dynamics in a previously disturbed stream were investigated to determine the influence of a series of remedial actions on stream recovery and to evaluate the potential application of bioindicators as an environmental management tool. A suite of bioindicators, representing five different functional response groups, was measured annually for a sentinel fish species over a 15-year period during which a variety of remedial and pollution abatement actions were implemented. Trends in biochemical, physiological, condition, growth, bioenergetic, and nutritional responses demonstrated that the health status of the sentinel fish species in the disturbed stream approached that of fish in the reference stream by the end of the study. Two major remedial actions, dechlorination and water flow management, had large effects on stream recovery, resulting in an improvement in the bioenergetic, disease, nutritional, and organ condition status of the sentinel fish species. A subset of bioindicators responded rather dramatically to temporal trends affecting all sites, but some indicators showed little response to disturbance or to restoration activities. In assessing recovery of aquatic systems, appropriate integrative structural indices along with a variety of sensitive functional bioindicators should be used to understand the mechanistic basis of stress and recovery and to reduce the risk of false positives. Understanding the mechanistic links between stressors, the stress responses of biota, and the recovery dynamics of aquatic systems reduces the uncertainty involved in environmental management and regulatory decisions, resulting in an increased ability to predict the consequences of restoration and remedial actions for aquatic systems. © 2011 Springer Science+Business Media, LLC.


Wick C.D.,Louisiana Tech University | Dang L.X.,Pacific Northwest National Laboratory
Journal of Chemical Physics | Year: 2010

Molecular dynamics simulations with a polarizable multistate empirical valence-bond model were carried out to investigate NaOH dissociation and pairing in water bulk and at the air-water interface. It was found that NaOH readily dissociates in the bulk and the effect of the air-water interface on NaOH dissociation is fairly minor. Also, NaOH complexes were found to be strongly repelled from the air-water interface, which is consistent with surface tension measurements. At the same time, a very strong preference for the hydroxide anion to be oriented toward the air was found that persisted a few angstroms toward the liquid from the Gibbs dividing surface of the air-water interface. This was due to a preference for the hydroxide anion to have its hydrogen pointing toward the air and the fact that the sodium ion was more likely to be found near the hydroxide oxygen than hydrogen. As a consequence, the simulation results show that surfaces of NaOH solutions should be negatively charged, in agreement with experimental observations, but also that the hydroxide has little surface affinity. This provides the possibility that the surface of water can be devoid of hydroxide anions, but still have a strong negative charge. © 2010 American Institute of Physics.


Wick C.D.,Louisiana Tech University | Dang L.X.,Pacific Northwest National Laboratory
Journal of Chemical Physics | Year: 2010

NaCl pairing and dissociation were investigated at the CCl4-water and 1,2-dichloroethane (DCE)-water interfaces and compared with dissociation results in the bulk and at the air-water interface, utilizing polarizable potentials. The transition path sampling methodology was used to calculate the rate constant for dissociation, while umbrella sampling was used to map out a free energy profile for NaCl dissociation. The results showed that ion pairing was weakest at the organic-water interfaces, even weaker than in the water bulk. This is in contrast to what has been observed previously for the air-water interface, at which NaCl ion pairing is stronger than in the bulk [C. D. Wick, J. Phys. Chem. C 113, 6356 (2009)]. A consequence of the weaker binding at the organic-water interfaces was that ion dissociation was faster than in the other systems studied. Interactions of the organic phase with the ions influenced the magnitude of the Cl- induced dipole moment, and at the organic-water interfaces, the average Cl- induced dipole was found to be lower than at the air-water interface, weakening interactions with Na+. These weaker interactions were found to be responsible for the weaker ion pairing found at the organic-water interfaces. © 2010 American Institute of Physics.


Liu C.,Washington State University | Wang H.,Pacific Northwest National Laboratory | Karim A.M.,Pacific Northwest National Laboratory | Sun J.,Washington State University | And 2 more authors.
Chemical Society Reviews | Year: 2014

Increasing energy demand, especially in the transportation sector, and soaring CO2 emissions necessitate the exploitation of renewable sources of energy. Despite the large variety of new energy carriers, liquid hydrocarbon still appears to be the most attractive and feasible form of transportation fuel taking into account the energy density, stability and existing infrastructure. Biomass is an abundant, renewable source of energy; however, utilizing it in a cost-effective way is still a substantial challenge. Lignocellulose is composed of three major biopolymers, namely cellulose, hemicellulose and lignin. Fast pyrolysis of biomass is recognized as an efficient and feasible process to selectively convert lignocellulose into a liquid fuel - bio-oil. However, bio-oil from fast pyrolysis contains a large amount of oxygen, distributed in hundreds of oxygenates. These oxygenates are the cause of many negative properties, such as low heating value, high corrosiveness, high viscosity, and instability; they also greatly limit the application of bio-oil, particularly as transportation fuel. Hydrocarbons derived from biomass are most attractive because of their high energy density and compatibility with the existing infrastructure. Thus, converting lignocellulose into transportation fuels via catalytic fast pyrolysis has attracted much attention. Many studies related to catalytic fast pyrolysis of biomass have been published. The main challenge of this process is the development of active and stable catalysts that can deal with a large variety of decomposition intermediates from lignocellulose. This review starts with the current understanding of the chemistry in fast pyrolysis of lignocellulose and focuses on the development of catalysts in catalytic fast pyrolysis. Recent progress in the experimental studies on catalytic fast pyrolysis of biomass is also summarized with the emphasis on bio-oil yields and quality. © the Partner Organisations 2014.


Zhu Y.,Pacific Northwest National Laboratory | Biddy M.J.,National Renewable Energy Laboratory | Jones S.B.,Pacific Northwest National Laboratory | Elliott D.C.,Pacific Northwest National Laboratory | Schmidt A.J.,Pacific Northwest National Laboratory
Applied Energy | Year: 2014

Techno-economic analysis (TEA) was implemented to evaluate the feasibility of developing a commercial large-scale woody biomass HTL and upgrading plant. In this system, woody biomass at 2000 dry metric tons/day was assumed to be converted to bio-oil via HTL and further upgraded to produce liquid fuel. Two cases were evaluated: a state-of-technology (SOT) case with HTL experimental testing results underpinning the major design basis, and a goal case considering future improvements for a commercial plant with mature technologies. Process simulation and cost analysis were conducted. The annual production rate for the final hydrocarbon product was estimated to be 42.9 and 69.9 million gallons gasoline-equivalent (GGE) for the SOT and goal cases, respectively. The minimum fuel selling price (MFSP) was estimated to be $4.44/GGE for the SOT case and $2.52/GGE for the goal case. For advancing from the SOT to the goal case, the assumption of reducing the organics loss to the water phase led to the largest reduction in the production cost. An alternative configuration of small-scale distributed HTL plants was also evaluated. Sensitivity analysis identified key factors affecting the goal case and the cost uncertainties resulting from the assumed uncertainties in selected parameters. © 2013 Elsevier Ltd.
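
The MFSP is the fuel price at which the plant's discounted cash flow breaks even at the assumed rate of return. A back-of-envelope version of that calculation is sketched below with entirely hypothetical inputs; the published analysis uses a full discounted cash flow with taxes, depreciation, and working capital.

```python
def mfsp(capex, annual_opex, gge_per_year, rate=0.10, years=30):
    """Back-of-envelope minimum fuel selling price ($/GGE): annualize
    capital with a capital recovery factor and spread the total annual
    cost over annual fuel output."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (capex * crf + annual_opex) / gge_per_year

# Hypothetical inputs, not the paper's cost basis
print(f"${mfsp(capex=500e6, annual_opex=120e6, gge_per_year=69.9e6):.2f}/GGE")
```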


Wiese S.,Pacific Northwest National Laboratory | Kilgore U.J.,Pacific Northwest National Laboratory | Dubois D.L.,Pacific Northwest National Laboratory | Bullock R.M.,Pacific Northwest National Laboratory
ACS Catalysis | Year: 2012

A nickel(II) bis(diphosphine) complex, [Ni(PMe2NPh2)2](BF4)2 (PMe2NPh2 = 1,5-diphenyl-3,7-dimethyl-1,5-diaza-3,7-diphosphacyclooctane), has been synthesized and characterized. This complex, which contains pendant amines in the diphosphine ligand, is an electrocatalyst for hydrogen production by proton reduction. Using [(DMF)H]OTf as the acid, a turnover frequency of 1,540 s-1 was obtained with no added water, and a turnover frequency of 6,700 s-1 was found with 1.0 M water added. Thermochemical studies show that the hydride donor ability of [HNi(PMe2NPh2)2](BF4) is ΔG°H- = 54.0 kcal/mol, and we estimate a driving force for H2 elimination of 13.8 kcal/mol. [Ni(PMe2NPh2)2](BF4)2 is the fastest H2 production catalyst in the [Ni(PR2NR'2)2](BF4)2 family of complexes. © 2012 American Chemical Society.


We rely on a hierarchical approach to identify the low-lying isomers and corresponding global minima of the pentagonal dodecahedron (H2O)20 and H3O+(H2O)20 nanoclusters. Initial screening of the isomers is performed using classical interaction potentials, namely the Transferable Interaction 4-site Potential (TIP4P) and the Thole-Type Flexible Model, versions 2.0 (TTM2-F) and 2.1 (TTM2.1-F), for (H2O)20, and the Anisotropic Site Potential (ASP) for H3O+(H2O)20. The nano-networks obtained with those potentials were subsequently refined at the density functional theory (DFT) level with the Becke 3-parameter Lee-Yang-Parr (B3LYP) functional and at the second-order Møller-Plesset perturbation (MP2) level of theory. For the pentagonal dodecahedron (H2O)20, DFT (B3LYP) and MP2 produced the same global minimum. However, this was not the case for the H3O+(H2O)20 cluster, for which MP2 produced a different network for the global minimum than DFT (B3LYP). The low-lying networks of H3O+(H2O)20 correspond to structures having nine 'free' OH bonds and the hydronium ion on the surface of the nanocluster. The IR spectra of the various networks are further analysed in the OH stretching ('fingerprint') region, and the various bands are assigned to structural arrangements of the underlying hydrogen-bonding network. © 2012 Canadian Society for Chemical Engineering.


Fu N.,Pacific Northwest National Laboratory | Xiong Y.,Pacific Northwest National Laboratory | Squier T.C.,Pacific Northwest National Laboratory
Journal of the American Chemical Society | Year: 2012

Photoswitchable fluorescent probes capable of the targeted labeling of tagged proteins are of significant interest due to their ability to enable in situ imaging of protein complexes within native biomolecular assemblies. Here we describe the synthesis of a fluorescent probe (AsCy3Cy5) and demonstrate the targeted labeling and super-resolution imaging of a tagged protein within a supramolecular protein complex. © 2012 American Chemical Society.


Shokri A.,University of Minnesota | Schmidt J.,University of Minnesota | Wang X.-B.,Pacific Northwest National Laboratory | Kass S.R.,University of Minnesota
Journal of the American Chemical Society | Year: 2012

Nature employs flexible molecules to bind anions in a variety of physiologically important processes, whereas supramolecular chemists have been designing rigid substrates that minimize or eliminate intramolecular hydrogen bond interactions to carry out anion recognition. Herein, the association of a flexible polyhydroxy alkane with chloride ion is described, and the bound receptor is characterized by gas-phase infrared and photoelectron spectroscopy, by computations, and by determination of its binding constant as a function of temperature in acetonitrile. © 2012 American Chemical Society.


Ferguson R.M.,University of Washington | Minard K.R.,Pacific Northwest National Laboratory | Khandhar A.P.,University of Washington | Krishnan K.M.,University of Washington
Medical Physics | Year: 2011

Purpose: Magnetic particle imaging (MPI), using magnetite nanoparticles (MNPs) as tracer material, shows great promise as a platform for fast tomographic imaging. To date, the magnetic properties of MNPs used in imaging have not been optimized. As nanoparticle magnetism shows strong size dependence, the authors explore how varying MNP size impacts imaging performance in order to determine optimal MNP characteristics for MPI at any driving field frequency f0. Methods: Monodisperse MNPs of varying size were synthesized and their magnetic properties characterized. Their MPI response was measured experimentally using a custom-built MPI transceiver designed to detect the third harmonic of MNP magnetization. The driving field amplitude H0 = 6 mT μ0-1 and frequency f0 = 250 kHz were chosen to be suitable for imaging small animals. Experimental results were interpreted using a model of dynamic MNP magnetization that is based on the Langevin theory of superparamagnetism and accounts for sample size distribution and size-dependent magnetic relaxation. Results: The experimental results show a clear variation in the MPI signal intensity as a function of MNP diameter that is in agreement with simulated results. A maximum in the plot of MPI signal vs MNP size indicates there is a particular size that is optimal for the chosen f0. Conclusions: The authors observed that MNPs 15 nm in diameter generate maximum signal amplitude in MPI experiments at 250 kHz. The authors expect the physical basis for this result, the change in magnetic relaxation with MNP size, will impact MPI under other experimental conditions. © 2011 American Association of Physicists in Medicine.
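
The dynamic-magnetization model referenced above builds on the Langevin theory of superparamagnetism. The sketch below evaluates only the equilibrium Langevin response of monodisperse particles to a sinusoidal drive field and extracts the third harmonic by FFT; it omits the size-distribution and relaxation terms of the authors' full model, and the 300 K temperature is an assumed value.

```python
# Equilibrium Langevin sketch of the third-harmonic MPI signal.
# Drive parameters follow the abstract (6 mT/mu0 at 250 kHz); the
# temperature and the single-size assumption are simplifications.
import numpy as np

MU0 = 4e-7 * np.pi       # vacuum permeability (T m/A)
KB = 1.380649e-23        # Boltzmann constant (J/K)
MS = 446e3               # magnetite saturation magnetization (A/m)

def third_harmonic(diameter_nm, H0=6e-3 / MU0, f0=250e3, T=300.0):
    """Third-harmonic amplitude relative to the fundamental."""
    d = diameter_nm * 1e-9
    m = MS * np.pi * d**3 / 6             # particle moment (A m^2)
    t = np.arange(4096) / (4096 * f0)     # exactly one drive period
    xi = MU0 * m * H0 * np.sin(2 * np.pi * f0 * t) / (KB * T)
    # Langevin function L(x) = coth(x) - 1/x, with a safe small-x limit
    xi_safe = np.where(np.abs(xi) > 1e-9, xi, 1.0)
    M = np.where(np.abs(xi) > 1e-9,
                 1.0 / np.tanh(xi_safe) - 1.0 / xi_safe, xi / 3.0)
    spectrum = np.abs(np.fft.rfft(M))     # bin k = k-th harmonic of f0
    return spectrum[3] / spectrum[1]

for d in (10, 15, 20, 25):
    print(f"{d} nm: {third_harmonic(d):.4f}")
```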


Bond L.J.,Pacific Northwest National Laboratory
American Society of Mechanical Engineers, Pressure Vessels and Piping Division (Publication) PVP | Year: 2010

There is growing interest in life extensions to enable longer term operation (LTO) for both existing nuclear power plants (NPPs) and proposed new NPPs. In order to justify an initial license extension for the 40-60 year period, new non-destructive examination (NDE) approaches have been developed and deployed by NPP operators in their Aging Management Programs (AMPs). However, to achieve the goals of even longer term operation, and specifically for the USA in looking at methodologies to support subsequent license renewal periods (i.e., 60-80 years, and beyond), it is necessary to understand the capabilities of current NDE methods to detect, monitor and trend degradation and hence enable timely implementation of appropriate corrective actions. This paper discusses insights from past experience, the state-of-the-art, and current activities in the move towards providing a capacity for proactive management of materials degradation (PMMD) to support NPP LTO. © 2010 by ASME.


In the event of an accidental or intentional release of radionuclides into a populated area, massive numbers of people may require radiobioassay screening as triage for dose-reduction therapy or identification for longer-term follow-up. If the event released significant levels of beta- or alpha-emitting radionuclides, in vivo assays would be ineffective. Therefore, highly efficient and rapid analytical methods for radionuclide detection from submitted spot urine samples (≤50 mL) would be required. At present, the quantitative determination of alpha-emitting radionuclides from urine samples is highly labor intensive and requires significant time to prepare and analyze samples. Sorbent materials that provide effective collection and enable rapid assay could significantly streamline the radioanalytical process. The authors have demonstrated the use of magnetic nanoparticles as a novel extraction medium for four alpha-emitting radionuclides of concern (polonium, radium, uranium, and americium) from chemically unmodified and pH-2 human urine. Herein, the initial experimental sorption results are presented, along with a novel method that uses magnetic nanoparticles to extract radionuclides from unmodified human urine and then magnetically collect the particles for subsequent alpha-counting-source preparation. Additionally, a versatile human dose model is constructed that determines the detector count times required to estimate dose at specific protective-action thresholds. The model provides a means to assess a method's detection capabilities and uses fundamental health physics parameters and actual experimental data as core variables. The modeling shows that, with effective sorbent materials, rapid screening for alpha-emitters is possible with a 50-mL urine sample collected within 1 wk of exposure/intake. © Copyright 2011 Health Physics Society.
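
The abstract does not give the dose model's equations; as a hedged sketch of the counting-statistics core such a model needs, the code below estimates the count time at which the expected net alpha counts exceed the Currie detection limit. Every parameter value here is an illustrative assumption, not a figure from the paper.

```python
# Generic counting-statistics sketch: smallest count time at which a
# source is detectable, using Currie's detection limit
# L_D ~= 2.71 + 4.65*sqrt(B) in counts. All inputs are assumptions.
import math

def required_count_time(activity_bq, efficiency, recovery, bkg_cps,
                        t_max=86400.0):
    """Smallest count time (s) where expected net counts reach L_D."""
    for t in range(60, int(t_max), 60):
        background = bkg_cps * t
        detection_limit = 2.71 + 4.65 * math.sqrt(background)
        net_counts = activity_bq * efficiency * recovery * t
        if net_counts >= detection_limit:
            return t
    return None  # not detectable within t_max

# Example: 0.05 Bq in the counted source, 25% detector efficiency,
# 80% chemical recovery, 1e-4 cps alpha background.
print(required_count_time(0.05, 0.25, 0.80, 1e-4), "s")
```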


Vaughey J.T.,Argonne National Laboratory | Liu G.,Lawrence Berkeley National Laboratory | Zhang J.-G.,Pacific Northwest National Laboratory
MRS Bulletin | Year: 2014

The success of high capacity energy storage systems based on lithium (Li) batteries relies on the realization of the promise of Li-metal anodes. Li metal has many advantageous properties, including an extremely high theoretical specific capacity (3860 mAh g-1), the lowest electrochemical potential (-3.040 V versus standard hydrogen electrode), and low density (0.59 g cm-3), which together make it a very desirable electrode for energy storage devices. However, while primary Li batteries are used for numerous commercial applications, rechargeable batteries that utilize Li-metal anodes have not been as successful. This article discusses the properties of Li metal in the absence of surface stabilization, as well as three different approaches currently under investigation for stabilizing the surface of Li metal to control its reactivity within the electrochemical environment of a Li-based battery. © 2014 Materials Research Society.


Croy J.R.,Argonne National Laboratory | Abouimrane A.,Argonne National Laboratory | Zhang Z.,Pacific Northwest National Laboratory
MRS Bulletin | Year: 2014

The commercialization of lithium-ion batteries has profoundly changed our lives, enabling portable electronic devices that have revolutionized communications, entertainment, medicine, and more. After three decades of commercial development, researchers around the world are now pursuing major advances that would allow this technology to power the next generation of light-duty electric and hybrid-electric vehicles. If this goal is to be met, concerted advances in safety, cost, cycle life, and energy density must be realized through improvements in the properties of the highly correlated, but separate, components of lithium-ion energy-storage systems. © 2014 Materials Research Society.


Wiedner E.S.,Pacific Northwest National Laboratory | Appel A.M.,Pacific Northwest National Laboratory
Journal of the American Chemical Society | Year: 2014

To gain insight into thermodynamic barriers for the reduction of CO into CH3OH, free energies for the reduction of [CpRe(PPh3)(NO)(CO)]+ into CpRe(PPh3)(NO)(CH2OH) have been determined from experimental measurements. Using model complexes, the free energies for the transfer of H+, H-, and e- have been determined. A pKa of 10.6 was estimated for [CpRe(PPh3)(NO)(CHOH)]+ by measuring the pKa of the analogous [CpRe(PPh3)(NO)(CMeOH)]+. The hydride donor ability (ΔG°H-) of CpRe(PPh3)(NO)(CH2OH) was estimated to be 58.0 kcal mol-1, based on calorimetry measurements of the hydride-transfer reaction between CpRe(PPh3)(NO)(CHO) and [CpRe(PPh3)(NO)(CHOMe)]+ to generate the methylated analogue, CpRe(PPh3)(NO)(CH2OMe). Cyclic voltammograms recorded on CpRe(PPh3)(NO)(CMeO), CpRe(PPh3)(NO)(CH2OMe), and [CpRe(PPh3)(NO)(CHOMe)]+ displayed either a quasireversible oxidation (neutral species) or reduction (cationic species). These potentials were used as estimates for the oxidation of CpRe(PPh3)(NO)(CHO) or CpRe(PPh3)(NO)(CH2OH) or the reduction of [CpRe(PPh3)(NO)(CHOH)]+. Combination of the thermodynamic data permits construction of three-dimensional free energy landscapes under varying conditions of pH and H2 pressure. The free energy of H2 addition (ΔG°H2) to [CpRe(PPh3)(NO)(CO)]+ (+15 kcal mol-1) was identified as the most significant thermodynamic impediment to the reduction of CO. DFT computations on a series of [CpXM(L)(NO)(CO)]+ (M = Re, Mn) complexes indicate that ΔG°H2 can be varied by 11 kcal mol-1 through variation of both the ancillary ligands and the metal. © 2014 American Chemical Society.


Spurgeon S.R.,Pacific Northwest National Laboratory
MRS Bulletin | Year: 2014

The University of Cincinnati develops many paper-based projects as part of an innovative laboratory class that bridges the gap between science and engineering education. Andrew Steckl, Gieringer Professor and Ohio Eminent Scholar at the University of Cincinnati's Nanoelectronics Laboratory, has developed a special-topics class in which incoming graduate students design paper-based devices. The students were offered project suggestions but were otherwise free to choose their own ideas, which were vetted by in-class discussions. One group studied the design of smart diapers that can characterize waste products and record biometrics. Prajokta Ray, a PhD student, designed a flexible, urine-powered electrochemical battery to provide power for various sensors in the diaper. In another case, a student attempted to design an adhesive-backed portable weather sensor to detect local temperature and humidity. Steckl believes that a course in which students are required to set goals, adhere to a strict budget, and deliver a final product can prepare them for the demands imposed on scientists today.


Srinivasan R.,Texas A&M University | Zhang X.,Pacific Northwest National Laboratory | Arnold J.,U.S. Department of Agriculture
Transactions of the ASABE | Year: 2010

Physically based, distributed hydrologic models are increasingly used in assessments of water resources, best management practices, and climate and land use changes. Model performance evaluation in ungauged basins is an important research topic. In this study, we propose a framework for developing Soil and Water Assessment Tool (SWAT) input data, including hydrography, terrain, land use, soil, tile, weather, and management practices, for the Upper Mississippi River basin (UMRB). We also present a performance evaluation of SWAT hydrologic budget and crop yield simulations in the UMRB without calibration. The uncalibrated SWAT model ably predicts annual streamflow at 11 USGS gauges and crop yield at the four-digit hydrologic unit code (HUC) scale. For monthly streamflow simulation, the performance of SWAT is slightly poorer than that for annual flow, which may be due to incomplete information about reservoirs and dams within the UMRB. Further validation shows that SWAT can predict the base flow contribution ratio reasonably well. Compared with three calibrated SWAT models developed in previous studies of the entire UMRB, the uncalibrated SWAT model presented here provides similar results. Overall, the SWAT model can provide satisfactory predictions of the hydrologic budget and crop yield in the UMRB without calibration. The results emphasize the importance and promise of using accurate spatial input data for the physically based SWAT model. This study also examines biofuel-biomass production by simulating all agricultural lands with switchgrass, producing satisfactory results in estimating biomass availability for biofuel production. © 2010 American Society of Agricultural and Biological Engineers.
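
Streamflow evaluations of SWAT such as this one are commonly summarized with the Nash-Sutcliffe efficiency (NSE); the sketch below computes that statistic for paired observed and simulated flows. The flow values are hypothetical, and the paper's exact evaluation metrics may differ.

```python
# Nash-Sutcliffe efficiency for hydrologic model evaluation.
# NSE = 1 - SSE/SST; 1.0 is a perfect fit, values below 0 mean the
# model predicts worse than the observed mean.
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE of simulated vs observed series."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    sse = np.sum((observed - simulated) ** 2)
    sst = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / sst

obs = [120.0, 95.0, 410.0, 230.0, 180.0]  # hypothetical monthly flows (m3/s)
sim = [110.0, 105.0, 380.0, 250.0, 170.0]
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```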


Zhang Z.F.,Pacific Northwest National Laboratory
Soil Science Society of America Journal | Year: 2010

Accurate simulation and prediction of the flow and transport of solutes in a heterogeneous vadose zone require hydraulic properties appropriate to the spatial scale of interest. Upscaling techniques are needed to provide effective properties for describing the vadose zone system's behavior from information collected at a much smaller scale. Numerical experiments were performed to investigate the effective unsaturated hydraulic conductivity of soils with different degrees and dimensionalities of heterogeneity. Researchers have previously extended Matheron's method for determining the hydraulic conductivity of soils with one-dimensional heterogeneity under saturated conditions to unsaturated conditions. In this work, Matheron's method was further extended to unsaturated soils with two- and three-dimensional heterogeneity. It was found that the first-order approximation of the extended formula is similar to those based on the small-perturbation approach. The extended Matheron's method was verified using multistep numerical experiments of gravity-induced flow into synthetic soils with different degrees of heterogeneity. Results showed that the dimensionality of soil heterogeneity has a significant impact on the effective unsaturated hydraulic conductivity and that the extended Matheron's method can accurately estimate the effective conductivity of soils with multidimensional heterogeneity. © Soil Science Society of America, 5585 Guilford Rd., Madison, WI 53711 USA. All rights reserved.
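
For context, here is a minimal sketch of Matheron's classical saturated-flow result, which the study generalizes to unsaturated conditions: for a lognormally distributed conductivity field with ln-K variance σ² in d spatial dimensions, K_eff = K_G exp(σ²(1/2 - 1/d)). The numeric inputs below are illustrative only; the paper's unsaturated extension applies an analogous treatment to K(h).

```python
# Matheron's conjecture for effective saturated conductivity.
# d = 1 recovers the harmonic mean, d = 2 the geometric mean, and
# d = 3 gives K_G * exp(sigma2 / 6). Inputs are placeholders.
import math

def matheron_keff(k_geometric, sigma2_lnK, dim):
    """Effective conductivity for dim-dimensional heterogeneity."""
    return k_geometric * math.exp(sigma2_lnK * (0.5 - 1.0 / dim))

kG, s2 = 1.0e-5, 1.0   # geometric mean (m/s) and ln-K variance, assumed
for d in (1, 2, 3):
    print(f"{d}-D heterogeneity: K_eff = {matheron_keff(kG, s2, d):.3e} m/s")
```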


Alexandrov V.,Pacific Northwest National Laboratory | Rosso K.M.,Pacific Northwest National Laboratory
Physical Chemistry Chemical Physics | Year: 2015

Goethite (α-FeOOH) surfaces represent one of the most ubiquitous redox-active interfaces in the environment, playing an important role in biogeochemical metal cycling and contaminant residence in the subsurface. Fe(II)-catalyzed recrystallization of goethite is a fundamental process in this context, but the proposed Fe(II)aq-Fe(III)goethite electron and iron atom exchange mechanism of recrystallization remains poorly understood at the atomic level. We examine the adsorption of aqueous Fe(II) and subsequent interfacial electron transfer (ET) between adsorbed Fe(II) and structural Fe(III) at the (110) and (021) goethite surfaces using density functional theory calculations including Hubbard U corrections (DFT + U), aided by ab initio molecular dynamics simulations. We investigate various surface sites for the adsorption of Fe2+(H2O)6 in different coordination environments. Calculated energies for adsorbed complexes at both surfaces favor monodentate complexes with reduced 4- and 5-fold coordination over higher-dentate structures and 6-fold coordination. Hydrolysis of H2O ligands is observed for some pre-ET adsorbed Fe(II) configurations. ET from the adsorbed Fe(II) into the goethite lattice is calculated to be energetically uphill in all cases, but simultaneous proton transfer from H2O ligands of the adsorbed complexes to surface oxygen species stabilizes post-ET states. We find that surface defects such as oxygen vacancies near the adsorption site can also stabilize post-ET states, enabling the Fe(II)aq-Fe(III)goethite interfacial electron transfer reaction implied by experiments to proceed. © 2015 the Owner Societies.


Nguyen B.N.,Pacific Northwest National Laboratory | Henager Jr. C.H.,Pacific Northwest National Laboratory
Journal of Nuclear Materials | Year: 2013

SiC/SiC composites used in fusion reactor applications are subjected to high heat fluxes and require knowledge and tailoring of their in-service thermal conductivity. Accurately predicting the thermal conductivity of SiC/SiC composites as a function of temperature will guide the design of these materials for their intended use, which will eventually include the effects of 14-MeV neutron irradiation. This paper applies an Eshelby-Mori-Tanaka approach (EMTA) to compute the thermal conductivity of unirradiated SiC/SiC composites. The homogenization procedure includes three steps. First, EMTA computes the homogenized thermal conductivity of the unidirectional (UD) SiC fiber surrounded by its coating layer. Second, it computes the thermal conductivity of the UD composite formed by the equivalent SiC fibers embedded in a SiC matrix. Finally, the thermal conductivity of the as-formed SiC/SiC composite is obtained by averaging the solution for the UD composite over all possible fiber orientations using the second-order fiber orientation tensor. The EMTA predictions for the transverse thermal conductivity of several types of SiC/SiC composites with different fiber types and interfaces are compared to the predicted and experimental results of Youngblood et al. [J. Nucl. Mater. 307-311 (2002) 1120-1125; Fusion Sci. Technol. 45 (2004) 583-591; Compos. Sci. Technol. 62 (2002) 1127-1139]. © 2013 Elsevier B.V. All rights reserved.
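
The final averaging step has a compact closed form for a second-order property such as conductivity: the orientation average is k_T I + (k_L - k_T) a, where k_L and k_T are the UD axial and transverse conductivities and a is the second-order orientation tensor. The sketch below applies that formula with placeholder UD conductivities and an assumed planar-random orientation state; it is not the paper's EMTA code.

```python
# Orientation averaging of a UD (transversely isotropic) conductivity
# using the second-order orientation tensor a. All numeric values are
# placeholders, not results from the Nguyen and Henager paper.
import numpy as np

def orientation_averaged_conductivity(kL, kT, a):
    """Effective conductivity tensor: kT*I + (kL - kT)*a."""
    return kT * np.eye(3) + (kL - kT) * np.asarray(a)

# Hypothetical UD conductivities (W/m-K) and a planar-random fiber
# orientation state in the 1-2 plane: a = diag(0.5, 0.5, 0.0).
k_eff = orientation_averaged_conductivity(12.0, 6.0, np.diag([0.5, 0.5, 0.0]))
print(np.round(k_eff, 2))
```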


Szanyi J.,Pacific Northwest National Laboratory | Kwak J.H.,Pacific Northwest National Laboratory
Physical Chemistry Chemical Physics | Year: 2014

The adsorption of CO2 and CO was investigated on a pure γ-Al2O3 support material that has been used in Pd and Ru catalysts for the reduction of CO2. The adsorption of CO2 resulted in the formation of carbonates, bicarbonates, and linearly adsorbed CO2 species. The amount and nature of the adsorbed species depended on the annealing temperature of the alumina support. On γ-Al2O3 annealed at 473 K, mostly bicarbonates formed, while no adsorbed CO2 was seen on this highly hydroxylated surface. With increasing calcination temperature the amounts of both surface carbonates and linearly adsorbed CO2 increased, but the most abundant surface species were still bicarbonates. Surface carbonates and adsorbed CO2 can readily be removed from the alumina surface, while bicarbonates are stable to elevated temperatures. The interaction of CO with γ-Al2O3 is much weaker than that of CO2. At room temperature CO adsorbs only on Lewis acid sites and can readily be removed by evacuation. At 100 K, CO can probe different defect sites on the alumina surface. Under no conditions did we observe the formation of any carbonates or bicarbonates upon the interaction of CO with the pure alumina support. In co-adsorption experiments, CO competes for adsorption sites with the linearly adsorbed CO2 on the 773 K-annealed γ-Al2O3 surface, but this does not result in the desorption of CO2; rather, it increases the production of weakly held carbonates. After the removal of adsorbed CO, CO2 moves back to its original adsorption sites, i.e., Lewis acidic Al3+ centers. Exposure of the CO2-saturated γ-Al2O3 to H2O did not affect any of the adsorbed surface species. The findings of this study will be used to rationalize the results of our ongoing in situ and operando studies on the reduction of CO2 on supported Pd and Ru catalysts. This journal is © the Partner Organisations 2014.


Szanyi J.,Pacific Northwest National Laboratory | Kwak J.H.,Pacific Northwest National Laboratory
Physical Chemistry Chemical Physics | Year: 2014

Alumina-supported Pd catalysts with metal loadings of 0.5, 2.5, and 10 wt% were investigated by in situ FTIR spectroscopy in order to understand the nature of the adsorbed species formed during their exposure to CO2 and CO. Exposing the annealed samples to CO2 at 295 K resulted in the formation of alumina support-bound surface species only: linearly adsorbed CO2, bidentate carbonates, and bicarbonates. Room-temperature exposure of all three samples to CO produced IR features characteristic of both ionic and metallic Pd, as well as the bands observed upon CO2 adsorption (alumina support-bound species). Low-temperature (100 K) adsorption of CO on the three samples provided information about the state of Pd after oxidation and reduction. Oxidized samples contained exclusively ionic Pd, while mostly metallic Pd was present in the reduced samples. Subsequent annealing of the CO-saturated samples revealed the facile (low-temperature) reduction of PdOx species by adsorbed CO. This process was evidenced by variations in the IR bands characteristic of ionic and metallic Pd-bound CO, as well as by the appearance of IR bands associated with CO2 adsorption as a function of annealing temperature. Samples containing oxidized Pd species (oxidized, annealed, or reduced) always produced CO2 upon exposure to CO, while no CO2-related surface entities were observed on samples containing only fully reduced (metallic) Pd. This journal is © the Partner Organisations 2014.


Shvartsburg A.A.,Pacific Northwest National Laboratory | Smith R.D.,Pacific Northwest National Laboratory
Analytical Chemistry | Year: 2012

Differential ion mobility spectrometry (FAIMS) integrated with mass spectrometry (MS) is a powerful new tool for biological and environmental analyses. Large proteins occupy regions of FAIMS spectra distinct from those of peptides, lipids, and other medium-sized biomolecules, likely because strong electric fields align the huge dipoles common to macroions. Here we confirm this phenomenon in separations of proteins at extreme fields using FAIMS chips coupled to MS and demonstrate their use to detect even minor amounts of large proteins in complex matrixes of smaller proteins and peptides. © 2012 American Chemical Society.


Shvartsburg A.A.,Pacific Northwest National Laboratory | Zheng Y.,Northwestern University | Smith R.D.,Pacific Northwest National Laboratory | Kelleher N.L.,Northwestern University
Analytical Chemistry | Year: 2012

Differential ion mobility spectrometry (field asymmetric waveform ion mobility spectrometry (FAIMS)) is emerging as a broadly useful tool for separation of isomeric modified peptides with post-translational modifications (PTMs) attached to alternative residues. Such separations were anticipated to become more challenging for smaller PTMs and longer peptides. Here, we show that FAIMS can fully resolve localization variants involving a PTM as minuscule as methylation, even for larger peptides in the middle-down range. © 2012 American Chemical Society.


Miliordos E.,Pacific Northwest National Laboratory | Xantheas S.S.,Pacific Northwest National Laboratory
Physical Chemistry Chemical Physics | Year: 2014

Metal atoms typically have second and higher ionization potentials (IPs) that are larger than the IP of water, resulting in the Coulombic explosion of the first few [M(H2O)n]q+ (q ≥ 2) gas-phase clusters, as the M(q-1)+ + (H2O)n+ or MOH(q-1)+ + H3O+(H2O)n-2 energy levels are energetically more stable than the Mq+ + (H2O)n ones for small n. We present a theoretical analysis of the various electronic states arising from the sequential hydration of the Ca2+, Mg2+, and Al3+ cations with up to six water molecules. Our results quantify the relative shift of those electronic states with the degree of solvation, identify their complex interaction with other states arising from different dissociation channels, and shed light on the mechanism behind the energetic stabilization of the multiply charged hydrated Mq+(H2O)n complexes observed in aqueous solution with respect to the water ionization products. © 2014 the Partner Organisations.


Petrik N.G.,Pacific Northwest National Laboratory | Kimmel G.A.,Pacific Northwest National Laboratory
Physical Chemistry Chemical Physics | Year: 2014

Weakly bound atoms and molecules (Ar, Kr, Xe, CO, CH4, CO2, CH3OH, N2O, and N2) are used to probe the photochemical interactions of chemisorbed oxygen on rutile TiO2(110). Ultraviolet irradiation of chemisorbed oxygen co-adsorbed with the probe species leads to photon-stimulated desorption (PSD) of some of the probe species (e.g. Kr and CH4), but not others (e.g. CO2 and N2O). Without chemisorbed oxygen, the PSD yields of all the probe species are very low or not observed. Surprisingly, both chemisorbed O2 and oxygen adatoms, Oa, are photo-active for the desorption of Kr and other weakly bound species. To our knowledge, this is the first evidence for photo-activity of Oa on TiO2(110). The Kr PSD yield increases with increasing coverage of Kr and of chemisorbed oxygen. For Kr, the angular distribution of the photodesorbed atoms is approximately cosine. The Kr distribution is quite different from the angular distribution for O2 PSD, which is sharply peaked along the surface normal. We propose that various forms of chemisorbed oxygen are excited by reactions with electrons and/or holes created in the TiO2 substrate by UV photon irradiation. The photo-excited oxygen collides with, and transfers energy to, neighboring co-adsorbed atoms or molecules. For co-adsorbates with a sufficiently small binding energy to the substrate, desorption may result. The observed phenomenon provides a new tool for studying photochemical processes. © 2014 the Owner Societies.


Laskin J.,Pacific Northwest National Laboratory
International Journal of Mass Spectrometry | Year: 2015

This article presents a personal perspective regarding the development of key concepts in understanding hyperthermal collisions of polyatomic ions with surfaces as a unique tool for mass spectrometry applications. In particular, this article provides a historic overview of studies focused on understanding the phenomena underlying surface-induced dissociation (SID) and mass-selected deposition of complex ions on surfaces. Fast energy transfer in ion-surface collisions makes SID especially advantageous for structural characterization of large complex molecules, such as peptides, proteins, and protein complexes. Soft, dissociative, and reactive landing of mass-selected ions provide the basis for preparatory mass spectrometry. These techniques enable precisely controlled deposition of ions on surfaces for a variety of applications. This perspective article shows how basic concepts developed in the 1920s and 1970s have evolved to advance promising mass-spectrometry-based applications. © 2014 Elsevier B.V. All rights reserved.


Malone E.L.,Pacific Northwest National Laboratory | Engle N.L.,Pacific Northwest National Laboratory
Wiley Interdisciplinary Reviews: Climate Change | Year: 2011

As the emphasis in climate change research, international negotiations, and developing-country activities has shifted from mitigation to adaptation, vulnerability has emerged as a bridge between impacts on one side and the need for adaptive changes on the other. Still, the term vulnerability remains ambiguous, its meaning changing with the scale, focus, and purpose of each assessment. Understanding regional vulnerability has advanced over the past several decades, with studies using a combination of indicators, case studies and analogs, stakeholder-driven processes, and scenario-building methodologies. As regions become increasingly relevant scales of inquiry for bridging the aggregate and the local, it is perhaps most appropriate for every analysis to ask three 'what' questions: 'What/who is vulnerable?', 'What is vulnerability?', and 'Vulnerable to what?' The answers to these questions will yield different definitions of vulnerability as well as different methods for assessing it. Vulnerability research that is primarily quantitative has been dominated by the use of indicators. Regional vulnerability research can also focus on or include stakeholder involvement processes, which are especially helpful in generating agreement on issues and understanding context-specific aspects of vulnerability and prospective adaptations. Creating scenarios is another increasingly useful method that helps researchers, policymakers, and other stakeholders think about the future and plan options for alternative futures. Current research efforts are emphasizing stakeholder involvement, with a dual focus on vulnerability assessment and social learning in the process, and scenario analyses, which may be expert-defined or stakeholder-driven. Both tend to span impacts, vulnerability, and adaptation decision making for a particular place and conditions. © 2011 John Wiley & Sons, Ltd.


Hoffert W.A.,Pacific Northwest National Laboratory | Roberts J.A.S.,Pacific Northwest National Laboratory | Morris Bullock R.,Pacific Northwest National Laboratory | Helm M.L.,Pacific Northwest National Laboratory
Chemical Communications | Year: 2013

We report a synthetic nickel complex containing proton relays, [Ni(PPh2NC6H4OH2)2](BF4)2 (PPh2NC6H4OH2 = 1,5-bis(p-hydroxyphenyl)-3,7-diphenyl-1,5-diaza-3,7-diphosphacyclooctane), that catalyzes the production of H2 in aqueous acetonitrile with turnover frequencies of 750-170,000 s-1 at experimentally determined overpotentials of 310-470 mV. © 2013 The Royal Society of Chemistry.


Zhang J.,University of North Carolina at Chapel Hill | Welch G.,University of Central Florida | Bishop G.,University of North Carolina at Chapel Hill | Huang Z.,Pacific Northwest National Laboratory
IEEE Transactions on Sustainable Energy | Year: 2014

As electricity demand continues to grow and renewable energy increases its penetration in the power grid, real-time state estimation becomes essential for system monitoring and control. Recent development in phasor technology makes it possible with high-speed time-synchronized data provided by phasor measurement units (PMUs). In this paper, we present a two-stage Kalman filter approach to estimate the static state of voltage magnitudes and phase angles, as well as the dynamic state of generator rotor angles and speeds. Kalman filters achieve optimal performance only when the system noise characteristics have known statistical properties (zero-mean, Gaussian, and spectrally white). However, in practice, the process and measurement noise models are usually difficult to obtain. Thus, we have developed the adaptive Kalman filter with inflatable noise variances (AKF with InNoVa), an algorithm that can efficiently identify and reduce the impact of incorrect system modeling and/or erroneous measurements. In stage one, we estimate the static state from raw PMU measurements using the AKF with InNoVa; then in stage two, the estimated static state is fed into an extended Kalman filter to estimate the dynamic state. The simulations demonstrate its robustness to sudden changes of system dynamics and erroneous measurements. © 2010-2012 IEEE.
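
As a schematic reading of the "inflatable noise variances" idea, and not the authors' algorithm, the sketch below shows a linear Kalman measurement update that inflates the measurement variance R when the innovation fails a simple consistency gate, limiting the pull of an erroneous PMU reading on the state estimate.

```python
# Minimal Kalman measurement update with innovation-based R inflation.
# This is a generic illustration of the concept; the AKF with InNoVa in
# the paper is a more complete adaptive filter.
import numpy as np

def kf_update_innova(x, P, z, H, R, gate=3.0):
    """One update; inflate R if the innovation fails a 3-sigma gate."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    d2 = float(y.T @ np.linalg.inv(S) @ y) # normalized innovation squared
    if d2 > gate**2:                       # inconsistent measurement:
        R = R * (d2 / gate**2)             # inflate R, recompute S
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: 2-state estimate, one grossly erroneous scalar measurement.
x = np.array([[1.0], [0.0]]); P = np.eye(2) * 0.1
H = np.array([[1.0, 0.0]]); R = np.array([[0.01]])
x, P = kf_update_innova(x, P, z=np.array([[5.0]]), H=H, R=R)
print(x.ravel())   # pulled toward 5.0 far less than without inflation
```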


Romero-Gomez P.,Pacific Northwest National Laboratory | Richmond M.C.,Pacific Northwest National Laboratory
Renewable Energy | Year: 2014

The occurrence, frequency, and intensity of blade-strike of fish on an axial-flow marine hydrokinetic turbine was simulated using two modeling approaches: a novel scheme combining computational fluid dynamics (CFD) with Lagrangian particle tracking, and a conventional kinematic model. The kinematic model included simplifying assumptions of fish trajectories such as distribution and velocity. The proposed CFD and Lagrangian particle tracking methods provided a more realistic representation of blade-strike mechanisms by integrating the following components: (i) advanced unsteady turbulence simulation using detached eddy simulation (DES), (ii) generation of inflow turbulence based on field data, (iii) moving turbine blades in highly transient flows, and (iv) Lagrangian particles to mimic the potential fish pathways. The test conditions to evaluate the blade-strike probability and fish survival rate were: (i) the turbulence environment, (ii) the fish size, and (iii) the approaching flow velocity. The proposed Lagrangian method simulates potential fish trajectories and their interaction with the rotating turbine with the limitation that it does not include any volitional fish avoidance behavior. Depending upon the scenario, the percentage of particles that registered a collision event ranged from 6% to 19% of the released sample size. Next, by using a set of experimental correlations of the exposure-response for live fish colliding with moving blades, the simulated collision data were used as input variables to estimate the survival rate of fish passing through the operating turbine. The resulting survival rates were greater than 96% in all scenarios, which is comparable to or better than known survival rates for conventional hydropower turbines. The kinematic model predicted higher blade-strike probabilities and mortality rates than the Lagrangian particle-based method did. The Lagrangian method also offers the advantage of expanding the evaluation framework to include additional mechanisms of stress and injury on fish, or other aquatic biota, caused by hydrokinetic turbines and related devices. © 2014 Elsevier Ltd.
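
A back-of-envelope sketch of how the simulated collision fractions combine with an exposure-response correlation into a passage survival rate: survival = 1 - P(strike) x P(mortality | strike). The 20% lethality figure below is a hypothetical stand-in for the experimental dose-response correlations the study uses.

```python
# Survival-rate bookkeeping from simulated collision counts. The
# collision fractions match the 6%-19% range quoted in the abstract;
# the mortality-given-strike value is an assumption for illustration.
def passage_survival(n_collisions, n_particles, mortality_given_strike):
    """Survival fraction for fish passing the operating turbine."""
    p_strike = n_collisions / n_particles
    return 1.0 - p_strike * mortality_given_strike

for collided_fraction in (0.06, 0.19):
    s = passage_survival(collided_fraction * 10000, 10000, 0.20)
    print(f"strike fraction {collided_fraction:.0%} -> survival {s:.1%}")
```

With these inputs even the worst-case strike fraction yields survival above 96%, consistent with the abstract's reported range.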


Xiao J.,Pacific Northwest National Laboratory | Wang X.,Brookhaven National Laboratory | Yang X.-Q.,Brookhaven National Laboratory | Xun S.,Lawrence Berkeley National Laboratory | And 4 more authors.
Advanced Functional Materials | Year: 2011