CAMBRIDGE, Mass. and EXTON, Pa., May 04, 2017 (GLOBE NEWSWIRE) -- Idera Pharmaceuticals, Inc. (NASDAQ:IDRA), a clinical-stage biopharmaceutical company focused on the discovery, development and commercialization of novel nucleic acid-based therapeutics for oncology and rare diseases, today reported its financial and operational results for the first quarter ended March 31, 2017.

• Completed enrollment in the dose-escalation cohorts of the ipilimumab combination arm of the ongoing Phase 1/2 clinical trial of intratumoral IMO-2125 in PD-1 refractory metastatic melanoma;
• Commenced enrollment in the Phase 2 portion of the clinical trial of 8 mg intratumoral IMO-2125 in combination with ipilimumab in PD-1 refractory melanoma;
• Continued enrollment in the dose-escalation cohorts of the pembrolizumab combination arm of the Phase 1/2 clinical trial of intratumoral IMO-2125 in PD-1 refractory metastatic melanoma;
• Commenced, and will continue to engage in, discussions with global regulatory authorities regarding the path to registration for IMO-2125 in combination with ipilimumab in PD-1 refractory metastatic melanoma;
• Activated the first site for the Phase 1 clinical trial of intratumoral IMO-2125 monotherapy in multiple tumor types;
• Continued accruing patients in the IMO-8400 Phase 2 clinical trial in dermatomyositis, which is being conducted at 22 sites in the U.S. and abroad and is expected to complete enrollment in 2017, with data planned for the first half of 2018; and
• Continued all pre-clinical and IND-enabling activities for IDRA 008, Idera’s first clinical candidate from the Third Generation Antisense (3GA) technology platform, with an IND filing and initiation of a human proof-of-concept clinical trial expected in the first half of 2018.

“We are very focused on bringing the first treatment option to PD-1 refractory metastatic melanoma patients. Having selected our Phase 2 dose earlier than we planned is a good example of this focus,” stated Vincent Milano, Idera’s Chief Executive Officer. “We set very clear objectives at the beginning of this year for each of our programs, and I am very pleased with the output and progress through the first several months of 2017.”

Research and Development Program Updates

IMO-2125 and IMO-8400 are the Company’s lead clinical development drug candidates. IMO-2125 is an oligonucleotide-based agonist of Toll-like receptor (TLR) 9. IMO-8400 is an oligonucleotide-based antagonist of TLRs 7, 8, and 9. The Company also announced, in early 2017, the selection of the first development target from its proprietary 3GA technology platform. The Company plans to disclose the specific target, disease and clinical pathway in the second half of 2017. The Company plans to take the first 3GA candidate into human proof-of-concept studies in 2018.

Toll-like Receptor (TLR) Agonism Immuno-Oncology Program

Idera’s development program in immuno-oncology is based on the rationale that intratumoral injections of IMO-2125, a TLR9 agonist, will activate dendritic cells and modulate the tumor microenvironment to potentiate the anti-tumor activity of checkpoint inhibitors and other immunotherapies. This rationale is supported by pre-clinical data in multiple tumor types. Idera is currently conducting a Phase 2 clinical trial of intratumoral IMO-2125 in combination with ipilimumab, a CTLA-4 antibody, and, in a separate arm, is exploring the combination of intratumoral IMO-2125 with pembrolizumab, an anti-PD-1 antibody. The Phase 1 dose-exploration portion of the trial was conducted at the University of Texas MD Anderson Cancer Center, and the Phase 2 portion is being conducted at multiple centers. The trial is being conducted in patients with relapsed or refractory metastatic melanoma who have failed prior PD-1 therapy.
In the second half of 2016, the Company announced positive preliminary clinical data from the initial dosing cohorts in the ipilimumab arm of the dose-escalation portion of the trial. The Company has completed the dose escalation of intratumoral IMO-2125 in the ipilimumab arm of the trial; the combination appears generally well tolerated across all doses explored, without any dose-limiting toxicity and without reaching a maximally tolerated dose. The Company selected the 8 mg dose for Phase 2, and enrollment is underway. The Company has also continued enrollment in the pembrolizumab combination arm of the trial. Additionally, the Company has begun, and will continue to engage in, discussions with regulatory authorities regarding the path to registration for IMO-2125 in combination with ipilimumab in PD-1 refractory metastatic melanoma patients. Also during the first quarter of 2017, the Phase 1 trial of intratumoral IMO-2125 monotherapy in multiple tumor types was activated, and the first patient is expected to enroll early in the second quarter of 2017.

At the 2017 ASCO-SITC Clinical Immuno-Oncology Symposium, held February 23 through February 25 in Orlando, FL, Marc Uemura, M.D., of MD Anderson Cancer Center presented an update of the ongoing IMO-2125 clinical trial in combination with ipilimumab in PD-1 refractory melanoma. At the 2017 American Association for Cancer Research (AACR) Annual Meeting, held April 1 through April 5 in Washington, DC, Dr. Cara Haymaker of MD Anderson Cancer Center presented an update on translational data in a poster entitled, “Translational evidence of reactivated innate and adaptive immunity with intratumoral IMO-2125 in combination with systemic checkpoint inhibitors from a Phase 1/2 study in patients with anti-PD-1 refractory metastatic melanoma.” Additionally, on the same day, Daqing Wang, Ph.D., Principal Scientist, Idera Pharmaceuticals, presented new IMO-2125 pre-clinical data in a poster entitled, “Local treatment with novel TLR9 agonist IMO-2125 demonstrates anti-tumor activity in preclinical models of pancreatic cancer.”

Third Generation Antisense Platform (3GA)

Idera’s proprietary third-generation antisense (3GA) platform technology is focused on silencing the mRNA associated with disease-causing genes. Idera has designed 3GA oligonucleotides to overcome specific challenges associated with earlier-generation antisense and RNAi technologies, such as immunotoxicities and a less-than-optimal therapeutic index. Over the past two years, Idera has generated 22 unique compounds developed to target specific genes across a wide variety of therapeutic areas, such as rare diseases, oncology, autoimmune disorders, metabolic conditions and diseases driven by a single point mutation. The Company is currently conducting activities ranging from cell culture through IND-enabling toxicology. The current portfolio is designed to create both internal development candidates and partnering opportunities for disease areas outside of Idera’s stated focus. The first partnering endeavor is Idera’s collaboration with GSK to develop an undisclosed 3GA gene target for renal conditions. Idera and GSK entered into the collaboration in late 2015, and GSK’s stated goal is to select a clinical development candidate in the first quarter of 2018.
Additionally, in January 2017, Idera announced the selection of its first internal candidate to enter clinical development. For strategic and competitive purposes, Idera is withholding the specific target until the second half of 2017. Idera has selected a well-established liver target with available, validated pre-clinical animal models and well-understood clinical endpoints, which has the potential for both rare and broader disease applications. Idera is currently conducting the IND-enabling toxicology for this program and expects to file an IND and enter the clinic in 2018.

Toll-like Receptor (TLR) Antagonism Dermatomyositis Clinical Development Program

In late 2015, Idera announced the initiation of a Phase 2 clinical trial of IMO-8400 in patients with dermatomyositis, a rare autoimmune condition that negatively affects skin and may result in debilitating muscle weakness. TLRs have been reported to play an important role in the pathogenesis of the disease. This randomized, double-blind, placebo-controlled Phase 2 trial is expected to enroll 36 patients and is being conducted at 22 clinical sites worldwide. The Company plans to complete enrollment of this trial by the end of 2017 and to have clinical data available in 2018.

Financial Results

First Quarter Results

Net loss applicable to common stockholders for the three months ended March 31, 2017 was $15.1 million, or $0.10 per basic and diluted share, compared to a net loss applicable to common stockholders of $12.8 million, or $0.11 per basic and diluted share, for the same period in 2016. Research and development expenses for the three months ended March 31, 2017 totaled $11.5 million, compared to $9.3 million for the same period in 2016. General and administrative expenses for the three months ended March 31, 2017 and March 31, 2016 were $4.1 million and $3.9 million, respectively. As of March 31, 2017, our cash, cash equivalents and investments totaled $91.3 million.
We currently anticipate that our cash position is capable of funding our operations into the second quarter of 2018.

About Idera Pharmaceuticals, Inc.

Idera Pharmaceuticals is a clinical-stage biopharmaceutical company developing novel nucleic acid-based therapies for the treatment of certain cancers and rare diseases. Idera’s proprietary TLR-targeting technology is used to design synthetic oligonucleotide-based drug candidates that act by modulating the activity of specific TLRs. In addition to its TLR programs, Idera has created a third-generation antisense technology platform using its proprietary technology to inhibit the production of disease-associated proteins by targeting RNA. To learn more about Idera, visit www.iderapharma.com.

This press release contains forward-looking statements within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. All statements, other than statements of historical fact, included or incorporated in this press release, including statements regarding the Company's strategy, future operations, collaborations, intellectual property, cash resources, financial position, future revenues, projected costs, prospects, clinical trials, plans, and objectives of management, are forward-looking statements. The words "believes," "anticipates," "estimates," "plans," "expects," "intends," "may," "could," "should," "potential," "likely," "projects," "continue," "will," and "would" and similar expressions are intended to identify forward-looking statements, although not all forward-looking statements contain these identifying words. Idera cannot guarantee that it will actually achieve the plans, intentions or expectations disclosed in its forward-looking statements, and you should not place undue reliance on the Company's forward-looking statements.
There are a number of important factors that could cause Idera's actual results to differ materially from those indicated or implied by its forward-looking statements. Factors that may cause such a difference include: whether the Company’s cash resources will be sufficient to fund the Company’s continuing operations and the further development of the Company’s programs for the period anticipated; whether interim results from a clinical trial, such as the preliminary results reported in this release, will be predictive of the final results of the trial; whether results obtained in preclinical studies and clinical trials such as the results described in this release will be indicative of the results that will be generated in future clinical trials, including in clinical trials in different disease indications; whether products based on Idera's technology will advance into or through the clinical trial process when anticipated or at all or warrant submission for regulatory approval; whether such products will receive approval from the U.S. Food and Drug Administration or equivalent foreign regulatory agencies; whether, if the Company's products receive approval, they will be successfully distributed and marketed; whether the Company's collaborations will be successful; and such other important factors as are set forth under the caption "Risk factors" in the Company’s Annual Report filed on Form 10-K for the period ended December 31, 2016 and the Quarterly Report on Form 10-Q for the period ended March 31, 2017. Although Idera may elect to do so at some point in the future, the Company does not assume any obligation to update any forward-looking statements and it disclaims any intention or obligation to update or revise any forward-looking statement, whether as a result of new information, future events or otherwise.
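As a quick arithmetic check of the per-share figures in the financial results above, the implied weighted-average share counts can be derived from the reported net losses; these counts are computed here for illustration and are not stated in the release.

```python
# Figures from the first-quarter financial results above.
net_loss_2017 = 15.1e6   # dollars, Q1 2017
net_loss_2016 = 12.8e6   # dollars, Q1 2016
eps_2017 = 0.10          # loss per basic and diluted share, Q1 2017
eps_2016 = 0.11          # loss per basic and diluted share, Q1 2016

# Implied weighted-average share counts (derived, not from the release).
implied_shares_2017 = net_loss_2017 / eps_2017   # ~151 million shares
implied_shares_2016 = net_loss_2016 / eps_2016   # ~116 million shares
print(round(implied_shares_2017 / 1e6), round(implied_shares_2016 / 1e6))
```

The growth in implied share count between the two periods is consistent with the company's larger net loss producing a smaller loss per share.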


News Article | March 1, 2017
Site: www.futurity.org

Scientists have created an artificial synapse, the space over which neurons communicate. It could help computers better recreate the way the human brain processes information and lead to improvements in brain-machine technologies. “It works like a real synapse but it’s an organic electronic device that can be engineered,” says Alberto Salleo, associate professor of materials science and engineering at Stanford University and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.” The new artificial synapse, reported in Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory. This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware. When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings. 
“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” says Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.” The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two. Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, to within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts. Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with accuracy between 93 and 97 percent. Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.
“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” says A. Alec Talin of Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.” This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another, they used about one-tenth as much energy as a state-of-the-art computing system needs to move data from the processing unit to the memory. This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices. Every part of the device is made of inexpensive organic materials. These aren’t found in nature, but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials, and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons. All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lend themselves to being used in biological environments.
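Taking the two energy ratios above at face value, a quick calculation shows how far conventional processor-to-memory data movement is from biological efficiency. This derived figure is illustrative arithmetic only, not a number from the paper.

```python
# Ratios quoted in the article (illustrative arithmetic only):
device_vs_data_move = 1 / 10      # one state switch costs ~1/10 the energy of
                                  # moving data between processor and memory
device_vs_biological = 10_000     # ...but ~10,000x a biological synapse firing

# Implied: a processor-to-memory transfer costs roughly 100,000x what a
# biological synapse needs to fire.
data_move_vs_biological = device_vs_biological / device_vs_data_move
print(f"{data_move_vs_biological:,.0f}x")
```

Seen this way, even a device that is still 10,000 times less efficient than biology closes most of the gap left by conventional data movement.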
Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing. Funding came from the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the US Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo, and the Brazilian National Council.


News Article | February 21, 2017
Site: www.eurekalert.org

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain's efficient design - an artificial version of the space over which neurons communicate, called a synapse. "It works like a real synapse but it's an organic electronic device that can be engineered," said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. "It's an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that's been done before with inorganics." The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory. This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware. When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we've learned. 
The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings. "Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time," said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. "Instead of simulating a neural network, our work is trying to make a neural network." The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two. Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, to within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts. Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network's ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent. Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.
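The array simulation described above can be mimicked in miniature. The sketch below is illustrative only: it assumes a normalized 0-1 conductance range and random weights rather than the paper's 15,000 device measurements, and simply shows why a device with roughly 500 programmable states is a good stand-in for a continuous neural-network weight.

```python
import numpy as np

# Toy model of an array of analog synaptic devices. All numbers here are
# illustrative assumptions, not data from the paper.
N_STATES = 500                      # distinct conductance states per device
G_MIN, G_MAX = 0.0, 1.0             # normalized conductance range

def quantize(weights):
    """Map ideal weights onto the nearest of the device's discrete states."""
    levels = np.linspace(G_MIN, G_MAX, N_STATES)
    idx = np.abs(weights[..., None] - levels).argmin(axis=-1)
    return levels[idx]

rng = np.random.default_rng(0)
ideal = rng.uniform(G_MIN, G_MAX, size=(8, 4))   # ideal trained weights
device = quantize(ideal)                          # weights as stored in hardware

x = rng.uniform(0, 1, size=8)                     # an input vector
y_ideal = x @ ideal                               # output with ideal weights
y_device = x @ device                             # output with quantized weights

# With 500 states the per-weight error is bounded by half a level spacing,
# so the array's output barely differs from the ideal computation.
print(np.max(np.abs(device - ideal)))
```

With only two states per device (a digital bit), the same quantization error would be up to 0.25 per weight; 500 states shrink it by more than two orders of magnitude, which is why analog devices suit this kind of network.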
"More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry," said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. "We've demonstrated a device that's ideal for running these type of algorithms and that consumes a lot less power." This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory. This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices. Every part of the device is made of inexpensive organic materials. These aren't found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain's chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons. All this means it's possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. 
Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing. Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands. This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia's Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.


News Article | November 8, 2016
Site: www.eurekalert.org

New findings suggest the rate at which CO2 is accumulating in the atmosphere has plateaued in recent years because Earth's vegetation is grabbing more carbon from the air than in previous decades. That's the conclusion of a new multi-institutional study led by a scientist from the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab). It's based on extensive ground and atmospheric observations of CO2, satellite measurements of vegetation, and computer modeling. The research is published online Nov. 8 in the journal Nature Communications. To be clear, human activity continues to emit increasing amounts of carbon, and the atmospheric concentration of CO2, now at 400 parts per million (ppm), continues to rise. But the scientists found that between 2002 and 2014, the rate at which CO2 increased in the atmosphere held steady at about 1.9 ppm/year. In addition, the proportion of the CO2 emitted annually by human activity that remains in the atmosphere declined by about 20 percent. This slowdown can't keep pace with emissions, so the overall amount of human-caused CO2 in the atmosphere increased, just not as quickly. And for that, new research suggests, we can thank plants. "This highlights the need to identify and protect ecosystems where the carbon sink is growing rapidly," says Trevor Keenan, a research scientist in Berkeley Lab's Climate & Ecosystem Sciences Division and the corresponding author of the paper. The scientists attribute the stalled CO2 growth rate to an uptick in land-based photosynthetic activity, fueled by rising CO2 levels from fossil fuel emissions. It's a snowball effect: as CO2 levels rise in the atmosphere, photosynthetic activity flourishes and plants take in more carbon, sparking more plant growth, more photosynthesis, and more carbon uptake. They also identified another player. Plant respiration, a process in which plants use oxygen and produce CO2, did not increase as quickly as photosynthesis in recent years. 
This is because plant respiration is sensitive to temperature, and it was affected by the recent slowdown in global warming that was observed most significantly over vegetated land. So, between 2002 and 2014, plants took in more CO2 through photosynthesis, but did not "exhale" more CO2 into the atmosphere through respiration. "These changes decreased the amount of anthropogenic CO2 that stays in the atmosphere, and thus slowed the accumulation of atmospheric CO2," says Keenan. Their findings provide a possible answer to a climatic mystery. The growth rate of atmospheric CO2 climbed steadily during the latter half of the 20th century, from 0.75 ppm/year in 1959 to 1.86 ppm/year in 2002. But Keenan and colleagues discovered an inflection point last year when they analyzed the latest data from the Global Carbon Project, which quantifies carbon emissions and their sources annually. Since 2002, the growth rate has remained flat. This pause is especially surprising because it has occurred as human activity pumps more and more carbon into the atmosphere. All that CO2 must be going somewhere, so the scientists suspected something about the carbon cycle has recently changed in a big way. "We believed one of the planet's main carbon sinks had unexpectedly strengthened. The question was: which one?" says Keenan. The scientists ruled out oceans as a dominant cause because most computer models agree the amount of carbon taken in by oceans has increased steadily in recent years. That left terrestrial ecosystems, which undergo a large year-to-year variability in carbon uptake, and the two biggest influences on this variability are photosynthesis and plant respiration. To study these influences, the scientists used ten "global dynamic vegetation models" that predict how the terrestrial carbon cycle changes over time. They also used a model that incorporates satellite measurements of vegetation cover and plant activity to predict global photosynthesis and respiration rates. 
They validated the model by comparing its results with data from AmeriFlux and FLUXNET, which are networks of eddy-covariance research towers that measure ecosystem carbon, water, and energy fluxes in North and South America. Berkeley Lab manages AmeriFlux for the Department of Energy. Model projections were generated using different scenarios of atmospheric CO2 level, temperature, soil moisture, and other processes. This enabled the researchers to evaluate the impacts of these processes on the planet's terrestrial carbon cycle. Taken together, the models zeroed in on rising CO2 levels as having the biggest impact on photosynthesis and plant respiration. The result is a boost in terrestrial carbon uptake, particularly in tropical and high-latitude ecosystems. Specifically, the models suggest rising CO2 levels caused terrestrial ecosystems to double the rate at which they take in carbon, from between one and two petagrams of carbon per year in the 1950s, to between two and four petagrams of carbon per year in the 2000s. For comparison, human activity emits between nine and ten petagrams of carbon per year (one petagram is one trillion kilograms). The scientists conclude this increase in carbon uptake put the brakes on the growth rate of atmospheric CO2 between 2002 and 2014. "Unfortunately, this increase is nowhere near enough to stop climate change," says Keenan, adding that their results answer questions and pose new ones. "We've shown the increase in terrestrial carbon uptake is happening, and with a plausible explanation why. But we don't know exactly where the carbon sink is increasing the most, how long this increase will last, or what it means for the future of Earth's climate." The research is partly funded by the Laboratory Directed Research and Development Program of Berkeley Lab and the Department of Energy's Office of Science. 
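The article's numbers can be cross-checked with a back-of-the-envelope calculation of the airborne fraction, the share of emitted carbon that stays in the atmosphere. Note that the conversion factor of roughly 2.12 petagrams of carbon per ppm of CO2 is a standard approximation brought in here, not a figure from the article.

```python
# Back-of-the-envelope check of the article's carbon-budget numbers.
PGC_PER_PPM = 2.12                 # standard approximation: PgC per ppm of CO2

growth_rate_ppm = 1.9              # observed atmospheric growth, ppm/year (article)
emissions_pgc = 9.5                # human emissions, midpoint of 9-10 PgC/year (article)

atmospheric_gain_pgc = growth_rate_ppm * PGC_PER_PPM   # ~4.0 PgC/year stays airborne
airborne_fraction = atmospheric_gain_pgc / emissions_pgc
sink_uptake = emissions_pgc - atmospheric_gain_pgc     # ~5.5 PgC/year into sinks

print(round(airborne_fraction, 2), round(sink_uptake, 1))
```

The result, an airborne fraction of roughly 0.42, squares with the article's statement that the proportion of emitted CO2 remaining in the atmosphere has declined by about 20 percent, and the ~5.5 PgC/year absorbed by sinks is consistent with the stated land uptake of two to four petagrams per year plus ocean uptake.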
Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.


News Article | November 23, 2016
Site: www.eurekalert.org

Researchers have revealed new atomic-scale details about pesky deposits that can stop or slow chemical reactions vital to fuel production and other processes. This disruption to reactions is known as deactivation or poisoning. The research team employed a combination of measurements, including X-ray experiments at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), to gather the most detailed information yet on problematic carbon-based deposits called "coke," and to find ways to prevent its formation or reduce its effects. Detailed in the Nov. 23 edition of Scientific Reports, the study focused on deposits on HZSM-5, a common type of catalyst known as a zeolite that is used in biofuel production and in the refinery industry. Catalysts are materials that help to foster and speed up reactions without being used up in the process, and coke deposits in zeolites are a costly problem in petroleum refinement and in petrochemical production. This study was part of a larger R&D effort to convert ethanol and other compounds derived from biomass (organic matter) into renewable fuels. A common way to restore the catalyst's function is to heat and burn away the coke compounds, though this can cause permanent damage to the catalyst's structure. The latest study provides new insight about coke formation, both inside and on the surface of the catalyst, that could aid researchers in designing new catalysts or otherwise changing the chemical components and parameters to improve catalyst performance. The multiple methods that the research team used to explore the chemistry of the coke deposits are helping to set the stage for a new suite of tools in the works at Berkeley Lab's Advanced Light Source, dubbed AMBER (Advanced Materials Beamline for Energy Research), that will combine multiple X-ray and other experimental techniques in one place. 
"The concept of AMBER is to have a variety of techniques available on the same beamline so you can tackle a problem more comprehensively -- not only with X-rays, but also with techniques available at standard labs," said Zahid Hussain, division deputy for scientific support at the ALS. "You will be able to make materials, characterize them, and analyze them in one place. It is 'one-stop shopping,'" Hussain said, adding that scientists will be able to study active chemistry with AMBER, too. Pacific Northwest National Laboratory (PNNL) has partnered with Berkeley Lab to develop AMBER, and the latest study was conducted by PNNL researchers and provides an example of work that would benefit from this "multimodal" approach planned at AMBER, Hussain said. AMBER is expected to be ready for experiments within two years. "We focused on where the coke is on an atomic scale, with high resolution," said Arun Devaraj, a staff scientist at PNNL who was the study's lead author. "We wanted to understand this coking mechanism and where it blocks the reaction and how it blocks it. By using a combination of techniques, we could compare them to one another and form a complete story." An X-ray technique called X-ray absorption spectroscopy, performed at the ALS, along with a form of nuclear magnetic resonance spectroscopy at PNNL, provided a chemical signature for the coke molecules. Also, the research team used a method called atom probe tomography (APT) at PNNL to collect 3-D reconstructed images showing how carbon-containing molecules responsible for coking were distributed in the pores of the catalyst. Researchers found that an uneven distribution of aluminum in the fresh zeolite catalyst caused an uneven distribution of coke deposits during ethanol-conversion reactions. The coke buildup appears to be associated with aluminum-rich regions within the pores of HZSM-5. Karthikeyan K. 
Ramasamy, also a PNNL staff scientist, said this information about the coke formations and locations will be useful as the team explores other ways to convert ethanol, a common additive in gasoline, and other compounds generated from renewable resources into fuel sources compatible with existing fueling systems. He also said a team of researchers at PNNL is working on several pathways to convert biomass into higher-value compounds. "Understanding coke molecules in zeolites will provide broad benefits across the refinery and renewable energy industries, and zeolite is one of the most highly utilized catalysts," he said. Hussain said that in addition to chemistry research, the AMBER station now in development at Berkeley Lab's ALS will be used for battery and artificial photosynthesis work, and to support other renewable energy research. Some of the work was performed at PNNL's Environmental Molecular Sciences Laboratory (EMSL). The Advanced Light Source and EMSL are DOE Office of Science User Facilities. The work was supported by the Bioenergy Technologies Office within DOE's Office of Energy Efficiency & Renewable Energy and by PNNL's Laboratory Directed Research and Development Program.


News Article | November 17, 2016
Site: cleantechnica.com

How important is clever federal government policy in pushing down the costs of developing and installing renewable energy? In Australia, we have gone to lengths to prove it is not the only thing that matters – sheer demand and clever technology mean it gets there in the end, at least in household solar. But in the US, we have just been offered a pretty compelling reminder of how good federal policy can be a game-changer, a job creator and an economic win-win. In an announcement that might be a last-ditch effort to shore up America’s solar industry before a potentially destructive Donald Trump takes the reins in January, the Obama administration’s US Department of Energy has made a new commitment for its SunShot Initiative to cut the cost of solar-generated electricity by another 50% between 2020 and 2030. The goal is to slash the cost of energy over the next decade, bringing the average cost of utility-scale solar in the US down to US$0.03/kWh, and the cost in the sunniest areas to just US$0.02/kWh. That will beat any fossil fuel technology by a wide margin, and it makes you wonder how a President Trump could ever “save” the coal industry as he has promised. The SunShot Initiative was launched in 2011 “with the goal of making solar electricity cost-competitive with traditional energy sources without subsidies by 2020.” But what’s really worth noting is that in just five years the Initiative has achieved more than 90 per cent of that goal to cut the cost of utility-scale solar electricity in the US to US$0.06/kWh. Today, in America, the cost of utility-scale solar averages US$0.07/kWh. The program has also aced more than 70 per cent of its commercial and residential PV cost targets – also in just five years. The SunShot 2030 targets aim to double down: halving utility-scale costs again to US$0.03/kWh, and aiming for US$0.04/kWh for commercial solar and US$0.05/kWh for residential. 
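The "more than 90 per cent of that goal" figure is simple interpolation between a starting cost and the target. The 2010 baseline below is an assumption for illustration (the article only gives the current cost of about US$0.07/kWh and the US$0.06/kWh target):

```python
def progress_to_target(baseline, current, target):
    """Fraction of a cost-reduction goal achieved so far."""
    return (baseline - current) / (baseline - target)

# Assumed utility-scale baseline at SunShot's 2011 launch, in US$/kWh;
# the current and target costs are the article's figures.
BASELINE = 0.27
progress = progress_to_target(BASELINE, current=0.07, target=0.06)
```

With that assumed baseline, around 95% of the cost reduction has already been delivered, matching the article's "more than 90 per cent" claim.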
And as the DoE points out, these are just the targets for areas with an average climate and without subsidies. In the country’s sunnier regions, they’re looking to push prices as low as US$0.02/kWh for utility-scale solar. At these prices, the DoE points to recent modelling suggesting the US could more than double the projected share of nationwide electricity demand met by solar in 2030 and beyond. And there’s no reason to suggest they can’t smash these new targets – Congress willing – like they did the first time around. To support the effort, up to $25 million is being made available to improve PV module and system design, including hardware and software solutions that facilitate the rapid installation and interconnection of PV systems. Another $30 million will be made available for projects that accelerate the commercialisation of products and solutions that can help to drive down the cost of solar energy. And $10 million will be put aside for projects focused on improving the solar irradiance and power forecasts used by utilities. The DoE said it was now accepting applications for the three funding opportunities under the SunShot Initiative’s PV Research and Development Program, Technology to Market Program, and Systems Integration Program. But will SunShot be safe under a Donald Trump administration? There has been some speculation about this since last week’s election result. But there is a level of confidence that the property tycoon’s capitalist leanings and “make America great” mantra will see SunShot for what it is: a policy success story. “The focus of the SunShot initiative is to help the American solar industry and companies to reduce their costs to make them more competitive,” said Christopher Mansour, vice president of federal affairs for America’s Solar Energy Industries Association (SEIA), in an interview with PV Magazine last week. 
“That’s kind of a pro-capitalist thing for the federal government to get involved in.” While Trump doesn’t put solar in the same basket as wind (which he hates), one of his main quibbles with PV is that it’s “too expensive” – a complaint he shares with some of Australia’s leading conservative politicians. But Mansour is confident this misconception can be cleared up. “Part of our job has been to talk with people on (Trump’s) campaign and his transition, to really point out that this is information that he has which may be a little bit out of date. Costs have come down tremendously, and we anticipate that they are going to continue to come down. We are competitive, and we are getting more competitive by the day,” Mansour told PV Magazine. “I think that once we have an opportunity to talk with his people and talk with his incoming team a little bit more, we can show him that this is a giant job creator for the United States. “We are going to go from having 209,000 Americans working in solar at the end of 2015 to 400,000 by the end of 2020. These are good-paying jobs that are attractive to a lot of American workers. I am very confident they will see the advantages of current policy, and where it is helping the solar industry grow.”


News Article | February 17, 2017
Site: www.eurekalert.org

Finding new functional materials is always tricky, and searching for very specific properties among a relatively small family of known materials is even more difficult. But a team from Northwestern Engineering and Los Alamos National Laboratory found a workaround. The group developed a novel workflow combining machine learning and density functional theory calculations to create design guidelines for new materials that exhibit useful electronic properties, such as ferroelectricity and piezoelectricity. Few layered materials have these qualities in certain geometries -- crucial for developing solutions to electronics, communication, and energy problems -- meaning there was very little data from which to formulate the guidelines using traditional research approaches. "When others look for new materials, typically they look in places where they have a lot of data from similar materials. It's not necessarily easy by any means, but we do know how to distill information from large datasets," said James M. Rondinelli, assistant professor of materials science and engineering in the McCormick School of Engineering. "When you don't have a lot of information, learning from the data becomes a difficult problem." The research is described in the paper "Learning from data to design functional materials without inversion symmetry," appearing in the Feb. 17, 2017, issue of Nature Communications. Prasanna Balachandran of Los Alamos National Laboratory in New Mexico is the paper's coauthor. Joshua Young, a former graduate student in Rondinelli's lab, and Turab Lookman, a senior researcher at Los Alamos, also contributed. Supported by funding from the National Science Foundation and the Laboratory Directed Research and Development Program through Los Alamos, Rondinelli's group focused on a class of two-dimensional complex oxides -- or Ruddlesden-Popper oxides. 
These materials exhibit many technology-enabling properties, such as ferroelectricity and piezoelectricity, and can be interfaced with traditional semiconductor materials found in today's electronic devices. "In this family, the data set is puny. Currently, there are only around 10 to 15 materials that are known with the desired properties," Rondinelli said. "This is not much data to work with. Traditionally data science is used for big data problems where there is less of a need for domain knowledge." "Despite the small data nature of the problem," Balachandran added, "our approach worked because we were able to combine our understanding of these materials (domain knowledge) with the data to inform the machine learning." The group therefore began building a database of known materials and using machine learning, a subfield of computer science that builds algorithms capable of learning from data and then using that learning to make better predictions. "With machine learning, we are able to identify chemical compositions that are likely candidates for the material you want to develop," he said. Of the more than 3,000 possible materials investigated, the data science approach found more than 200 promising candidates. Next, the team applied several types of rigorous quantum mechanical calculations. These assessed the atomic structures of the potential materials and checked their stability. "We wondered: Would the material have the predicted structure? Does it have electric polarization? Can it be made in a laboratory?" Rondinelli added. This work narrowed the possibilities to 19, which were recommended for immediate experimental synthesis. Yet there are likely many more possibilities among the 200 candidates. Typically, when developing new materials, the number of possibilities is too large to explore and develop each one. The process of screening potential materials is very expensive, and scientists must be selective in their investments. 
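The screening step described above -- ranking thousands of hypothetical compositions by how much they resemble the handful of known functional materials, then passing only the best to expensive DFT calculations -- can be caricatured in a few lines. Everything below (the features, compositions, and similarity rule) is a toy illustration, not the study's actual trained models:

```python
# Known materials with the desired property, described by two
# hypothetical numeric descriptors (e.g. a tolerance factor and an
# ionic-radius ratio).
known_good = [(1.2, 0.8), (1.1, 0.9)]

# Hypothetical candidate compositions to screen.
candidates = {"A": (1.15, 0.85), "B": (0.60, 1.40), "C": (1.00, 0.95)}

def score(features):
    """Higher (less negative) score = closer to known functional materials."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return -min(dist(features, g) for g in known_good)

# Only the shortlist would go on to costly quantum mechanical checks.
shortlist = [name for name, f in candidates.items() if score(f) > -0.2]
```

The real workflow replaces this distance rule with trained classifiers and validates survivors with density functional theory, but the funnel shape -- cheap model first, expensive physics second -- is the same.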
"Our work has the potential to help save enormous amounts of time and resources," Balachandran said. "Instead of exploring all possible materials, only those materials that have the potential to be promising will be recommended for experimental investigation."


News Article | October 12, 2016
Site: www.chromatographytechniques.com

Mars' largest moon, Phobos, has captured public imagination and been shrouded in mystery for decades. But numerical simulations recently conducted at Lawrence Livermore National Laboratory (LLNL) have shed some light on the enigmatic satellite. The dominant feature on the surface of Phobos (22 kilometers across) is Stickney crater (9 km across), a mega crater that spans nearly half the moon. The crater lends Phobos a physical resemblance to the planet-destroying Death Star in the film "Star Wars." But over the decades, understanding the formation of such a massive crater has proven elusive for researchers. For the first time, physicists at LLNL have demonstrated how an asteroid or comet impact could have created Stickney crater without destroying Phobos completely. The research, which also debunks a theory regarding the moon's mysterious grooved terrain, was published in Geophysical Research Letters. "We've demonstrated that you can create this crater without destroying the moon if you use the proper porosity and resolution in a 3D simulation," said Megan Bruck Syal, an author on the paper and member of the LLNL planetary defense team. "There aren't many places with the computational resources to accomplish the resolution study we conducted." The study showed that there is a range of possible solutions for the size and speed of the impactor, but Syal says one possible scenario is an object 250 meters across traveling at close to 6 kilometers per second (km/s). Previous studies used 2D simulations at lower resolutions, and they were ultimately unable to replicate Stickney crater successfully. Additionally, prior studies failed to account for the porosity of Phobos' crust in their calculations, which is critical given that Phobos is less dense than the Martian surface. While the simulations show how a massive impact could have created Stickney crater, they also appear to disprove a related theory. 
Some have theorized that the hundreds of parallel grooves that appear to radiate from the crater were caused by the impact. However, the simulations in this study show that fracture patterns in the crust of Phobos would look nothing like the straight, long, parallel grooves. On the other hand, the simulations do support the possibility of slow-rolling boulders mobilized by the impact causing the grooves, though more study would be required to fully test that theory. The research served as a benchmarking exercise for the LLNL planetary defense team in their use of an open-source code developed at LLNL called Spheral. The team uses codes like Spheral to simulate various methods of deflecting potentially hazardous Earth-bound asteroids. "Something as big and fast as what caused the Stickney crater would have a devastating effect on Earth," Syal said. "If NASA sees a potentially hazardous asteroid coming our way, it will be essential to make sure we're able to deflect it. We'll only have one shot at it, and the consequences couldn't be higher. We do this type of benchmarking research to make sure our codes are right when they will be needed most." The foundation this research is built upon is decades of investment in LLNL computational capabilities used to ensure the safety, security and effectiveness of the U.S. nuclear deterrent in the absence of nuclear testing - commonly known as stockpile stewardship. This research was also funded in part by the Laboratory Directed Research and Development Program at LLNL. The study was spearheaded by Jared Rovny, a summer student visiting from Yale University. Other co-authors include LLNL computational physicist Mike Owen, who supported the research by mentoring Rovny and aligning the study to benchmark the Spheral code, and Paul Miller, who leads the planetary defense team at LLNL. Syal conducted follow-up modeling to confirm the findings and wrote the published paper. 
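Syal's "devastating effect on Earth" remark can be put in rough numbers with a back-of-envelope kinetic-energy estimate for the scenario in the article: a 250-meter object at 6 km/s. The impactor density below is an assumption (2000 kg/m^3, typical of a porous rocky body); the paper's 3D SPH simulations are of course far more detailed than this:

```python
import math

DIAMETER_M = 250.0        # impactor size from the article's scenario
SPEED_MS = 6_000.0        # 6 km/s, from the article
DENSITY = 2000.0          # assumed impactor density, kg/m^3
TNT_MEGATON_J = 4.184e15  # energy released by one megaton of TNT

# Kinetic energy E = (1/2) m v^2, with m from a spherical impactor.
radius = DIAMETER_M / 2.0
mass = (4.0 / 3.0) * math.pi * radius**3 * DENSITY
energy_j = 0.5 * mass * SPEED_MS**2
energy_mt = energy_j / TNT_MEGATON_J  # tens of megatons of TNT
```

Even with this crude estimate the impact carries on the order of 70 megatons of TNT equivalent, which is why deflection codes like Spheral are benchmarked against events of this scale.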
She will give a talk on the paper in Pasadena this month during the annual meeting of the American Astronomical Society's Division for Planetary Sciences.


News Article | October 6, 2016
Site: www.greencarcongress.com

The US Department of Energy’s Bioenergy Technologies Office (BETO) has established the Agile BioFoundry (ABF)—a new consortium of nine Energy Department national laboratories working to standardize and streamline the entire biomanufacturing pipeline by uniting computer-assisted biological pathway design, process integration, process scale-up, and machine learning. Lawrence Berkeley National Laboratory is leading the consortium. The national laboratories have a unique capability to provide facilities and tools and to share the cumulative knowledge gained to accelerate progress in both the public and private sectors. This coordinated public effort will aid in efficiently realizing BETO’s goal to commercialize biofuels and renewable chemicals. The ABF, with $35 million requested for fiscal year 2017, will achieve its goals by developing a system for product development and commercialization that will be broadly applicable to many bioproducts. The system will be an iterative Design-Build-Test-Learn process. The ABF hopes to achieve a 40% reduction in energy intensity and a 60% reduction in carbon intensity over current manufacturing processes, and also has a goal of reducing product time to market by 50%. The ABF plans to serve as a resource for data, software tools, analytical and biological protocols, and economic analyses, disseminating information through its website. In addition, the ABF will seek to actively partner with industry to ensure that the latest developments in synthetic biology are broadly accessible, enabling bio-derived products to reach the market more quickly. 
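The iterative Design-Build-Test-Learn process the article describes has a simple control-flow shape: propose a design, construct it, measure it, and feed the measurement back into the next proposal. A schematic sketch of that loop; the stage functions are placeholders, not ABF software:

```python
def dbtl_cycle(design, build, test, learn, target, max_rounds=5):
    """Run Design-Build-Test-Learn rounds until a performance target is met.

    design(model) -> candidate; build(candidate) -> strain;
    test(strain) -> measured performance; learn(model, candidate, result)
    -> updated model fed into the next design round.
    """
    model = None
    result = 0.0
    for round_no in range(1, max_rounds + 1):
        candidate = design(model)          # Design
        strain = build(candidate)          # Build
        result = test(strain)              # Test
        model = learn(model, candidate, result)  # Learn
        if result >= target:
            return round_no, result
    return max_rounds, result

# Toy demonstration: each round proposes a slightly better candidate,
# so the loop converges on the target after a few iterations.
rounds, score = dbtl_cycle(
    design=lambda model: (model or 0) + 1,
    build=lambda candidate: candidate,
    test=lambda strain: strain * 0.2,
    learn=lambda model, candidate, result: candidate,
    target=0.5,
)
```

The ABF's version of each stage (pathway design, strain construction, analytics, machine learning) is far richer, but the feedback loop is the organizing idea.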
Development of the ABF is part of BETO’s Conversion Research and Development Program, which funds projects to develop technologies for converting biomass feedstocks into viable biofuels and bioproducts, moving toward an economically competitive and environmentally sustainable bioeconomy.

