Bel Air, MD, United States

ARMONK, N.Y., April 25, 2017 /PRNewswire/ -- IBM (NYSE: IBM) and Stone Ridge Technology today announced a performance milestone in reservoir simulation designed to help improve efficiency and lower the cost of production. Working with NVIDIA, the companies shattered previous published...


News Article | September 8, 2016
Site: www.greencarcongress.com

The Department of Energy’s Exascale Computing Project (ECP) announced its first round of funding with the selection of 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations. The awards, totaling $39.8 million, target advanced modeling and simulation solutions to specific challenges supporting key DOE missions in science, clean energy and national security, as well as collaborations such as the Precision Medicine Initiative with the National Institutes of Health’s National Cancer Institute.

Exascale refers to high-performance computing systems capable of at least a billion billion calculations per second, a factor of 50 to 100 times faster than the nation’s most powerful supercomputers in use today. The application efforts will help guide DOE’s development of a U.S. exascale ecosystem as part of President Obama’s National Strategic Computing Initiative (NSCI). DOE, the Department of Defense and the National Science Foundation have been designated as NSCI lead agencies, and ECP is the primary DOE contribution to the initiative.

The ECP’s multi-year mission is to maximize the benefits of high performance computing (HPC) for US economic competitiveness, national security and scientific discovery. In addition to applications, the DOE project addresses hardware, software, platforms and workforce development needs critical to the effective development and deployment of future exascale systems.

First-round funding (see list below) includes a broad set of modeling and simulation applications with a focus on portability, usability and scalability. A key consideration in the selection process was each team’s emphasis on co-design of the applications with the ECP’s ongoing development of hardware, software and computational capabilities, including physical models, algorithms, scalability and overall performance. Projects will be funded in the following strategic areas: energy security, economic security, scientific discovery, climate and environmental science, and healthcare.

Leadership of the Exascale Computing Project comes from six DOE national laboratories: the Office of Science’s Argonne, Lawrence Berkeley, and Oak Ridge national labs, and NNSA’s Los Alamos, Lawrence Livermore, and Sandia national labs.
The full list of application development awards follows:

Full Funding:
Computing the Sky at Extreme Scales, Salman Habib (ANL) with LANL, LBNL
Exascale Deep Learning and Simulation Enabled Precision Medicine for Cancer, Rick Stevens (ANL) with LANL, LLNL, ORNL, NIH/NCI
Exascale Lattice Gauge Theory Opportunities and Requirements for Nuclear and High Energy Physics, Paul Mackenzie (FNAL) with BNL, TJNAF, Boston University, Columbia University, University of Utah, Indiana University, UIUC, Stony Brook, College of William & Mary
Molecular Dynamics at the Exascale: Spanning the Accuracy, Length and Time Scales for Critical Problems in Materials Science, Arthur Voter (LANL) with SNL, University of Tennessee
An Exascale Subsurface Simulator of Coupled Flow, Transport, Reactions and Mechanics, Carl Steefel (LBNL) with LLNL, NETL
QMCPACK: A Framework for Predictive and Systematically Improvable Quantum-Mechanics Based Simulations of Materials, Paul Kent (ORNL) with ANL, LLNL, SNL, Stone Ridge Technology, Intel, Nvidia
Coupled Monte Carlo Neutronics and Fluid Flow Simulation of Small Modular Reactors, Thomas Evans (ORNL, PI) with ANL, INL, MIT
NWChemEx: Tackling Chemical, Materials and Biomolecular Challenges in the Exascale Era, T. H. Dunning, Jr. (PNNL) with Ames, ANL, BNL, LBNL, ORNL, PNNL, Virginia Tech
High-Fidelity Whole Device Modeling of Magnetically Confined Fusion Plasma, Amitava Bhattacharjee (PPPL) with ANL, ORNL, LLNL, Rutgers, UCLA, University of Colorado
Data Analytics at the Exascale for Free Electron Lasers, Amedeo Perazzo (SLAC) with LANL, LBNL, Stanford
Transforming Combustion Science and Technology with Exascale Simulations, Jackie Chen (SNL) with LBNL, NREL, ORNL, University of Connecticut
Cloud-Resolving Climate Modeling of the Earth's Water Cycle, Mark Taylor (SNL) with ANL, LANL, LLNL, ORNL, PNNL, UCI, CSU

The ECP is a collaborative effort of two DOE organizations: the Office of Science and the National Nuclear Security Administration. As part of President Obama’s National Strategic Computing Initiative, ECP was established to develop a capable exascale ecosystem, encompassing applications, system software, hardware technologies and architectures, and workforce development, to meet the scientific and national security mission needs of DOE in the mid-2020s timeframe.


Ghasemi M., Stone Ridge Technology | Gildin E., Texas A&M University
Journal of Petroleum Science and Engineering | Year: 2016

This paper introduces a new approach to constructing an efficient reduced-order model for fluid-flow simulation and optimization in porous media. For nonlinear systems, one of the most common methodologies is proper orthogonal decomposition (POD) combined with the discrete empirical interpolation method (DEIM), owing to its computational efficiency and good approximation quality. Whereas regular POD-DEIM approximates the fine-scale model with a single reduced subspace, the localized POD (LPOD) and localized DEIM (LDEIM) are constructed by computing several local subspaces. Each subspace characterizes a period of the solution in time, and together they not only approximate the high-fidelity model better but also reduce the computational cost of simulation. LPOD and LDEIM use a classification approach to find these regions in the offline computational phase. After each class is obtained, POD and DEIM are applied to construct the basis vectors of the reduced subspace. In the online phase, at each time step, the reduced states and nonlinear functions are used to select the most representative basis vectors for POD and DEIM without requiring fine-scale information. The advantages of LPOD and LDEIM are shown in a numerical example of two-phase flow in porous media. © 2016 Elsevier B.V.
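As background to the abstract above, the sketch below illustrates the standard (non-localized) POD-DEIM building blocks it refers to: a POD basis computed from an SVD of solution snapshots, and the greedy DEIM selection of interpolation indices for the nonlinear term. This is a minimal illustration assuming NumPy; the array names and the energy tolerance are placeholders, not the authors' implementation, and the clustering step that produces the localized (LPOD/LDEIM) variants is omitted.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """POD basis from a snapshot matrix (n_dof x n_snap) via thin SVD.

    Keeps the smallest number of left singular vectors whose squared
    singular values capture the requested fraction of snapshot energy.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cum, energy)) + 1
    return U[:, :k]

def deim_indices(U_f):
    """Greedy DEIM point selection for a nonlinear-term basis U_f (n_dof x m).

    Returns m interpolation indices; online, the nonlinear function only
    needs to be evaluated at these rows.
    """
    m = U_f.shape[1]
    idx = [int(np.argmax(np.abs(U_f[:, 0])))]
    for l in range(1, m):
        # Interpolate the next basis vector at the already selected points
        c = np.linalg.solve(U_f[np.ix_(idx, range(l))], U_f[idx, l])
        r = U_f[:, l] - U_f[:, :l] @ c          # interpolation residual
        idx.append(int(np.argmax(np.abs(r))))   # pick the worst-approximated row
    return np.array(idx)

# Tiny synthetic usage: random snapshots stand in for pressure/saturation fields.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 40))    # state snapshots
F = rng.standard_normal((500, 40))    # nonlinear-term snapshots
Phi = pod_basis(X)                    # reduced state basis
points = deim_indices(pod_basis(F))   # rows where the nonlinearity is sampled
```

In the localized variants described in the paper, the offline snapshots would additionally be clustered, one such basis built per cluster, and the online phase would switch among them using only reduced-space information.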


Grant
Agency: NSF | Branch: Standard Grant | Program: SBIR | Phase: Phase II | Award Amount: 508.00K | Year: 2011

This Small Business Innovation Research (SBIR) Phase II project will build an FPGA-based bioinformatics appliance for processing DNA sequence data faster, at lower cost, and with less power. Over the last decade the cost of sequencing a genome has dropped by six orders of magnitude and the throughput of the process has increased by five orders of magnitude. The trend shows no sign of abating, and industry experts expect the $1,000-genome mark to be reached in the next year. The combination of lower prices and higher throughput has led to what is being called the "data deluge" or the "data tsunami." Taming this deluge has become a major issue in bioinformatics and a principal bottleneck to further advances. The objective of this Phase II project is to contribute a solution to the processing problem based on Field Programmable Gate Arrays (FPGAs), non-conventional computing platforms that operate at significantly higher efficiency as measured in cost and power per unit of performance.

The mechanism of genetic coding, identified by Watson and Crick in 1953, was one of the premier scientific advances of the twentieth century. It took twenty more years to identify a feasible approach to deciphering the genetic code of a particular individual, and twenty more to actually implement it. The first human genome was sequenced in 2003. By 2010 fewer than 1,000 humans had been sequenced, but rapidly decreasing costs and increasing throughput promise that the number will grow exponentially, and medical researchers foresee a day in the near future when the whole population will be sequenced as part of standard medical practice. The advances enabled by partial or full sequencing of the population will bring a revolution to health care, ushering in an era of personal, genetics-based medicine. If successfully deployed, the proposed approach has the potential to address the so-called data deluge and bring about significant savings in both processing time and power consumption.


Grant
Agency: Department of Energy | Branch: | Program: SBIR | Phase: Phase I | Award Amount: 149.57K | Year: 2016

Numerical reservoir simulation is the primary tool a reservoir engineer uses to predict future production from hydrocarbon reservoirs and to develop a plan to maximize production at minimal cost. Compositional reservoir simulation, in particular, is required for an accurate physical description of the subsurface oil and gas in development situations involving gas injection, condensate production, or the coupling of reservoirs with multiple oil qualities. Because this simulation is extremely computationally intensive, engineers are often forced to simplify the models of their reservoirs, sacrificing accuracy and predictive capacity in order to allow their simulations to run in a practical time frame.

To address this performance problem, an extremely high-performance compositional package will be added to the researchers’ existing reservoir simulator, which currently implements a simpler black-oil description of the hydrocarbons. The proposed compositional addition will utilize the exceptional throughput provided by graphics processing units (GPUs), in combination with a novel approach to the implicit solution of the nonlinear partial differential equations, to drastically reduce simulation runtime. It is estimated that typical compositional reservoir models will run ten to one hundred times faster than with currently available reservoir simulators, enabling higher-accuracy predictions in less time.

In Phase I of this project, a compositional equation-of-state module will be developed, validated, optimized for efficient operation on GPUs, and benchmarked against published results. In the first year of Phase II, this module will be integrated into the existing high-performance simulator, and code for the novel nonlinear solution method will be implemented. In the second year of Phase II, the new package will be extensively validated with public benchmarks and proprietary models provided by customers, and the code will be optimized and hardened in preparation for commercial deployment.

The advance in speed provided by the compositional reservoir simulator resulting from this project will enable higher-quality decisions to be made about optimal field development, which can help increase the ultimate recovery from the nation’s finite petroleum resources. The density and cost-effectiveness of the GPU computing platform used for the simulator will also make extremely high-performance computing available in a workstation computer, making HPC much more accessible to smaller corporations, including engineering consulting firms that cannot afford the capital outlay and maintenance costs of a traditional HPC cluster. New computational methods and software will be developed to allow faster and more accurate simulations of the flow of subsurface water and hydrocarbons in petroleum reservoirs. The resulting reservoir simulation software will enable more efficient production of the nation’s petroleum reserves.
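For context on what a compositional equation-of-state module computes, the sketch below solves the Peng-Robinson cubic equation of state for the compressibility factor of a single component. A real compositional simulator extends this to multicomponent mixtures with mixing rules, fugacity-based flash calculations, and GPU kernels; this is an illustrative sketch using standard Peng-Robinson constants, not code from the funded project, and the example values are ordinary textbook critical properties for methane.

```python
import numpy as np

R = 8.314462618  # universal gas constant, J/(mol*K)

def peng_robinson_Z(T, p, Tc, pc, omega):
    """Compressibility factor(s) Z from the Peng-Robinson EOS for one component.

    T, p   : temperature [K] and pressure [Pa]
    Tc, pc : critical temperature [K] and pressure [Pa]
    omega  : acentric factor [-]
    Returns the physically meaningful real roots of the PR cubic
    (smallest ~ liquid-like, largest ~ vapor-like).
    """
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * (R * Tc)**2 / pc * alpha
    b = 0.07780 * R * Tc / pc
    A = a * p / (R * T)**2
    B = b * p / (R * T)
    # PR cubic: Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = np.sort(roots[np.abs(roots.imag) < 1e-10].real)
    return real[real > B]  # physical roots satisfy Z > B

# Example: methane at reservoir-like conditions (Tc = 190.6 K, pc = 4.599 MPa).
Z = peng_robinson_Z(T=350.0, p=20e6, Tc=190.6, pc=4.599e6, omega=0.011)
print("Z roots:", Z)  # above Tc, typically a single real root (one phase)
```

In a compositional simulator, an evaluation like this sits inside every cell of every nonlinear (Newton) iteration, which is why the grant emphasizes GPU throughput and the implicit solution strategy.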


Grant
Agency: Department of Energy | Branch: | Program: SBIR | Phase: Phase II | Award Amount: 999.41K | Year: 2016

Project description identical to the Phase I award above.


Trademark
Stone Ridge Technology | Date: 2014-08-12

Software for simulating the flow of liquids and gas.


Trademark
Stone Ridge Technology | Date: 2015-09-22

Software for simulating the flow of liquids and gas underground for the oil and gas exploration and drilling industries and instruction manuals sold therewith. Software design and development.


Trademark
Stone Ridge Technology | Date: 2012-09-12

Computer hardware and software systems for electronically trading securities.
