International Research Institute of Stavanger
Huseby O.,Institute for Energy Technology |
Noevdal G.,International Research Institute of Stavanger |
Sagen J.,Institute for Energy Technology
SPE Journal | Year: 2010
Natural tracers (geochemical and isotopic variations in injected and formation waters) are a largely unused source of information in reservoir modeling. Conventional interwell tracer tests, in contrast, are an established method for identifying flow patterns, yet they too are typically underexploited: tracer-test evaluations are often performed qualitatively and are rarely compared systematically with simulation results. To integrate natural- and conventional-tracer data in a reservoir-modeling workflow, we use the ensemble Kalman filter (EnKF), which has recently gained popularity as a method for history matching. The EnKF updates parameters and dynamic states online, using an ensemble of model representations to represent model uncertainty. In this paper, we include conventional water tracers as well as natural tracers (i.e., geochemical variations) in the EnKF approach. The methodology is demonstrated by estimating permeability and porosity fields in a synthetic field case based on a real North Sea field example. The results show that conventional tracers and geochemical variations yield additional improvement in the estimates and that the EnKF approach is well suited as a tool for this process. The principal benefit of the methodology is improved models and forecasts from reservoir simulations through optimal use of conventional and natural tracers. Some natural-tracer data (e.g., scale-forming ions and toxic compounds) are already monitored for other purposes, and exploiting such data can yield significant reservoir-model improvement at small cost. Copyright © 2010 Society of Petroleum Engineers.
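The EnKF analysis step described in the abstract can be sketched in a few lines. This is a generic, illustrative implementation (the function and variable names are my own, not from the paper): tracer concentrations enter simply as extra entries in the observation vector, and the same Kalman-gain update then adjusts the permeability and porosity fields of every ensemble member.

```python
import numpy as np

def enkf_update(ensemble, observations, obs_operator, obs_error_std, rng):
    """One EnKF analysis step: shift each ensemble member toward the
    (perturbed) observations using sample covariances.

    ensemble      : (n_params, n_members) array of uncertain parameters/states
    observations  : (n_obs,) measured data, e.g. tracer concentrations
    obs_operator  : maps one member's vector to its predicted data
    obs_error_std : standard deviation of the observation noise
    """
    n_params, n_members = ensemble.shape
    predicted = np.column_stack([obs_operator(ensemble[:, j]) for j in range(n_members)])
    # Anomalies (deviations from the ensemble mean)
    state_anom = ensemble - ensemble.mean(axis=1, keepdims=True)
    data_anom = predicted - predicted.mean(axis=1, keepdims=True)
    # Sample cross- and data-covariances, then the Kalman gain
    cov_sd = state_anom @ data_anom.T / (n_members - 1)
    cov_dd = data_anom @ data_anom.T / (n_members - 1)
    gain = cov_sd @ np.linalg.inv(cov_dd + obs_error_std**2 * np.eye(len(observations)))
    # Perturb observations independently per member (stochastic EnKF)
    perturbed = observations[:, None] + rng.normal(0.0, obs_error_std,
                                                   (len(observations), n_members))
    return ensemble + gain @ (perturbed - predicted)
```

In a real workflow the observation operator would be a reservoir-simulator run per member; here it can be any callable, which is what makes the method non-intrusive.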
Fjelde I.,International Research Institute of Stavanger
SPE Reservoir Evaluation and Engineering | Year: 2010
The coinjection of carbon dioxide (CO2) and a CO2-foaming agent to form stable CO2 foam has been found to improve the sweep efficiency during CO2-foam processes in carbonate reservoirs. However, only a few studies of CO2-foam transport in fractured rock have been reported. In fractured chalk reservoirs with low matrix permeability, the aqueous CO2-foaming-agent solution will flow mainly through the fractures. The total retention of the CO2-foaming agent in the reservoir will depend on how much of the matrix is contacted by the CO2-foaming-agent solution during the project period and, therefore, on its transport rate into the matrix. This paper presents results from a series of static and flow-through experiments carried out to investigate the transport and retention phenomena of CO2-foaming agents in fractured chalk models at 55°C. Fractured chalk models with 100% water saturation and with residual-oil saturation after waterflooding were used. In the static experiments, the fractured model was created by transferring core plugs with different diameters into steel cells with an annulus space around the plugs. The fracture volume was filled with foaming-agent solutions with different initial concentrations. The experiments were carried out in parallel, with liquid samples regularly taken from the fracture above the plugs and analyzed for the foaming-agent concentration. The experiments were monitored until the concentrations in the fractures reached a plateau. At specific and constant concentrations of the foaming agent in the fractures, the plugs were demounted and samples drilled out along the whole lengths of the plugs from the outer, middle, and center portions. These samples were analyzed for foaming-agent concentration to determine how much of it had penetrated the matrix. Results indicate that the transport of the foaming agent decreases toward the center of the plugs both at 100% water saturation and at residual-oil saturation after waterflooding.
Modeling of the static experiments using the Computer Modelling Group (CMG) commercial reservoir simulator STARS was also carried out to determine the transport rate of the foaming agent. A good history match between experimental and modeling results was obtained. In the flow-through experiments, the fractured model was created by drilling a concentric hole through the center of the plug. The hole, simulating an artificial fracture, was filled with glass beads of different dimensions. Fractured models with different effective permeability were flooded with equal volumes of the foaming-agent solution. Results show that the transport of CO2-foaming agent into the matrix is slower in the fractured models than in the homogeneous models with viscous flooding of the rock. © 2010 Society of Petroleum Engineers.
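The matrix uptake measured in the static experiments is essentially diffusive transport from a fracture held at a fixed concentration into the plug. A minimal 1D finite-difference sketch of that process (all names and parameter values are illustrative and are not taken from the STARS model in the paper):

```python
import numpy as np

def matrix_uptake(c_fracture, diff_coeff, length, n_cells, dt, t_end):
    """Explicit finite-difference sketch of 1D diffusion of a foaming
    agent from a fracture (held at c_fracture) into a chalk-matrix plug
    of the given half-length, with a no-flow condition at the centre."""
    dx = length / n_cells
    alpha = diff_coeff * dt / dx**2
    assert alpha <= 0.5, "explicit scheme is unstable for this dt"
    c = np.zeros(n_cells)  # matrix initially free of foaming agent
    for _ in range(int(t_end / dt)):
        # ghost cells: Dirichlet at the fracture face, mirror at the centre
        padded = np.concatenate(([c_fracture], c, [c[-1]]))
        c = c + alpha * (padded[2:] - 2.0 * padded[1:-1] + padded[:-2])
    return c

# Example: 2 cm half-length, D = 1e-9 m2/s, roughly 11 days of contact
profile = matrix_uptake(1.0, 1e-9, 0.02, 50, 50.0, 1e6)
```

The returned profile decreases monotonically toward the plug centre, which is the qualitative behaviour the drilled-out concentration samples showed.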
Fjelde I.,International Research Institute of Stavanger |
Omekeh A.V.,International Research Institute of Stavanger |
Sokama-Neuyam Y.A.,University of Stavanger
Proceedings - SPE Symposium on Improved Oil Recovery | Year: 2014
Reduction of the salinity of the injection water has been found to improve oil recovery in some sandstone reservoir rocks, while for other sandstone rocks disappointing results have been obtained. No mechanism has been widely accepted as the reason for the improved oil recovery in low-salinity waterflooding (LSWF). Since alteration of wettability toward more water-wet conditions has been reported in promising cases, desorption of crude-oil components from the mineral surfaces should occur during LSWF. Retention of oil components onto sandstone reservoir rock saturated with brines of different compositions has been studied by injecting crude oils of different compositions, and results from experiments and simulation of rock-brine interactions have been compared. The study shows that the compositions of both the low-salinity water and the crude oil are important for the retention of polar oil components. For the original crude oil, with the lowest base/acid ratio, the retention of oil components varied with brine composition even when the total salinity was the same. For this crude oil, the measured decrease in retention was in accordance with the simulated decrease in the total concentration of divalent cations on clay surfaces, indicating that retention was dominated by bonding of carboxylic groups to clay surfaces through cation bridging. For the treated crude oil with a reduced amount of acidic components (highest base/acid ratio), the retention of polar oil components was not sensitive to brine salinity/composition and was higher than for the original oil; for this oil, retention was probably due to direct adsorption of basic components. Retention was therefore sensitive to the concentration of acids in the crude oil, increasing with increasing base/acid ratio.
It has been shown that the ionic composition of the low-salinity brine and the composition of the crude oil are important for the retention of polar oil components and thereby for wettability. The optimum low-salinity water composition will depend on the formation-water composition, the mineral composition/distribution of the reservoir rock, and the oil composition; for tertiary LSWF it will also depend on the composition of the earlier injection brine. Copyright 2014, Society of Petroleum Engineers.
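The link between brine composition and divalent-cation occupancy on the clay can be illustrated with a Gapon-type Na/Ca exchange relation. This is a schematic sketch, not the geochemical model used in the study; the selectivity coefficient and brine concentrations are assumed, illustrative values:

```python
import math

def divalent_clay_fraction(na_molar, ca_molar, k_gapon=0.7):
    """Equivalent fraction of divalent (Ca2+) cations on the clay
    exchanger from a Gapon-type Na/Ca exchange relation:
        Na-X / Ca-X = K_G * [Na+] / sqrt([Ca2+])
    k_gapon is an assumed selectivity coefficient, for illustration only."""
    ratio = k_gapon * na_molar / math.sqrt(ca_molar)  # Na-X : Ca-X equivalents
    return 1.0 / (1.0 + ratio)

# Two brines with the same Na+ level but different Ca2+ content:
high_ca = divalent_clay_fraction(na_molar=0.45, ca_molar=0.01)
low_ca  = divalent_clay_fraction(na_molar=0.45, ca_molar=0.001)
```

Diluting the divalent content of the brine lowers the divalent occupancy on the clay, which is consistent with the abstract's observation that reduced divalent-cation concentration on clay surfaces accompanied reduced retention of carboxylic oil components.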
Yang W.X.,International Research Institute of Stavanger |
Escalona A.,University of Stavanger
73rd European Association of Geoscientists and Engineers Conference and Exhibition 2011: Unconventional Resources and the Role of Technology. Incorporating SPE EUROPEC 2011 | Year: 2011
The Guyana basin, located offshore Guyana and Suriname, lies just southeast of the giant oil and gas fields of Trinidad and Venezuela. To date, no major discoveries have been made, but hydrocarbon seeps along the coastline and oil shows on the continental shelf indicate a working petroleum system. The basin is essentially underexplored; exploration wells drilled in the 1970s and 1980s, together with ODP wells drilled in 2003 on the Demerara Rise, encountered a world-class Late Cretaceous source rock (Meyers et al., 2006). A key problem observed in the wells is that most of the source rocks appear to be immature. In this study, we used interpreted 2D vintage seismic data together with well information to create a pseudo-3D basin model of the continental shelf of the Guyana basin in order to better understand and evaluate the maturity of the source rock in time and space.
News Article | December 16, 2015
Imagine that someone hacks into the network companies that provide electric power, shutting down their systems. This would knock out the electricity in the entire region supplied by the network company, such as Lyse Elnett in Rogaland, Agder Energi Nett in the Agder counties, or Hafslund Nett, which supplies electricity to 1.5 million people in the Oslo area. A simultaneous attack on several network companies could affect large parts of the country. Trains would stop, planes would not be able to land, there would be no electricity, the water supply would stop and the sewerage system would break down. Hospitals have emergency generators and would manage for a while, but over a longer period of time, this would be critical for life and health. Add a cold winter, and it would not take much imagination to visualise the effects of such an attack. "This is the worst that could happen—a worst case scenario. The consequences for society would be huge", says Ruth Østgaard Skotnes. She is a researcher at the International Research Institute of Stavanger (IRIS) and the Centre for Risk Management and Societal Safety (SEROS) at the University of Stavanger, and has recently completed a PhD in safety and security management of electric power supply networks. You may think that this sounds like a scene from an unrealistic disaster movie. "That's not the case", says Skotnes. "We must prepare ourselves for the improbable, and the threat to Norwegian energy providers is very much a reality." This is according to reports from, among others, the Norwegian National Security Authority (NSM), the Norwegian Police Security Service (PST), and the Norwegian Government's Cyber Security Strategy for Norway from 2012. "Everything indicates that we now must expect sophisticated attacks aimed at critical societal information, including information and communication technology (ICT) systems that operate industrial processes and critical infrastructure", says Skotnes.
Over the past decades, modern ICT has been introduced for operation of the various parts of the electric power supply. Production and distribution systems, previously operated manually, are now controlled and monitored from a few control centers. Process control systems were traditionally closed systems; however, increased connectivity via standard ICT technologies has made these formerly isolated systems vulnerable to threats and risks they have not been exposed to before. Skotnes was therefore surprised to discover that the network companies themselves perceived this risk as relatively low, despite the fact that many of the companies had experienced attempts to hack into their process control systems. Some even reported daily attacks from the outside. "Many network companies put too much trust in their own systems, and take it for granted that attacks will not be successful. This contrasts strongly with what research and reports from the authorities tell us", says Skotnes. Another reason may be that the network companies find it difficult to prepare for something that might happen, but hasn't happened yet. Up until now, Norway has been spared harmful cyber attacks on critical infrastructure. However, worldwide there have been several incidents of cyber attacks during the last few years. The best known of these, Stuxnet, was discovered in 2010. This was the first known computer worm that could spy on and reprogramme industrial control systems. Stuxnet is believed to have been used against, and to have damaged, the Iranian nuclear programme. The attacks on the twin towers in New York, the bombing of the Government Quarter in Oslo and the subsequent attack on Utøya on 22 July 2011 have taught us that the unthinkable can happen. "We need to be better prepared for attacks against critical infrastructure than we currently are in Norway", says Skotnes. So, how can the network companies protect themselves against cyber attacks?
System updates and antivirus software are important measures, but vigilant employees and management commitment are just as important, according to Skotnes. Last year, around 50 companies in the oil and energy sector were exposed to the biggest cyber attack in Norway's history. In Statnett, the transmission system operator in Norway, the attempt was discovered by a vigilant employee, which meant that the company was able to prevent malicious software from being installed or run on computers within the company. "My study showed a strong relationship between management commitment to ICT safety and security, and the implementation of awareness creation and training measures for ICT safety and security in the network companies. I was told that it was difficult to implement measures if the management was not committed to the issue", says Skotnes. Involving the employees in the development of ICT safety and security measures can be a useful way to raise awareness in the network companies. This can make it easier for the employees to realize the benefits of these safety and security measures, rather than dismissing them as obstacles to practicality and efficiency in their work. Her thesis shows that there exist at least two different subcultures in today's network companies, depending on whether the people operating the process control systems have an education in ICT or a background from the electricity industry. The latter group generally focuses on keeping the systems running without interruption. Downtime is not acceptable, and the most important thing for this group is a constant supply of electricity. "Supply reliability is important, but this way of thinking has to change so that everyone understands how crucial ICT safety and security is", says Skotnes. Power production in Norway is more difficult to affect, so Skotnes chose to concentrate on power distribution, which is considered the most critical for societal safety.
She collected data for her thesis through a survey questionnaire that she sent to all the 137 network companies operating in Norway in 2012. Skotnes also interviewed representatives from the contingency planning department in the Norwegian Water Resources and Energy Directorate (NVE), which is responsible for safety, security, contingency planning and supervision in the Norwegian electric power supply sector. By 2019, smart meters (Advanced Metering Infrastructure) will be installed in all Norwegian households. Smart meters will provide increased capacity, reliability and efficiency of electric power supply, but will also increase the vulnerability to cyber attacks. "Society's vulnerability will increase because the number of possible entry points and paths for attacks is continually increasing. This is why we as a society need to take such threats seriously", says Skotnes. More information: Ruth Østgaard Skotnes: Challenges for safety and security management of network companies due to increased use of ICT in the electric power supply sector. Doctoral thesis, the Faculty of Social Science at the University of Stavanger, 2015
News Article | March 23, 2016
Jennifer Purcell watches intently as the boom of the research ship Skookum slowly eases a 3-metre-long plankton net out of Puget Sound near Olympia, Washington. The marine biologist sports a rain suit, which seems odd for a sunny day in August until the bottom of the net is manoeuvred in her direction, its mesh straining from a load of moon jellyfish (Aurelia aurita). Slime drips from the bulging net, and long tentacles dangle like a scene from an alien horror film. But it does not bother Purcell, a researcher at Western Washington University's marine centre in Anacortes. Pushing up her sleeves, she plunges in her hands and begins to count and measure the messy haul with an assuredness borne from nearly 40 years studying these animals. Most marine scientists do not share her enthusiasm for the creatures. Purcell has spent much of her career locked in a battle to find funding and to convince ocean researchers that jellyfish deserve attention. But she hasn't had much luck. One problem is the challenges that come with trying to study organisms that are more than 95% water and get ripped apart in the nets typically used to collect other marine animals. On top of that, outside the small community of jellyfish researchers, many biologists regard the creatures as a dead end in the food web — sacs of salty water that provide almost no nutrients for predators except specialized ones such as leatherback sea turtles (Dermochelys coriacea), which are adapted to consume jellies in large quantities. “It's been very, very hard to convince fisheries scientists that jellies are important,” says Purcell. But that's starting to change. Among the crew today are two fish biologists from the US National Oceanic and Atmospheric Administration (NOAA) whose research had previously focused on the region's rich salmon stocks. 
A few years ago, they discovered that salmon prey such as herring and smelt tend to congregate in different areas of the sound from jellyfish1, and they are now trying to understand the ecological factors at work and how they might be affecting stocks of valuable fish species. But first, the researchers need to know how many jellyfish are out there. For this, the team is taking a multipronged approach. They use a seaplane to record the number and location of jellyfish aggregations, or 'smacks', scattered about the sound. And on the research ship, a plankton net has been fitted with an underwater camera to reveal how deep the smacks reach. Correigh Greene, one of the NOAA scientists on board, says that if salmon populations are affected in some way by jellyfish, "then we need to be tracking them". From the fjords of Norway to the vast open ocean waters of the South Pacific, researchers are taking advantage of new tools and growing concern about marine health to probe more deeply into the roles that jellyfish and other soft-bodied creatures have in the oceans. Initially this was driven by reports of unusually large jellyfish blooms wreaking havoc in Asia, Europe and elsewhere, which triggered fears that jellyfish were taking over the oceans. But mounting evidence is starting to convince some marine ecologists that gelatinous organisms are not as irrelevant as previously presumed. Some studies show that the animals are important consumers of everything from microscopic zooplankton to small fish; others suggest that jellies have value as prey for a wide range of species, including penguins, lobsters and bluefin tuna. There's also evidence that they might enhance the flow of nutrients and energy between the species that live in the sunlit surface waters and those in the impoverished darkness below. "We're all busy looking up at the top of the food chain," says Andrew Jeffs, a marine biologist at the University of Auckland in New Zealand.
“But it's the stuff that fills the bucket and looks like jelly snot that is actually really important in terms of the planet and the way food chains operate.” The animals in question are descendants of some of Earth's oldest multicellular life forms. The earliest known jellyfish fossil dates to more than 550 million years ago, but some researchers estimate that they may have been around for 700 million years, appearing long before fish. They're also surprisingly diverse. Some are tiny filter feeders that can prey on the zooplankton that few other animals can exploit. Others are giant predators with bells up to two metres in diameter and tentacles long enough to wrap around a school bus — three times. Jellyfish belong to the phylum Cnidaria and have stinging cells that are potent enough in some species to kill a human. Some researchers use the term jellyfish, or 'jellies' for short, to refer to all of the squishy forms in the ocean. But others prefer the designation of 'gelatinous zooplankton' because it reflects the amazing diversity among these animals that sit in many different phyla: some species are closer on the tree of life to humans than they are to other jellies. Either way, the common classification exists mainly for one dominant shared feature — a body plan that is based largely on water. This structure can make gelatinous organisms hard to see. Many are also inaccessible, living far out at sea or deep below the light zone. They often live in scattered aggregations that are prone to dramatic population swings, making them difficult to census. Lacking hard parts, they're extremely fragile. “It's hard to find jellyfish in the guts of predators,” says Purcell. “They're digested very fast and they turn to mush soon after they're eaten.” For most marine biologists, running into a mass of jellyfish is nothing but trouble because their collection nets get choked with slime. 
“It's not just that we overlooked them,” says Jonathan Houghton at Queen's University Belfast, UK. “We actively avoided them.” But over the past decade and a half, jellyfish have become increasingly difficult to ignore. Enormous blooms along the Mediterranean coast, a frequent summer occurrence since 2003, have forced beaches to close and left thousands of bathers nursing painful stings. In 2007, venomous jellyfish drifted into a salmon farm in Northern Ireland, killing its entire stock of 100,000 fish. On several occasions, nuclear power plants have temporarily shut down operations owing to jelly-clogged intake pipes. The news spurred scientists to take a closer look at the creatures. Marine biologist Luis Cardona at the University of Barcelona in Spain had been studying mostly sea turtles and sea lions. But around 2006, he shifted some of his attention to jellyfish after large summer blooms of mauve stingers (Pelagia noctiluca) had become a recurring problem for Spain's beach-goers. Cardona was particularly concerned by speculation that the jellyfish were on the rampage because overfishing had reduced the number of predators. “That idea didn't have very good scientific support,” he says. “But it was what people and politicians were basing their decisions on, so I decided to look into it.” For this he turned to stable-isotope analysis, a technique that uses the chemical fingerprint of carbon and nitrogen in the tissue of animals to tell what they have eaten. When Cardona's team analysed 20 species of predator and 13 potential prey, it was surprised to find that jellies had a major role in the diets of bluefin tuna (Thunnus thynnus), little tunny (Euthynnus alletteratus) and spearfish (Tetrapturus belone)2. In the case of juvenile bluefins, jellyfish and other gelatinous animals represented up to 80% of the total food intake. “According to our models they are probably one of the most important prey for juvenile bluefin tuna,” says Cardona. 
Some researchers have challenged the findings, arguing that stable-isotope results can't always distinguish between prey that have similar diets — jellyfish and krill both eat phytoplankton, for instance. “I'm sure it's not true,” Purcell says of the diet analysis. Fast-moving fish, she says, “have the highest energy requirements of anything that's out there. They need fish to eat — something high quality, high calorie.” But Cardona stands by the results, pointing out that stomach-content analyses on fish such as tuna have found jellyfish, but not krill. What's more, he conducted a different diet study3 that used fatty acids as a signature, which supported his earlier results on jellyfish, he says. “They're probably playing a more relevant role in the pelagic ecosystem of the western Mediterranean than we originally thought.” Researchers are reaching the same conclusion elsewhere in the world. On an expedition to Antarctica in 2010–11, molecular ecologist Simon Jarman gathered nearly 400 scat samples to get a better picture of the diet of Adélie penguins (Pygoscelis adeliae), a species thought to be threatened by global warming. Jarman, who works at the Australian Antarctic Division in Kingston, reported in 2013 that DNA analysis of the samples revealed that jellyfish are a common part of the penguin's diet4. Work that has yet to be published suggests the same is true for other Southern Ocean seabirds. “Albatrosses, gentoo penguins, king penguins, macaroni and rockhopper penguins — all of them eat jellyfish to some extent,” says Jarman (see 'Lean cuisine'). “Even though jellyfish may not be the most calorifically important food source in any area, they're everywhere in the ocean and they're contributing something to many top-level predators.” And some parts of jellyfish hold more calories than others. Fish have been observed eating only the gonads of reproductive-stage jellyfish, suggesting a knack for zeroing in on the most energy-rich tissues. 
Through DNA analyses, researchers are also discovering more about how jellyfish function as refuges in the open ocean. Scientists have long known that small fish, crustaceans and a wide range of other animals latch on to jellyfish to get free rides. But in the past few years, it has become clear that the hitchhikers also dine on their transport. In the deep waters of the South Pacific and Indian oceans, Jeffs has been studying the elusive early life stages of the spiny lobster (Panulirus cygnus). During a 2011 plankton-collecting expedition 350 kilometres off the coast of Western Australia, he and his fellow researchers hauled in a large salp (Thetys vagina), a common barrel-shaped gelatinous animal. The catch also included dozens of lobster larvae, including six that were embedded in the salp itself. DNA analysis of the lobsters' stomach glands revealed that the larvae had been feeding on their hosts5. Jeffs now suspects that these crustaceans, which support a global fishery worth around US$2 billion a year, depend heavily on this relationship. “What makes the larvae so successful in the open ocean,” he says, “is that they can cling to what is basically a big piece of floating meat, like a jellyfish or a big salp, and feed on it for a couple of weeks without exerting any energy at all.” Researchers are starting to recognize that jellyfish are important for other reasons, such as transferring nutrients from one part of the ocean to another. Biological oceanographer Andrew Sweetman at the International Research Institute of Stavanger in Norway has seen this in his studies of 'jelly falls', a term coined to describe what happens when blooms crash and a large number of dead jellies sink rapidly to the sea floor. In November 2010, Sweetman began to periodically lower a camera rig 400 metres to the bottom of Lurefjorden in southwestern Norway to track the fate of this fjord's dense population of jellyfish6. 
Previous observations from elsewhere had suggested that dead jellies pile up and rot, lowering oxygen levels and creating toxic conditions. But Sweetman was surprised to find almost no dead jellies on the sea floor. “It didn't make sense.” He worked out what was happening in 2012, when he returned to the fjord and lowered traps baited with dead jellyfish and rigged with video cameras. The footage from the bottom of the fjord showed scavengers rapidly consuming the jellies. “We had just assumed that nothing was going to be eating them,” he says. Back on land, Sweetman calculated7 that jelly falls increased the amount of nitrogen reaching the bottom by as much as 160%. That energy is going back into the food web instead of getting lost through decay, as researchers had thought. He's since found similar results using remotely operated vehicles at much greater depths in remote parts of the Pacific Ocean. “It's overturning the paradigm that jellyfish are dead ends in the food web,” says Sweetman. Such discoveries have elicited mixed responses. For Richard Brodeur, a NOAA fisheries biologist based in Newport, Oregon, the latest findings do not change the fact that fish and tiny crustaceans such as krill are the main nutrient source for most of the species that are valued by humans. If jellyfish are important, he argues, it is in the impact they can have as competitors and predators when their numbers get out of control. In one of his current studies, he's found that commercially valuable salmon species such as coho (Oncorhynchus kisutch) and Chinook (Oncorhynchus tshawytscha) that are caught where jellyfish are abundant have less food in their stomachs compared with those taken from where jellies are rare, suggesting that jellyfish may have negative impacts on key fish species. 
“If you want fish resources,” he says, “having a lot of jellyfish is probably not going to help.” But other researchers see the latest findings as reason to temper the growing vilification of jellyfish. In a 2013 book chapter8, Houghton and his three co-authors emphasized the positive side of jellies in response to what they saw as “the flippant manner in which wholesale removal of jellyfish from marine systems is discussed”. As scientists gather more data, they hope to get a better sense of exactly what role jellyfish have in various ocean regions. If jellies turn out to be as important as some data now suggest, the population spikes that have made the headlines in the past decade could have much wider repercussions than previously imagined. Back in Puget Sound, Greene is using a camera installed on a net to gather census data on a jellyfish smack. He watches video from the netcam as it slowly descends through a dense mass of creamy white spheres. At a depth of around 10 metres, the jelly curtain finally begins to thin out. Later, Greene makes a crude estimate. “Two point five to three million,” he says, before adding after a brief pause, “that's a lot of jellyfish.” A more careful count will come later. Right now there's plenty of slime to be hosed off the back deck. Once that's taken care of, the ship's engines come to life. The next jellyfish patch awaits.
Chen Y.,International Research Institute of Stavanger |
Oliver D.S.,Uni Research
SPE Reservoir Evaluation and Engineering | Year: 2014
Although ensemble-based data-assimilation methods such as the ensemble Kalman filter (EnKF) and the ensemble smoother have been extensively used for the history matching of synthetic models, the number of applications of ensemble-based methods for history matching of field cases is extremely limited. In most of the published field cases in which the ensemble-based methods were used, the number of wells and the types of data to be matched were relatively small. As a result, it may not be clear to practitioners how a real history-matching study would be accomplished with ensemble-based methods. In this paper, we describe the application of the iterative ensemble smoother to the history matching of the Norne field, a North Sea field, with a moderately large number of wells, a variety of data types, and a relatively long production history. Particular attention is focused on the identification of important variables, the generation of an initial ensemble, the plausibility of results, and the efficiency of minimization. We also discuss the challenges encountered in the use of the ensemble-based method for complex field cases that are not typically encountered in synthetic cases. The Norne field produces from an oil-and-gas reservoir discovered in 1991 offshore Norway. The full-field model consists of four main fault blocks that are in partial communication and many internal faults with uncertain connectivity in each fault block. There have been 22 producers and 9 injectors in the field. Water-alternating-gas injection is used as the depletion strategy. Production rates of oil, gas, and water of 22 producers from 1997 to 2006 and repeat-formation-tester (RFT) pressure from 14 different wells are available for model calibration. The full-field simulation model has 22 layers, each with a dimension of 46×112 cells. The total number of active cells is approximately 45,000.
The Levenberg-Marquardt form of the iterative ensemble smoother (LM-EnRML) is used for history matching. The model parameters that are updated include permeability, porosity, and net-to-gross (NTG) ratio at each gridblock; vertical transmissibility at each gridblock for six layers; transmissibility multipliers of 53 faults; endpoint water and gas relative permeability of four different reservoir zones; depth of water/oil contacts; and transmissibility multipliers between a few main fault blocks. The total number of model parameters is approximately 150,000. Distance-based localization is used to regularize the updates from LM-EnRML. LM-EnRML achieves an improved data match compared with the manually history-matched model after three iterations. Updates from LM-EnRML do not introduce artifacts in the property fields as in the manually history-matched model. The automated workflow is also much less labor-intensive than manual history matching.
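The core update of an iterative ensemble smoother can be illustrated with a minimal sketch: one Levenberg-Marquardt-damped ensemble update of a parameter ensemble against perturbed observations, on a toy linear forward model. This is a generic illustration of the technique only, not the paper's LM-EnRML implementation; distance-based localization and the full iteration/damping schedule are omitted, and all names and numbers are illustrative.

```python
import numpy as np

def lm_es_update(M, D, d_obs, Cd_diag, lam=1.0):
    """One Levenberg-Marquardt-damped ensemble-smoother update.

    M       : (Nm, Ne) ensemble of model parameters (e.g. log-permeability)
    D       : (Nd, Ne) ensemble of simulated data
    d_obs   : (Nd,)    observed data
    Cd_diag : (Nd,)    observed-data error variances
    lam     : damping factor (larger -> smaller, more conservative step)
    """
    Ne = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmd = dM @ dD.T / (Ne - 1)          # parameter/data cross-covariance
    Cdd = dD @ dD.T / (Ne - 1)          # data covariance
    # Levenberg-Marquardt damping inflates the measurement-error term
    K = Cmd @ np.linalg.inv(Cdd + (1.0 + lam) * np.diag(Cd_diag))
    # Perturb the observations per member so posterior spread is preserved
    rng = np.random.default_rng(0)
    D_obs = d_obs[:, None] + rng.normal(0.0, np.sqrt(Cd_diag)[:, None], D.shape)
    return M + K @ (D_obs - D)

# toy linear forward model d = G m standing in for the reservoir simulator
rng = np.random.default_rng(1)
G = rng.normal(size=(5, 3))
m_true = np.array([1.0, -2.0, 0.5])
d_obs = G @ m_true
M0 = rng.normal(size=(3, 200))          # prior ensemble, 200 members
M1 = lm_es_update(M0, G @ M0, d_obs, Cd_diag=np.full(5, 1e-4))
```

With a linear forward model and small data error, a single update pulls the ensemble mean close to the true parameters; for a real simulator, the forward runs are repeated and `lam` is adapted between iterations.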
Breyholtz O.,International Research Institute of Stavanger |
Nygaard G.,International Research Institute of Stavanger |
Siahaan H.,International Research Institute of Stavanger |
Nikolaou M.,University of Houston
SPE Intelligent Energy Conference and Exhibition 2010 | Year: 2010
Managed-pressure drilling (MPD) is emerging as a powerful technology for precise control of wellbore pressure within tight bounds. MPD comes in a number of variants, each taking a different approach to controlling pressure by creating a closed, pressurized mud-circulation system. While MPD offers unprecedented pressure-control capabilities, it creates operational complexity that renders many standard workflows unsuitable for reliable operation. This is because MPD requires that several tools (pumps, chokes, valves, etc.) be coordinated simultaneously, a task at which humans may not be particularly effective. A solution to this problem is the use of enabling automation tools. Such tools would reliably integrate MPD-related activities using a multi-level hierarchy, allowing humans to concentrate on higher-level decisions while leaving the reliable execution of lower-level decisions to automation. In this paper, a multi-level control approach for an MPD operation is presented. The control hierarchy consists of three levels: a feedback-control level consisting of fast independent control loops, a supervisory control loop that coordinates the different control loops in an optimal way, and an optimization level that seeks to meet operational targets while maximizing the economic performance of the operation. The multi-level control approach results in an autodriller system.
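The layered idea can be sketched in a few lines: a fast low-level PI loop (e.g. acting on a choke) tracks a pressure setpoint, while a supervisory level keeps the optimizer's requested target inside the drilling window between pore and fracture pressure. This is an illustration of a generic control hierarchy, not the controllers of the paper; all gains, margins, and pressures below are made-up values.

```python
class PILoop:
    """Fast low-level loop, e.g. choke-opening control of annulus pressure."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.integral = kp, ki, dt, 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        self.integral += e * self.dt
        return self.kp * e + self.ki * self.integral   # actuator command

def supervisory_setpoint(p_target, p_pore, p_frac, margin=5.0):
    """Supervisory level: clip the optimizer's pressure target [bar] to the
    drilling window (pore pressure + margin, fracture pressure - margin)."""
    return min(max(p_target, p_pore + margin), p_frac - margin)

# one coordination step: the optimization level asks for 300 bar,
# the supervisory level validates it, the PI loop acts on the error
sp = supervisory_setpoint(300.0, p_pore=285.0, p_frac=325.0)
choke_cmd = PILoop(kp=0.8, ki=0.1, dt=1.0).step(sp, measurement=295.0)
```

In a real hierarchy the supervisory level would coordinate several such loops (rig pump, backpressure pump, choke) rather than clip a single setpoint, but the division of labor is the same.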
Lohne A.,International Research Institute of Stavanger |
Han L.,International Research Institute of Stavanger |
Van Velzen H.,Nederlandse Aardolie Maatschappij |
Twynam A.,British Petroleum |
And 2 more authors.
SPE Journal | Year: 2010
In this paper, we describe a simulation model for computing the damage imposed on the formation during overbalanced drilling. The main parts modeled are filter-cake buildup under both static and dynamic conditions; fluid loss to the formation; transport of solids and polymers inside the formation, including effects of pore-lining retention and pore-throat plugging; and salinity effects on fines stability and clay swelling. The developed model can handle multicomponent water-based-mud systems at both the core scale (linear model) and the field scale (2D radial model). Among the computed results are fluid loss vs. time, internal damage distribution, and productivity calculations for both the entire well and individual sections. The simulation model works, in part, independently of fluid-loss experiments (e.g., the model does not use fluid-leakoff coefficients but instead computes the filter-cake buildup and its flow resistance from properties ascribed to the individual components in the mud). Some of these properties can be measured directly, such as particle-size distribution of solids, effect of polymers on fluid viscosity, and formation permeability and porosity. Other properties, which must be determined by tuning the results of the numerical model against fluid-loss experiments, are still assumed to be rather case independent, and, once determined, they can be used in simulations at altered conditions as well as with different mud formulations. A detailed description of the filter-cake model is given in this paper. We present simulations of several static and dynamic fluid-loss experiments. The particle-transport model is used to simulate a dilute particle-injection experiment taken from the literature. Finally, we demonstrate the model's applicability at the field scale and present computational results from an actual well drilled in the North Sea.
These results are analyzed, and it is concluded that the potential effects of the mechanistic modeling approach used are (a) increased understanding of damage mechanisms, (b) improved design of experiments used in the selection process, and (c) better predictions at the well scale. This allows for a more-efficient and more-realistic prescreening of drilling fluids than traditional core-plug testing. Copyright © 2010 Society of Petroleum Engineers.
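The simplest building block behind this kind of filter-cake modeling is the classical static-filtration balance: Darcy flow through a cake whose resistance grows in proportion to the filtrate already lost, which yields the familiar square-root-of-time fluid-loss curve. The sketch below is a textbook idealization (incompressible cake, no spurt loss, no formation resistance), not the multicomponent model of the paper, and the parameter values are illustrative.

```python
import math

def static_fluid_loss(dp, mu, alpha, c, t):
    """Cumulative filtrate volume per unit filter area [m^3/m^2] after t
    seconds of static filtration.

    dp    : filtration pressure drop [Pa]
    mu    : filtrate viscosity [Pa*s]
    alpha : specific cake resistance [m/kg]
    c     : mass of solids deposited per unit filtrate volume [kg/m^3]
    """
    # Darcy flux through the growing cake: dV/dt = dp / (mu * alpha * c * V),
    # which integrates (from V = 0) to the classical sqrt(t) law:
    return math.sqrt(2.0 * dp * t / (mu * alpha * c))

# sqrt(t) behaviour: quadrupling the filtration time doubles the fluid loss
v_short = static_fluid_loss(dp=7e5, mu=1e-3, alpha=1e11, c=50.0, t=225.0)
v_long = static_fluid_loss(dp=7e5, mu=1e-3, alpha=1e11, c=50.0, t=900.0)
```

The model described in the abstract goes well beyond this, computing `alpha`-like resistances from the properties of the individual mud components and handling dynamic (cross-flow) conditions, but deviations from the simple sqrt(t) curve are one way such experiments are diagnosed.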
Gravdal J.E.,International Research Institute of Stavanger |
Lorentzen R.J.,International Research Institute of Stavanger |
Fjelde K.K.,International Research Institute of Stavanger |
Vefring E.H.,International Research Institute of Stavanger
SPE Journal | Year: 2010
To manage the annular pressure profile during managed-pressure-drilling (MPD) operations, simulations performed with advanced computer models are needed. To obtain a high degree of accuracy in these simulations, it is crucial that all parameters describing the system be as correct as possible. A new methodology for real-time updating of key parameters in a well-flow model, taking into account real-time measurements and their uncertainty, is presented. Key model parameters are tuned using a recently developed estimation technique based on the traditional Kalman filter. The presented methodology leads to more-accurate prediction of well-flow scenarios. Although the present study is motivated by applications in MPD, the idea of tuning model parameters should be of great value in a wide range of applications. The performance of the filter is studied using both synthetic data and real measurements from a North Sea high-pressure/high-temperature (HP/HT) drilling operation. Benefits of this approach are seen in more-accurate downhole-pressure predictions, which are of major importance for safety and economic reasons during MPD operations. Copyright © 2010 Society of Petroleum Engineers.
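The parameter-tuning idea can be sketched in its simplest possible form: a scalar Kalman filter estimating one uncertain well-flow parameter from noisy pressure measurements. The linear measurement model, the "friction factor" interpretation, and all numbers below are illustrative assumptions, not the paper's estimator, which operates on a full well-flow model.

```python
import numpy as np

def kf_tune_parameter(z_series, h, theta0, P0, R, Q=0.0):
    """Scalar Kalman filter tuning one model parameter theta (e.g. an
    annulus friction factor) from measurements z = h * theta + noise.

    h  : sensitivity of the measured downhole pressure to theta
    R  : measurement-noise variance
    Q  : optional random-walk variance if theta may drift over time
    """
    theta, P = theta0, P0
    for z in z_series:
        P += Q                            # predict: parameter (near-)static
        K = P * h / (h * h * P + R)       # Kalman gain
        theta += K * (z - h * theta)      # correct with the innovation
        P *= (1.0 - K * h)                # posterior variance
    return theta, P

# synthetic "measurements": downhole pressure responds linearly to theta
rng = np.random.default_rng(0)
theta_true, h = 0.02, 5e4
z = h * theta_true + rng.normal(0.0, 10.0, 500)     # noisy pressures [Pa]
theta_hat, P = kf_tune_parameter(z, h=h, theta0=0.05, P0=1.0, R=100.0)
```

Starting from a poor prior guess (0.05), the filter converges toward the value consistent with the data while its variance `P` shrinks; the well-flow application replaces the scalar measurement model with the full hydraulic model and tunes several parameters jointly.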