Netherlands
News Article | December 21, 2016
Site: www.theenergycollective.com

Production from the giant Groningen gas field in the Netherlands has been cut drastically after increasingly heavy earthquakes began to affect the region. Some would like to see Dutch gas production stop altogether, but according to geophysicist (ex-Shell) Jilles van den Beukel, Groningen gas could still play a valuable role in North West European energy supply. In the short term it offers a better compromise from a financial, environmental and energy security point of view than alternatives like coal or Russian gas. Earthquake hazards can be managed with a hand-on-the-tap strategy, as long as we accept that risks cannot be reduced to zero. The proceeds from the gas production should be used for a much needed energy transition.

The giant Groningen gas field was found in 1959 and started producing in 1963. Remaining reserves are about 800 billion cubic metres (bcm), versus about 2800 bcm of initial reserves; this is good for 25 years of Dutch consumption at current levels. The field is exploited by NAM (50% Shell, 50% ExxonMobil) on behalf of the "Maatschap Groningen", a partnership of NAM (60%) and EBN (40%). EBN is wholly owned by the Dutch state. Including the effect of taxes, 85-90% of the profits from the field go to the Dutch state. Income from gas production (about 80% of which comes from Groningen) accounted for about 10% of state income over the 1975-2015 period.

Over the years earthquakes related to gas production have increased in number and magnitude. The shallow depth of these earthquakes implies that earthquake intensity and damage to houses are relatively large compared to the seismic magnitude. A turning point was the 2012 Huizinge 3.6 magnitude earthquake, which damaged thousands of houses. This was followed by the realization that the estimate for the maximum potential magnitude of a Groningen earthquake assumed so far, 3.9, was likely to be too low (and subject to considerable uncertainty), and that seismicity could result not only in material damage but potentially also in loss of life. As a result the Dutch government has put in place a production cap for the field (as well as separate caps for the areas most affected by seismicity), leading to a significant decrease in production and government income. By October 1, 2016, the production cap had been lowered to 24 bcm per year (in 2013, the last year prior to the production caps, yearly production was still as high as 54 bcm). The double hit from production caps and declining gas prices has resulted in a dramatic decrease of government income from Dutch gas production, from about 15 billion euro (2013) to just over 2 billion euro (2016).

The first registered earthquake in the Northern Netherlands related to gas production took place near Assen in 1986. The first registered earthquake from the Groningen field occurred in 1991. In 1993 a joint study (in Dutch) by NAM, KNMI (the Dutch meteorological institute, which also has a section monitoring seismicity in the Netherlands) and SODM (the government entity supervising the oil and gas industry) confirmed the relation between gas production and earthquakes. Prior to 1986 no earthquakes whatsoever had been recorded in the tectonically quiet Northern Netherlands. What has not helped the public image of the NAM in the long term is that from 1986 to 1993 they denied a relation between gas production and earthquakes.
Instead of admitting that when earthquakes start to appear at producing gas fields (in an area without any recorded seismicity so far) there is very likely to be a relation, they went into denial mode. This denial mode continued for many years.

The 1993 joint report contained a section in which an estimate was made of the maximum potential magnitude of a Groningen earthquake. This method (using a Gutenberg-Richter relation between earthquake magnitude and frequency) played a key role in KNMI reports with estimates of maximum magnitudes for years to come. The problem is that deriving a maximum potential magnitude from historically observed seismicity, using the Gutenberg-Richter relation, is something that can only be done for stationary situations (as is usually the case for natural earthquakes). A depleting gas field, however, is not a stationary situation at all. If anything, the method gave a lower bound for the maximum magnitude of a future Groningen earthquake. For Groningen, as depletion progressed, the KNMI gradually revised its estimates from an initial 3.3 (1993) to an eventual 3.9 (2006), after which it became increasingly clear that this method was not defendable.

For a long time both NAM and SODM were not proactive in starting further research (for instance to determine uncertainty estimates on KNMI's estimates or to invite alternative views from other institutes). The focus of the NAM was very much on subsidence rather than on seismicity. An in-depth review is given in a report of the Dutch Safety Board on Groningen earthquake risks. There was a genuine expectation in NAM that earthquakes could only result in limited material damage. This was not backed up by solid research, however, and in hindsight uncertainties were severely underestimated.

The Groningen gas is produced from the Rotliegend sandstone reservoir (the gas sits in the pores in between the sandstone grains). As gas is being produced, the pressure in the gas decreases (the initial pressure was 347 bar; by 2016 this had been reduced to about 95 bar) and, due to the weight of the overburden (there are about 3 km of rock above the gas bearing sandstone), the sandstone compacts. Compaction of the gas bearing reservoir results in subsidence at the surface. Compaction on its own does not result in seismicity, however. The Groningen earthquakes are due to local variations in compaction.

The Groningen field is affected by numerous faults, and these faults can lead to strong local variations in compaction. If material on one side of a fault compacts more than on the other side, shear stresses on the fault build up and at some stage the strength of material at the fault (a zone of weakness) is overcome, after which rupture takes place. There are a number of ways in which faults can lead to differential compaction. The most straightforward one is that faults displace the gas bearing reservoir vertically; as a result, compaction at a given depth varies laterally. Secondly, faults can be hindrances ("baffles") to gas flow. Gas production can lead to pressure differences across faults, which can result in differential compaction and seismicity. Until recently NAM's production philosophy has been to minimize pressure differences across the field. As a result it seems unlikely that pressure differences between different field compartments have made a large contribution to differential compaction and Groningen seismicity.
Finally, faults are often surrounded by a zone (a "damage zone") where reservoir quality is impaired and porosities are substantially reduced. These damage zones will compact substantially less than normal reservoir (a reduction of the compaction by a factor of 2 or more is possible). Horizontal wells in other Dutch fields that have encountered such damage zones have found them to be about 50-100 m wide. Damage zones may be most pronounced for strike slip zones (zones of intense strike slip faulting, i.e. faulting where material has moved in a horizontal rather than a vertical direction). For the Groningen field these are the Loppersum and Eemskanaal areas in particular.

Seismicity correlates with compaction and fault intensity (i.e. a high number of faults). We find that most earthquakes occur in areas where compaction is relatively large. In addition, within the areas of high compaction, seismicity is most pronounced in areas with a high intensity of faulting. The intersection of numerous NW-SE oriented faults (often with a strike slip component) with the high compaction area near Loppersum experiences the highest seismic intensity over the field. An area with a similarly high amount of compaction and a low fault intensity further towards the north experiences a much lower level of seismicity.

Seismicity is not limited to faults with large vertical displacements. In fact there is no clear correlation between fault vertical displacement and seismicity at all. This indicates that fault damage zones may play an important role in Groningen seismicity. The Groningen field is intersected by zones of anastomosing faults, indicative of strike slip faulting (with horizontal rather than vertical displacement), in particular in the Loppersum and Eemskanaal areas. Elongated damage zones in these areas (such as the one encountered by the Eemskanaal-2 well) can account for the observation that seismicity is distributed over a larger area (rather than being concentrated on the few large vertical displacement boundary faults) and is associated with faults that often do not have a large vertical displacement.

Time dependent processes such as creep (the gradual deformation of material) play an important role. There is a clear correlation between production and seismicity on a seasonal basis, with a time lag of about 3-6 months in between (production, until recently, peaking in winter and seismicity in the subsequent spring and summer). Already during the 1980s it was realised that compaction and subsidence were much more complex than expected initially. A key development was Hans de Waal's work at Shell's research lab (and his subsequent thesis at Delft University) on rate dependent compaction of sandstone reservoirs. To date, this work is still the basis for most of the Groningen subsidence models.

Seismicity has increased over time with total reservoir compaction. A threshold exists of about 10-15 cm of reservoir compaction before the onset of seismicity. Beyond this threshold earthquakes have (so far) increased in both number and magnitude with reservoir compaction. The threshold reflects the amount of compaction that is apparently needed to bring shear stresses on faults (or at least on some locations, on some faults) to the critical level required for rupture. Initially, stresses on faults at the gas bearing Rotliegend level are not anywhere near the level required for rupture. This is in accordance with the complete absence of natural earthquakes in the Northern Netherlands.
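The compaction figures quoted above can be related to the pressure depletion mentioned earlier with a simple uniaxial estimate, Δh ≈ Cm · h · Δp. The sketch below uses the pressure figures from the text, but the compaction coefficient and reservoir thickness are assumed round numbers (not NAM's calibrated values), so it only illustrates the order of magnitude.

```python
# Order-of-magnitude estimate of reservoir compaction from pressure depletion.
# Uniaxial compaction: dh ~ Cm * h * dp. Cm and h are assumed illustrative
# values, not the calibrated Groningen parameters.
Cm = 0.7e-5          # uniaxial compaction coefficient [1/bar] (assumed)
h = 150.0            # reservoir thickness [m] (assumed)
p_initial = 347.0    # initial reservoir pressure [bar] (from the text)
p_2016 = 95.0        # approximate 2016 pressure [bar] (from the text)

dp = p_initial - p_2016
dh = Cm * h * dp
print(f"pressure drop: {dp:.0f} bar -> compaction ~ {dh * 100:.0f} cm")
```

With these assumed parameters the estimate lands at a few tens of centimetres, the same order as the 10-35 cm compaction values discussed in the text.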
Over time the partitioning fraction (the fraction of compaction-related energy that is released seismically) has increased with increasing compaction. How this trend will continue in future is simply unknown (see below); as yet the areas over the Groningen field that have experienced more than 30-35 cm of compaction are very small. The total amount of reservoir compaction due to gas production is expected to range up to about 50 cm in future.

Our understanding of Groningen earthquakes, and in particular our ability to predict future earthquake intensity, remains limited. Our predictions of future seismicity are to a disappointing extent still based on statistics and extrapolation of observed trends (rather than on a complete understanding and geomechanical modelling of the actual physical processes, something that is simply out of reach).

Will the increase of seismicity with increasing compaction continue? So far the partitioning fraction has increased with increasing compaction. We simply do not know whether this will continue. The absence of earthquakes during the first 20 years of production turned out to be no guarantee that earthquakes would remain absent over the entire field life. In the same way, the increasing intensity of seismicity (with increasing compaction) as observed over the 1991-2012 period is no guarantee of its continued increase in future. Predictions of future earthquake intensity depend to a large extent on the assumptions (implicit or explicit) made for the future partitioning fraction in these models. The models in the technical addendum to the NAM production plan tend to predict relatively high levels of future seismicity as they assume that the partitioning fraction continues to increase at the same rate.

What is the effect of the recent production measures? These measures have resulted in a different way of producing the Groningen field that contains a number of elements: a field-wide production cap, specific production caps for the Loppersum and Eemskanaal clusters, and a more constant production rate over the year. It is not straightforward to separate the effects of these three elements on seismicity. For the overall production cap the question is whether such a production rate decrease will merely result in the same amount of seismic energy being released over a longer period, or will also result in a reduction of the total amount of seismic energy to be released. For an area production cap such as the one for the Loppersum area the question is whether this only buys the area a temporary reprieve (with seismicity resuming once the pressure decline resumes, something that will happen within about a year) or whether a more gradual pressure decline in the Loppersum area will result in a long term reduction of seismicity as well. A more gradual pressure drop (due to a smaller seasonal variation in production) may also result in a reduction of seismicity. For all three of these questions, TNO (the major Dutch institute for applied scientific research, advising the ministry of economic affairs on the technical aspects of the Groningen field) is more optimistic about the effect of these production measures than NAM. As yet the TNO view is an expectation, which may or may not be confirmed by future observations. It is too early to draw firm conclusions.
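How sensitive such predictions are to the assumed future partitioning fraction can be shown with a toy calculation. All numbers below are invented for illustration; this is not NAM's or TNO's model, only a sketch of why the extrapolation assumption dominates the outcome.

```python
import numpy as np

# Toy illustration: the same future compaction path gives very different
# seismic energy depending on how the partitioning fraction is extrapolated.
# All numbers are invented for illustration only.
compaction = np.linspace(0.20, 0.50, 4)          # average reservoir compaction [m]
compaction_energy = 1.0e13 * compaction          # compaction-related energy [J] (toy scaling)

# Two assumptions about the partitioning fraction (fraction released seismically):
frac_keeps_rising = 1e-4 * (compaction / 0.20) ** 2   # keeps increasing with compaction
frac_levels_off = np.full_like(compaction, 1e-4)      # levels off at the current value

for name, frac in [("keeps rising", frac_keeps_rising), ("levels off", frac_levels_off)]:
    seismic_energy = frac * compaction_energy
    print(name, [f"{e:.1e} J" for e in seismic_energy])
```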
My personal opinion is that the marked decrease in seismicity in 2016 gives us some hope that TNO is correct in its more optimistic expectations. Regardless of the eventual outcome, I would argue that these uncertainties can be managed. This should enable us to manage production with a hand-on-the-tap strategy (similar to what has been put in place for the gas production in the Waddenzee area) in a responsible way, and to make choices on the basis of a cost-benefit analysis that involves a realistic estimate of the risks and of the alternatives to Groningen gas.

The risk that people in the Groningen area run due to earthquakes is comparable with some other risks that people in the Netherlands run. Flooding is a risk that many people in the Netherlands are subjected to, and yet we make a conscious decision not to raise our dikes to a level that gives absolute security. The reason is simple: cost. People living close to Schiphol airport near Amsterdam are subjected to certain risks, yet we do not close down Schiphol, given its importance to the Dutch economy. Currently, the risk of a fatality in Groningen as a result of earthquakes is estimated to be very small (<< 1 per year). Should risk levels stay roughly at this level, the number of fatalities over the coming 30 years would be expected to be of the order of 1. A large uncertainty exists for such an estimate, but at least we have a calibration point: the last 5 years did not result in any injuries or fatalities. In contrast, the number of traffic deaths in Groningen over the coming 30 years is expected to be of the order of 1000.

Decisions on Groningen production caps should be made on the basis of a cost-benefit analysis, as is common practice for other (industrial) activities that involve risk. The cost for the Dutch state of billions of euros (even if the gas not produced now would eventually be produced 20 or 30 years from now) is becoming disproportionate to the risk to human life that these earthquakes pose (for comparison: a single investment of €120 million in Groningen provincial roads would be expected to save approximately 5 lives on a yearly basis). The Dutch Safety Board rightly condemned the fact that safety did not play a role in decision making on Groningen production until a few years ago. But now we run the risk of going to the other end of the spectrum: that safety with respect to Groningen earthquakes needs to be achieved at all cost. To govern is to make choices, and this implies evaluating alternatives.

Alternatives to Groningen gas are more costly and less environmentally friendly. Coal is the most polluting fossil fuel by far. Russian gas, apart from security and political issues, implies both additional CO2 emissions (roughly 12% of the gas is needed to transport gas from Russia to the Netherlands due to the low efficiency of the Russian gas transport system) and additional methane emissions (methane losses related to transport over large distances in Russia are expected to be significantly higher than methane losses related to transport over short distances in the Netherlands). Effective emissions (CO2 equivalent) are estimated to be about 25% higher compared to Groningen gas. From an environmental point of view a drastic reduction in gas consumption is by far the preferred option, but it is not going to happen in the short term. Groningen gas production has already been cut by approximately 50%; the energy transition will take decades.
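To put the cost-benefit figures quoted above side by side, a back-of-the-envelope comparison can be made. The road-investment and fatality figures are from the text; the 30-year horizon and the arithmetic itself are only an illustrative framing.

```python
# Back-of-the-envelope comparison of expected lives saved, using figures from the text.
road_investment = 120e6        # euros, one-off investment in provincial roads (from the text)
road_lives_per_year = 5        # expected lives saved per year (from the text)
horizon_years = 30             # assumed comparison horizon

road_lives_saved = road_lives_per_year * horizon_years
earthquake_fatalities = 1      # order-of-magnitude expectation over 30 years (from the text)

print(f"roads: ~{road_lives_saved} lives over {horizon_years} yr for EUR {road_investment / 1e6:.0f} mln")
print(f"implied cost per life saved: EUR {road_investment / road_lives_saved / 1e3:.0f} thousand")
print(f"earthquakes: expected fatalities over the same period ~ {earthquake_fatalities}")
```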
In the short term, not overly reducing Groningen production has clear advantages from a financial, security and environmental point of view. In the long term, the proceeds can be used to finance a much needed energy transition.

Limits should be put on the acceptance or evaluation of damage claims from areas where the risk is virtually zero. Over the past few years the number of damage claims has rapidly increased to over 500 claims per week on average. This increase is not related to an increase of seismicity at all. From 2012 to 2015 the total seismic energy released (per year) decreased by about a factor of 5, while the number of claims (per year) increased by about a factor of 5. Additional information can be found in the 2016 Technical Addendum to the NAM Groningen Winningsplan. Houses close to the epicenter and subjected to higher PGAs are expected to have a higher chance of being damaged (PGA stands for peak ground acceleration; together with the strength of houses it determines housing damage). The largest earthquake in 2012 was the 3.6 magnitude Huizinge earthquake; the largest 2015 earthquake was a much smaller 3.1 magnitude event near Hellum. For the Huizinge earthquake there is indeed a strong correlation between PGA (which in turn is strongly related to distance from the epicenter) and the percentage of buildings with damage claims. For the Hellum earthquake, on the other hand, no such correlation is observed and the vast majority of claims come from areas at a large distance from the epicenter with minimal PGA. The large increase in claims in recent years is a reflection of the intense publicity and the calls from NGOs and politicians to put in claims, rather than of an increase in actual earthquake related damage. This situation is now becoming difficult to manage. The cumulative number of claims, over 60,000, implies a huge effort at a significant cost to society (the cost of evaluating claims by now outweighs the cost of strengthening houses and compensating damages (in Dutch)). Efforts that now go to dealing with numerous claims from areas with minimal PGAs should go to dealing with the cases of real and substantial damage (or risk) in a more efficient and generous way.

A situation in which all the benefits are for the country and all the downsides are for the local gas producing region is not fair. I think that the current unrest in Groningen is not just related to earthquakes but also to a long standing feeling among people that they have been badly treated by the central government. And here they have a point. When investments from FES, the Dutch infrastructure fund which received 40% of Groningen gas income over the 1995-2009 period, were analyzed, it was found that almost 90% went to the "Randstad" area in the west of the country (Amsterdam - The Hague - Rotterdam); only about 1% of the investments went to the three Northern provinces combined. In general there are good reasons why the proceeds from mineral wealth should go to a country as a whole. But for this specific case, where a single field has accounted for about €300 billion in income and where the downside to the local population is long standing and substantial, this just does not seem fair. It seems justified to use a part of the Groningen revenue to establish a fund that solely supports the Groningen local economy and infrastructure.

Jilles van den Beukel worked as a geologist, geophysicist and project manager for Shell in many parts of the world.
A longer version of this paper (containing a number of figures and maps as well as additional background information) was published on his blog JillesonEnergy.


News Article | March 2, 2017
Site: www.realclimate.org

Gavin has already discussed John Christy's misleading graph earlier in 2016; however, since the end of 2016 there has been a surge in interest in this graph in Norway amongst people who try to diminish the role of anthropogenic global warming. I think this graph warrants some extra comments in addition to Gavin's points because it is flawed on more counts beyond those that he has already discussed. In fact, those using this graph to judge climate models reveal an elementary lack of understanding of climate data.

Different types of numbers

The upper left panel in Fig. 1 shows that Christy compared the average of 102 climate model simulations with temperature from satellite measurements (average of three different analyses) and weather balloons (average of two analyses). This is a flawed comparison because it compares a statistical parameter with a variable. A parameter, such as the mean (also referred to as the 'average') or the standard deviation, describes the statistical distribution of a given variable. However, such parameters are not equivalent to the variable they describe. The comparison between the average of model runs and observations is surprising, because it is clearly incorrect from elementary statistics (this is a similar statistical confusion to the flaw found in Douglass et al. (2007)).

I can illustrate this with an example: Fig. 2 shows 108 different model simulations of the global mean temperature (from the CMIP5 experiment). The thick black line shows the average of all the model runs (the 'multi-model ensemble'). None of the individual runs (coloured thin curves) match the mean (thick black curve), and if I were to use the same logic as Christy, I could incorrectly claim that the average is inconsistent with the individual runs because of their different characters. But the average is based on all these individual runs. Hence, this type of logic is obviously flawed. To be fair, the observations shown in Christy's graph were also based on averages, although of a small set of analyses. This does not improve the case because all the satellite data are based on the same measurements and only differ in terms of synthesis and choices made in the analyses (they are highly correlated, as we will see later on). By the way, one of the curves shown in Fig. 2 is observations. Can you see which? Eyeballing such curves, however, is not the proper way to compare different data, and there are numerous statistical tests to do so properly.

Different physical aspects

Christy compared temperatures estimated for the troposphere (satellites and balloons) with temperatures computed by global climate models. This is a fact because the data portal where he obtained the model results was the KNMI ClimateExplorer. ClimateExplorer does not hold upper air temperature stored as 3D fields (I checked this with Geert Jan van Oldenborgh). (Correction: 'taz' is zonally integrated temperature as a function of height, but it does not take into account the differences between land and sea. Nevertheless, this variable still does not really correspond closely with those measured from satellites.) A proper comparison between the satellite temperature and the model results needs to estimate a weighted average of the temperature over the troposphere and lower stratosphere with an appropriate altitude-dependent weighting. The difference between the near-surface and tropospheric temperature matters, as the stratosphere has cooled in contrast to the warming surface.
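A minimal sketch of such an altitude-dependent weighting: to mimic a satellite deep-layer (TMT-like) quantity from model output, the model temperature profile is averaged over pressure levels with a weighting function. The profile and weights below are made-up illustrative numbers, not the actual MSU/TMT weighting function.

```python
import numpy as np

# Illustrative vertical weighting of a model temperature profile to mimic a
# satellite deep-layer (TMT-like) temperature. Profile and weights are made up.
p_levels = np.array([1000, 850, 700, 500, 300, 200, 100, 50])   # hPa
t_profile = np.array([288, 280, 272, 255, 230, 215, 205, 210])  # K (illustrative)

# Illustrative weights: peak in the mid-troposphere, small stratospheric tail
weights = np.array([0.05, 0.10, 0.15, 0.25, 0.20, 0.12, 0.08, 0.05])
weights = weights / weights.sum()

tmt_like = np.sum(weights * t_profile)
print(f"weighted deep-layer temperature: {tmt_like:.1f} K")
print(f"near-surface temperature:        {t_profile[0]:.1f} K")
```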
Temperature from satellites are also model results

It is fair to compare the satellite record with model results to explore uncertainties, but the satellite data are not the ground truth and cannot be used to invalidate the models. The microwave sounding unit (MSU), the instrument used to measure the temperature, measures light in certain wavelength bands emitted by oxygen molecules. An algorithm is then used to compute the air temperature consistent with the measured irradiance. This algorithm is a model based on the same physics as the models which predict that higher concentrations of CO2 result in higher surface temperatures. I wonder if Christy sees the irony in his use of satellite temperatures to dispute the effect of CO2 on the global mean temperature. It is nevertheless reassuring to see a good match between the balloon and satellite data, which suggests that the representation of the physics in both the satellite retrieval algorithm and the climate models is more or less correct.

How to compare the models with observations

The two graphs (courtesy of Gavin) below show comparisons between tropospheric mean temperatures (TMT) that are comparable to the satellite data and include a confidence interval for the ensemble rather than just the ensemble mean. This type of comparison is more consistent with standard statistical tests such as Student's t-test. The graphs also show several satellite-based analyses: Remote Sensing Systems (RSS; different versions), the University of Alabama in Huntsville (UAH; different versions), and NOAA (STAR). All these curves are so similar (highly correlated) that taking the average doesn't make much difference. According to Fig. 3, the tropospheric temperature simulated by the global climate models (from the CMIP5 experiment) increased slightly faster than the temperatures derived from the satellite measurements between 2000 and 2015, but they were not very different. The RSS temperatures gave the closest match with the global climate models. Fig. 4 shows a trend analysis for the 1979-2016 interval where the satellite-based temperature trends are shown with appropriate error bars. The trends from the satellite analyses and the model results overlap if the confidence limits are taken into consideration.

The story behind the upper tropospheric warming

The biggest weight in the tropospheric temperature trends comes from the tropics, because they account for the largest volume (half of the Earth's surface area lies between 30°S and 30°N due to its geometric shape), and the trends are therefore sensitive to conditions around the equator. This is also where large-scale convection takes place that produces bands of high clouds (the Inter-Tropical Convergence Zone, ITCZ). Cloud formation through convection and condensation is associated with release of latent heat and influences the temperatures (e.g. Vecchi et al., 2006). It is part of the hydrological cycle, and a slow change in the atmospheric overturning, moisture and circulation patterns is expected to have a bearing on the global tropospheric temperature trend estimates. This means that the picture is complex when it comes to global tropospheric temperature trends, because many physical processes that take place on a wide range of spatial scales have an influence.

Hard evidence of misrepresentation

Despite the complicated nature of tropospheric temperatures, it is an indisputable fact that Christy's graph presents numbers with different meanings as if they were equivalent.
It is really surprising to see such a basic misrepresentation in a testimony at the U.S. House Committee on Science, Space & Technology. One of the most elementary parts of science is to know what the numbers really represent and how they should be interpreted.
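The kind of comparison described above for Fig. 4 can be sketched in a few lines: fit a linear trend to each series, attach a confidence interval, and ask whether the intervals overlap rather than eyeballing the curves. The series below are synthetic random numbers standing in for real model and satellite data; the code only illustrates the statistical procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1979, 2017)

# Synthetic stand-ins for a model ensemble member and a satellite analysis
model_series = 0.020 * (years - 1979) + rng.normal(0, 0.12, years.size)
sat_series = 0.016 * (years - 1979) + rng.normal(0, 0.12, years.size)

def trend_with_ci(y, x, conf=0.95):
    """Least-squares trend and its confidence half-width, per decade."""
    res = stats.linregress(x, y)
    tval = stats.t.ppf(0.5 + conf / 2, x.size - 2)
    return 10 * res.slope, 10 * tval * res.stderr

for name, y in [("model", model_series), ("satellite", sat_series)]:
    slope, half_width = trend_with_ci(y, years)
    print(f"{name}: {slope:.3f} +/- {half_width:.3f} K/decade")
```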


Van Der Schrier G.,KNMI | Van Oldenborgh G.J.,KNMI
Climate of the Past | Year: 2011

The Central Netherlands Temperature (CNT) is a monthly daily mean temperature series constructed from homogenized time series from the centre of the Netherlands. The purpose of this series is to offer a homogeneous time series representative of a larger area in order to study large-scale temperature changes. It will also facilitate a comparison with climate models, which resolve similar scales. From 1906 onwards, temperature measurements in the Netherlands have been sufficiently standardized to construct a high-quality series. Long time series have been constructed by merging nearby stations and using the overlap to calibrate the differences. These long time series and a few time series of only a few decades in length have been subjected to a homogeneity analysis in which significant breaks and artificial trends have been corrected. Many of the detected breaks correspond to changes in the observations that are documented in the station metadata. This version of the CNT, to which we attach the version number 1.1, is constructed as the unweighted average of four stations (De Bilt, Winterswijk/Hupsel, Oudenbosch/Gilze-Rijen and Gemert/Volkel) with the stations Eindhoven and Deelen added from 1951 and 1958 onwards, respectively. The global gridded datasets used for detecting and attributing climate change are based on raw observational data. Although some homogeneity adjustments are made, these are not based on knowledge of local circumstances but only on statistical evidence. Despite this handicap, and the fact that these datasets use grid boxes that are far larger than the area associated with that of the Central Netherlands Temperature, the temperature interpolated to the CNT region shows a warming trend that is broadly consistent with the CNT trend in all of these datasets. The actual trends differ from the CNT trend by up to 30%, which highlights the need to base future global gridded temperature datasets on homogenized time series. © Author(s) 2011.
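A minimal sketch of the construction described in the abstract: an unweighted average of homogenized station series, with stations that only enter from a given year excluded before that year. The station names follow the abstract; the temperature values are random placeholders, so only the averaging logic is illustrated.

```python
import numpy as np
import pandas as pd

# Sketch of an unweighted multi-station average with stations entering later.
# Values are placeholders; only the averaging logic matters here.
years = pd.Index(range(1949, 1961), name="year")
data = pd.DataFrame({
    "De Bilt": np.random.normal(9.5, 0.5, len(years)),
    "Winterswijk/Hupsel": np.random.normal(9.2, 0.5, len(years)),
    "Oudenbosch/Gilze-Rijen": np.random.normal(9.6, 0.5, len(years)),
    "Gemert/Volkel": np.random.normal(9.4, 0.5, len(years)),
    "Eindhoven": np.random.normal(9.5, 0.5, len(years)),
    "Deelen": np.random.normal(9.1, 0.5, len(years)),
}, index=years)

# Eindhoven only contributes from 1951, Deelen from 1958 (as in the abstract)
data.loc[:1950, "Eindhoven"] = np.nan
data.loc[:1957, "Deelen"] = np.nan

cnt = data.mean(axis=1, skipna=True)   # unweighted average of available stations
print(cnt.round(2))
```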


Katsman C.A.,KNMI | Van Oldenborgh G.J.,KNMI
Geophysical Research Letters | Year: 2011

Over the period 2003-2010, the upper ocean has not gained any heat, despite the general expectation that the ocean will absorb most of the Earth's current radiative imbalance. Answering to what extent this heat was transferred to other components of the climate system and by what process(-es) gets to the essence of understanding climate change. Direct heat flux observations are too inaccurate to assess such exchanges. In this study we therefore trace these heat budget variations by analyzing an ensemble of climate model simulations. The analysis reveals that an 8-yr period without upper ocean warming is not exceptional. It is explained by increased radiation to space (45%), largely as a result of El Niño variability on decadal timescales, and by increased ocean warming at larger depths (35%), partly due to a decrease in the strength of the Atlantic meridional overturning circulation. Recently-observed changes in these two large-scale modes of climate variability point to an upcoming resumption of the upward trend in upper ocean heat content. Copyright 2011 by the American Geophysical Union.


Lin J.-T.,Harvard University | McElroy M.B.,Harvard University | Boersma K.F.,KNMI
Atmospheric Chemistry and Physics | Year: 2010

A new methodology is developed to constrain Chinese anthropogenic emissions of nitrogen oxides (NOx) from four major sectors (industry, power plants, mobile and residential) in July 2008. It combines tropospheric NO2 column retrievals from GOME-2 and OMI, taking advantage of their different overpass times over China (∼10:00 a.m. versus ∼02:00 p.m. local time) and consistent retrieval algorithms. The approach is based on the difference of NO2 columns at the overpass times of the two instruments; it thus is less susceptible to the likely systematic errors embedded in individual retrievals that are consistent with each other. Also, it explicitly accounts for diurnal variations and uncertainties of NOx emissions for individual sources. Our best top-down estimate suggests a national budget of 6.8 TgN/yr (5.5 TgN/yr for East China), close to the a priori bottom-up emission estimate from the INTEX-B mission for the year 2006. The top-down emissions are lower than the a priori near Beijing, in the northeastern provinces and along the east coast; yet they exceed the a priori over many inland regions. Systematic errors in satellite retrievals are estimated to lead to underestimation of top-down emissions by at most 17% (most likely 10%). Effects of other factors on the top-down estimate are typically less than 15% each, including lightning, soil emissions, mixing in the planetary boundary layer, anthropogenic emissions of carbon monoxide and volatile organic compounds, the magnitude of a priori emissions, assumptions on emission diurnal variations, and uncertainties in the four sectors. The a posteriori emission budget is 5.7 TgN/yr for East China.
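The paper's inversion is considerably more involved than this, but the general idea of constraining emissions with observed columns can be illustrated with a simple mass-balance scaling: adjust the a priori emissions by the ratio of observed to modelled columns. The numbers and the scaling below are a generic textbook-style illustration, not the GOME-2/OMI column-difference method of the paper.

```python
# Generic mass-balance illustration of a top-down emission adjustment:
# scale a priori emissions by the ratio of observed to modelled NO2 columns.
# Numbers are invented; this is not the method used in the paper.
e_apriori = 6.6          # a priori emissions [TgN/yr] (invented)
col_observed = 5.2e15    # satellite NO2 column [molec/cm2] (invented)
col_modelled = 6.0e15    # modelled NO2 column with a priori emissions [molec/cm2] (invented)

e_topdown = e_apriori * col_observed / col_modelled
print(f"top-down estimate ~ {e_topdown:.1f} TgN/yr")
```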


De Haan S.,KNMI
Atmospheric Measurement Techniques | Year: 2016

Information on the accuracy of meteorological observations is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is by comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from 2 m temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple-collocation method
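As general background on the triple-collocation idea mentioned above (the standard formulation, not necessarily the exact one used in this paper): given three collocated, bias-corrected estimates of the same quantity with mutually independent errors, the error variance of each can be estimated from cross-differences, e.g. σ_a² ≈ ⟨(a − b)(a − c)⟩. A sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Synthetic "truth" and three independent, unbiased estimates of it
truth = rng.normal(10.0, 2.0, n)
a = truth + rng.normal(0, 0.5, n)   # e.g. an observation
b = truth + rng.normal(0, 0.8, n)   # e.g. the model equivalent
c = truth + rng.normal(0, 0.3, n)   # e.g. a second observation type

def tc_error_variances(a, b, c):
    """Classic triple-collocation error variances (anomalies, independent errors)."""
    a, b, c = (x - x.mean() for x in (a, b, c))
    var_a = np.mean((a - b) * (a - c))
    var_b = np.mean((b - a) * (b - c))
    var_c = np.mean((c - a) * (c - b))
    return var_a, var_b, var_c

for name, var in zip("abc", tc_error_variances(a, b, c)):
    print(f"sigma_{name} ~ {np.sqrt(var):.2f}")
```

With the synthetic error standard deviations of 0.5, 0.8 and 0.3 above, the estimates recover those values without ever using the "truth" directly, which is exactly why the method is attractive when the truth is unknown.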


In this paper the beneficial impacts of high-resolution (in space and time) wind and temperature observations from aircraft on very short-range numerical weather forecasting are presented. The observations are retrieved using the tracking and ranging radar from the air traffic control facility at Schiphol Airport, Amsterdam, the Netherlands. This enhanced surveillance radar tracks all aircraft in sight every 4 s, generating one million wind and temperature observations per day in a radius of 270 km around the radar. Nowcasting applications will benefit from improved three-dimensional wind fields. When these observations are assimilated into a numerical model with an hourly update cycle, the short-range three-dimensional wind field forecasts match the observations better than those from an operational forecast cycle, which is updated every 3 h. The positive impact on wind in the first hours of the forecast gradually turns into a neutral impact, when compared to other wind and temperature observations. The timeliness of the forecasts combined with the high resolution of the observations are the main reasons for the observed nowcasting benefits. All in all, the assimilation of high-resolution wind (and temperature) observations is found to be beneficial for nowcasting and short-range forecasts up to 2-3 h. © 2012 American Meteorological Society.


Wind climate in Northwest Europe is subject to long-term persistence (LTP), also called the Hurst phenomenon. Ignorance of LTP causes underestimation of climatic variability. The quantification of multi-year variability is important for the assessment of the uncertainty of future multi-year wind yields. Estimating LTP requires long homogeneous time series. Such series of wind observations are rare, but annual mean geostrophic wind speed (U) can be used instead. This study demonstrates a method to estimate the 10-year aggregated mean U for the near and the far future and its uncertainty in Northwest Europe. Time series of U were derived from daily sea level pressure from the European Climate Assessment Dataset. Minor inhomogeneities cannot be ruled out, but were shown to hardly affect the estimated Hurst exponent (Ĥ). A maximum likelihood method was adjusted to remove the biases in Ĥ. The geostrophic wind speed over the North Sea, the British Isles and along the Scandinavian coast is characterised by statistically significant Ĥ between 0.58 and 0.74 (H = 0.5 implies no LTP). The additional effect of the parameter uncertainty is estimated in a Bayesian way and is highly dependent on the record length. The assessment of structural changes in future wind fields requires general circulation models. An ensemble of seventeen simulations (ESSENCE) with one single climate model (ECHAM5/MPI-OM) was used to evaluate structural trends and LTP. The estimated Ĥ values in the ESSENCE simulations are generally close to 0.5 and not significant. Significant trends in U are found over large parts of the investigated domain, but the trends are small compared to the multi-year variability. Large decreasing trends are found in the vicinity of Iceland and increasing trends near the Greenland coast. This is likely related to the sea ice retreat within the ESSENCE simulations and the associated change in surface temperature gradients. © 2011 The Author(s).
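For readers unfamiliar with the Hurst exponent: a simple estimator (the aggregated-variance method, given here as general background rather than the bias-corrected maximum-likelihood approach of the study) uses the fact that the variance of m-year block means scales as m^(2H−2). A sketch on synthetic white noise, for which H ≈ 0.5:

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes=(1, 2, 4, 8, 16)):
    """Estimate the Hurst exponent from the scaling of block-mean variances:
    Var(mean over m samples) ~ m**(2H - 2)."""
    variances = []
    for m in block_sizes:
        n_blocks = len(x) // m
        blocks = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        variances.append(blocks.var())
    slope, _ = np.polyfit(np.log(block_sizes), np.log(variances), 1)
    return 1 + slope / 2

rng = np.random.default_rng(2)
white_noise = rng.normal(size=4096)   # series without long-term persistence
print(f"H (white noise) ~ {hurst_aggregated_variance(white_noise):.2f}")   # expect ~0.5
```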


Coastal precipitation in the Netherlands for the present-day and future climate is studied using observations and simulations with the KNMI regional climate model RACMO2. Overall a good correspondence between RACMO2 and the observations is found for the present-day climate. Yet, the model generally simulates precipitation gradients near the coastline that are too sharp, and the maximum in precipitation is shifted towards the sea compared to the observations. The seasonal cycle of coastal precipitation (here the difference between precipitation near the coast and inland) is well linked to the land-sea temperature contrast. Yet, this is not a one-to-one relation and other effects (like atmospheric stability) are likely to be important as well. Variations from year to year in monthly coastal precipitation correlate well with variations in the strength of the cyclonic westerly circulation in the period August to November. A moderate sensitivity to sea surface temperatures (≈6 mm month⁻¹ K⁻¹) has also been found. In simulations of climate change a small increase in coastal precipitation is generally found, except for the spring period. Using a more realistic prescription of North Sea temperatures (from a slab ocean model) only marginally changes these results. The increase in coastal precipitation is tempered by a decrease in the land-sea temperature contrast, and considerably larger increases of coastal precipitation could be expected if the temperature rise over sea equalled the temperature rise over land. Despite the fact that the overall changes in the coastal effect appear relatively moderate, the impact on more extreme events could be considerable, increasing the probability of exceeding extreme thresholds by a factor of two or more. © 2013 Springer-Verlag Berlin Heidelberg.


Van Der A R.J.,KNMI | Allaart M.A.F.,KNMI | Eskes H.J.,KNMI
Atmospheric Chemistry and Physics | Year: 2010

A single coherent total ozone dataset, called the Multi Sensor Reanalysis (MSR), has been created from all available ozone column data measured by polar orbiting satellites in the near-ultraviolet Huggins band in the last thirty years. Fourteen total ozone satellite retrieval datasets from the instruments TOMS (on the satellites Nimbus-7 and Earth Probe), SBUV (Nimbus-7, NOAA-9, NOAA-11 and NOAA-16), GOME (ERS-2), SCIAMACHY (Envisat), OMI (EOS-Aura), and GOME-2 (Metop-A) have been used in the MSR. As a first step a bias correction scheme is applied to all satellite observations, based on independent ground-based total ozone data from the World Ozone and Ultraviolet Data Center. The correction is a function of solar zenith angle, viewing angle, time (trend), and effective ozone temperature. As a second step data assimilation was applied to create a global dataset of total ozone analyses. The data assimilation method is a sub-optimal implementation of the Kalman filter technique, and is based on a chemical transport model driven by ECMWF meteorological fields. The chemical transport model provides a detailed description of (stratospheric) transport and uses parameterisations for gas-phase and ozone hole chemistry. The MSR dataset results from a 30-year data assimilation run with the 14 corrected satellite datasets as input, and is available on a grid of 1 × 1.5° with a sample frequency of 6 h for the complete time period (1978-2008). The Observation-minus-Analysis (OmA) statistics show that the bias of the MSR analyses is less than 1% with an RMS standard deviation of about 2% as compared to the corrected satellite observations used. © 2010 Author(s).
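As a schematic illustration of the analysis step underlying such a (sub-optimal) Kalman-filter assimilation, the scalar update below blends a model background ozone column with a bias-corrected satellite observation according to their error variances. This is a textbook single-gridpoint sketch with invented numbers, not the MSR system itself.

```python
# Scalar Kalman-filter analysis update for a single gridpoint ozone column.
# Textbook illustration only; the MSR uses a full (sub-optimal) Kalman filter
# around a chemical transport model.
x_background = 305.0      # model background total ozone [DU] (invented)
sigma_b = 8.0             # background error std. dev. [DU] (invented)

y_observation = 298.0     # bias-corrected satellite column [DU] (invented)
sigma_o = 5.0             # observation error std. dev. [DU] (invented)

gain = sigma_b**2 / (sigma_b**2 + sigma_o**2)          # Kalman gain (observation operator H = 1)
x_analysis = x_background + gain * (y_observation - x_background)
sigma_a = (1 - gain) ** 0.5 * sigma_b                  # analysis error std. dev.

print(f"gain = {gain:.2f}, analysis = {x_analysis:.1f} DU, sigma_a = {sigma_a:.1f} DU")
```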
