
News Article | March 2, 2017
Site: www.realclimate.org

Gavin has already discussed John Christy’s misleading graph earlier in 2016, but since the end of 2016 there has been a surge of interest in this graph in Norway among people who try to diminish the role of anthropogenic global warming. I think this graph warrants some extra comments in addition to Gavin’s points, because it is flawed on more counts than those he has already discussed. In fact, those using this graph to judge climate models reveal an elementary lack of understanding of climate data.

Different types of numbers

The upper left panel in Fig. 1 shows that Christy compared the average of 102 climate model simulations with temperature from satellite measurements (the average of three different analyses) and weather balloons (the average of two analyses). This is a flawed comparison because it compares a statistical parameter with a variable. A parameter, such as the mean (also referred to as the ‘average’) or the standard deviation, describes the statistical distribution of a given variable. However, such parameters are not equivalent to the variable they describe. The comparison between the average of model runs and observations is surprising because it is clearly incorrect from elementary statistics (a similar statistical confusion to the flaw found in Douglass et al. (2007)). I can illustrate this with an example: Fig. 2 shows 108 different model simulations of the global mean temperature (from the CMIP5 experiment). The thick black line shows the average of all the model runs (the ‘multi-model ensemble’). None of the individual runs (coloured thin curves) match the mean (thick black curve), and if I were to use the same logic as Christy, I could incorrectly claim that the average is inconsistent with the individual runs because of their different characters. But the average is based on all these individual runs, so this type of logic is obviously flawed.
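The point that an ensemble mean is a different kind of object from any single realisation can be sketched with synthetic data (all values invented for illustration, not actual CMIP5 output):

```python
import numpy as np

rng = np.random.default_rng(0)

# 108 synthetic "runs": a shared 0.02 K/yr forced signal plus
# independent internal variability for each run (values invented).
n_runs, n_years = 108, 40
signal = 0.02 * np.arange(n_years)
runs = signal + rng.normal(0.0, 0.15, size=(n_runs, n_years))

# The ensemble mean is a statistical parameter computed FROM the runs;
# it is smoother than any single run and matches none of them.
ensemble_mean = runs.mean(axis=0)
rms_diff = np.sqrt(((runs - ensemble_mean) ** 2).mean(axis=1))

print(rms_diff.min() > 0.05)   # every run deviates from the mean
```

Judging the runs against their own mean, Christy-style, would "reject" every single one of them, which is exactly the logical flaw described above.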
To be fair, the observations shown in Christy’s graph were also based on averages, although of a small set of analyses. This does not improve the case, because all the satellite data are based on the same measurements and differ only in terms of synthesis and choices made in the analyses (they are highly correlated, as we will see later on). By the way, one of the curves shown in Fig. 2 is observations. Can you see which? Eyeballing such curves, however, is not the proper way to compare different data, and there are numerous statistical tests to do so properly.

Different physical aspects

Christy compared temperatures estimated for the troposphere (satellites and balloons) with temperatures computed by global climate models. This is a fact because the data portal where he obtained the model results was the KNMI ClimateExplorer. ClimateExplorer does not hold upper-air temperature stored as 3D fields (I checked this with Geert Jan van Oldenborgh). (Correction: ‘taz’ is zonally integrated temperature as a function of height, but it does not take into account the differences between land and sea. Nevertheless, this variable still does not correspond closely with those measured from satellites.) A proper comparison between the satellite temperature and the model results needs to estimate a weighted average of the temperature over the troposphere and lower stratosphere with an appropriate altitude-dependent weighting. The difference between the near-surface and tropospheric temperature matters, as the stratosphere has cooled in contrast to the warming surface.

Temperatures from satellites are also model results

It is fair to compare the satellite record with model results to explore uncertainties, but the satellite data are not the ground truth and cannot be used to invalidate the models. The microwave sounding unit (MSU), the instrument used to measure the temperature, measures light in certain wavelength bands emitted by oxygen molecules.
An algorithm is then used to compute the air temperature consistent with the measured irradiance. This algorithm is a model based on the same physics as the models which predict that higher concentrations of CO2 result in higher surface temperatures. I wonder if Christy sees the irony in his use of satellite temperatures to dispute the effect of CO2 on the global mean temperature. It is nevertheless reassuring to see a good match between the balloon and satellite data, which suggests that the representation of the physics in both the satellite retrieval algorithm and the climate models is more or less correct.

How to compare the models with observations

The two graphs (courtesy of Gavin) below show comparisons between tropospheric mean temperatures (TMT) that are comparable to the satellite data and include confidence intervals for the ensemble rather than just the ensemble mean. This type of comparison is more consistent with standard statistical tests such as Student’s t-test. The graphs also show several satellite-based analyses: Remote Sensing Systems (RSS; different versions), the University of Alabama in Huntsville (UAH; different versions), and NOAA (STAR). All these curves are so similar (highly correlated) that taking the average doesn’t make much difference. According to Fig. 3, the tropospheric temperature simulated by the global climate models (from the CMIP5 experiment) increased slightly faster than the temperatures derived from the satellite measurements between 2000 and 2015, but they were not very different. The RSS temperatures gave the closest match with the global climate models. Fig. 4 shows a trend analysis for the 1979-2016 interval, where the satellite-based temperature trends are shown with appropriate error bars. The trends from the satellite analyses and the model results overlap if the confidence limits are taken into consideration.
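A trend comparison with confidence limits, in the spirit of Fig. 4, can be sketched as follows; the two series are synthetic stand-ins (invented trends and noise levels), not the actual model or satellite data:

```python
import numpy as np

def trend_with_ci(years, temps, z=1.96):
    """OLS trend per decade with an approximate 95% half-width."""
    x = years - years.mean()
    slope = np.sum(x * (temps - temps.mean())) / np.sum(x ** 2)
    resid = temps - temps.mean() - slope * x
    se = np.sqrt(np.sum(resid ** 2) / (x.size - 2) / np.sum(x ** 2))
    return 10.0 * slope, 10.0 * z * se

# Synthetic stand-ins for a model run and a satellite analysis over
# 1979-2016 (slightly different trends, same noise level).
years = np.arange(1979, 2017).astype(float)
rng = np.random.default_rng(1)
model = 0.020 * (years - 1979) + rng.normal(0.0, 0.1, years.size)
obs = 0.015 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

trend_m, half_m = trend_with_ci(years, model)
trend_o, half_o = trend_with_ci(years, obs)

# The two trends are consistent when their confidence intervals overlap.
print(round(trend_m, 3), "+/-", round(half_m, 3))
print(round(trend_o, 3), "+/-", round(half_o, 3))
```

Comparing the intervals rather than the bare point estimates is what makes the comparison statistically meaningful.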
The story behind the upper tropospheric warming

The biggest weight in the tropospheric temperature trends comes from the tropics, because the tropics account for the largest volume (half of the Earth’s surface area lies between 30°S and 30°N due to its geometric shape), and the trends are therefore sensitive to conditions around the equator. This is also where large-scale convection takes place that produces bands of high clouds (the Inter-Tropical Convergence Zone, ITCZ). Cloud formation through convection and condensation is associated with the release of latent heat and influences the temperatures (e.g. Vecchi et al., 2006). It is part of the hydrological cycle, and a slow change in the atmospheric overturning, moisture and circulation patterns is expected to have a bearing on the global tropospheric temperature trend estimates. This means that the picture is complex when it comes to the global tropospheric temperature trends, because many physical processes that take place on a wide range of spatial scales have an influence.

Hard evidence of misrepresentation

Despite the complicated nature of tropospheric temperatures, it is an indisputable fact that Christy’s graph presents numbers with different meanings as if they were equivalent. It is really surprising to see such a basic misrepresentation in a testimony to the U.S. House Committee on Science, Space & Technology. One of the most elementary parts of science is to know what the numbers really represent and how they should be interpreted.
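The claim that half of the Earth’s surface lies between 30°S and 30°N follows directly from the sphere’s geometry: the area between two latitudes is proportional to the difference of their sines.

```python
import numpy as np

def area_fraction(lat_south, lat_north):
    """Fraction of a sphere's surface between two latitudes (degrees).

    On a sphere, the area of a latitude band is proportional to the
    difference of the sines of its bounding latitudes."""
    return (np.sin(np.radians(lat_north))
            - np.sin(np.radians(lat_south))) / 2.0

print(round(area_fraction(-30.0, 30.0), 6))   # prints 0.5
```

Since sin(30°) = 0.5, the 30°S-30°N band covers exactly half the surface, which is why tropical conditions dominate area-weighted tropospheric averages.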

Van Der Schrier G.,KNMI | Van Oldenborgh G.J.,KNMI
Climate of the Past | Year: 2011

The Central Netherlands Temperature (CNT) is a monthly daily mean temperature series constructed from homogenized time series from the centre of the Netherlands. The purpose of this series is to offer a homogeneous time series representative of a larger area in order to study large-scale temperature changes. It will also facilitate a comparison with climate models, which resolve similar scales. From 1906 onwards, temperature measurements in the Netherlands have been sufficiently standardized to construct a high-quality series. Long time series have been constructed by merging nearby stations and using the overlap to calibrate the differences. These long time series and a few time series of only a few decades in length have been subjected to a homogeneity analysis in which significant breaks and artificial trends have been corrected. Many of the detected breaks correspond to changes in the observations that are documented in the station metadata. This version of the CNT, to which we attach the version number 1.1, is constructed as the unweighted average of four stations (De Bilt, Winterswijk/Hupsel, Oudenbosch/Gilze-Rijen and Gemert/Volkel), with the stations Eindhoven and Deelen added from 1951 and 1958 onwards, respectively. The global gridded datasets used for detecting and attributing climate change are based on raw observational data. Although some homogeneity adjustments are made, these are not based on knowledge of local circumstances but only on statistical evidence. Despite this handicap, and the fact that these datasets use grid boxes that are far larger than the area associated with the Central Netherlands Temperature, the temperature interpolated to the CNT region shows a warming trend that is broadly consistent with the CNT trend in all of these datasets. The actual trends differ from the CNT trend by up to 30%, which highlights the need to base future global gridded temperature datasets on homogenized time series. © Author(s) 2011.
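The merging-by-overlap step mentioned in the abstract can be sketched as follows; the station values, the 1950-1955 overlap window, and the +0.3 °C site bias are all invented for illustration, not taken from the CNT:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented annual temperatures for an old station (1906-1955) and its
# successor (1950-2000), which reads 0.3 degC warmer due to its site.
years_a = np.arange(1906, 1956)
years_b = np.arange(1950, 2001)
truth = 9.0 + 0.01 * (np.arange(1906, 2001) - 1906)
temp_a = truth[:50] + rng.normal(0.0, 0.2, 50)
temp_b = truth[44:] + 0.3 + rng.normal(0.0, 0.2, 51)

# Use the overlap years to estimate the site difference, then merge
# the adjusted successor series onto the old one.
overlap = np.intersect1d(years_a, years_b)   # 1950-1955
bias = (temp_b[np.isin(years_b, overlap)].mean()
        - temp_a[np.isin(years_a, overlap)].mean())
merged = np.concatenate([temp_a, temp_b[overlap.size:] - bias])

print(merged.size)   # prints 95: one continuous 1906-2000 series
```

The real CNT homogenization additionally corrects breaks flagged by statistical tests and station metadata, but the overlap calibration above is the core of the merging idea.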

Katsman C.A.,KNMI | Van Oldenborgh G.J.,KNMI
Geophysical Research Letters | Year: 2011

Over the period 2003-2010, the upper ocean has not gained any heat, despite the general expectation that the ocean will absorb most of the Earth's current radiative imbalance. Answering to what extent this heat was transferred to other components of the climate system, and by what process(es), gets to the essence of understanding climate change. Direct heat flux observations are too inaccurate to assess such exchanges. In this study we therefore trace these heat budget variations by analyzing an ensemble of climate model simulations. The analysis reveals that an 8-yr period without upper ocean warming is not exceptional. It is explained by increased radiation to space (45%), largely as a result of El Niño variability on decadal timescales, and by increased ocean warming at larger depths (35%), partly due to a decrease in the strength of the Atlantic meridional overturning circulation. Recently observed changes in these two large-scale modes of climate variability point to an upcoming resumption of the upward trend in upper ocean heat content. Copyright 2011 by the American Geophysical Union.

Lin J.-T.,Harvard University | McElroy M.B.,Harvard University | Boersma K.F.,KNMI
Atmospheric Chemistry and Physics | Year: 2010

A new methodology is developed to constrain Chinese anthropogenic emissions of nitrogen oxides (NOx) from four major sectors (industry, power plants, mobile and residential) in July 2008. It combines tropospheric NO2 column retrievals from GOME-2 and OMI, taking advantage of their different passing times over China (∼10:00 a.m. LT (local time) versus ∼02:00 p.m.) and consistent retrieval algorithms. The approach is based on the difference of NOx columns at the overpass times of the two instruments; it is thus less susceptible to systematic errors embedded in the individual retrievals, insofar as these errors are consistent with each other. Also, it explicitly accounts for diurnal variations and uncertainties of NOx emissions for individual sources. Our best top-down estimate suggests a national budget of 6.8 TgN/yr (5.5 TgN/yr for East China), close to the a priori bottom-up emission estimate from the INTEX-B mission for the year 2006. The top-down emissions are lower than the a priori near Beijing, in the northeastern provinces and along the east coast; yet they exceed the a priori over many inland regions. Systematic errors in satellite retrievals are estimated to lead to underestimation of top-down emissions by at most 17% (most likely 10%). Effects of other factors on the top-down estimate are typically less than 15% each, including lightning, soil emissions, mixing in the planetary boundary layer, anthropogenic emissions of carbon monoxide and volatile organic compounds, the magnitude of the a priori emissions, assumptions on emission diurnal variations, and uncertainties in the four sectors. The a posteriori emission budget is 5.7 TgN/yr for East China.
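Why the difference of the two overpasses is less susceptible to retrieval errors that are consistent between the instruments can be seen in a minimal sketch (hypothetical column values, and a shared additive error assumed for illustration):

```python
# Hypothetical NO2 columns (in units of 1e15 molecules/cm^2) at the
# two overpass times, plus a systematic error shared by both retrievals.
true_10am, true_2pm = 8.0, 6.5
common_bias = 1.2

gome2 = true_10am + common_bias   # ~10:00 LT retrieval
omi = true_2pm + common_bias      # ~14:00 LT retrieval

# Differencing removes the shared error and isolates the diurnal
# change that carries the emission information.
diurnal_change = omi - gome2
print(round(diurnal_change, 6))   # prints -1.5, the true diurnal change
```

Any error component that differs between the two retrievals would of course survive the differencing, which is why consistent retrieval algorithms matter for this approach.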

De Haan S.,KNMI
Atmospheric Measurement Techniques | Year: 2016

Information on the accuracy of meteorological observations is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is by comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from 2 m temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple-collocation method
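The triple-collocation method referred to here can be sketched with synthetic data: given three collocated estimates of the same quantity with independent, zero-mean errors, the truth cancels in the pairwise differences and each dataset's error variance can be recovered (the error magnitudes below are invented):

```python
import numpy as np

# Three synthetic estimates of the same truth (e.g. an observation, a
# model equivalent, and a third independent source), with independent
# errors of (invented) standard deviations 0.5, 1.0 and 1.5.
rng = np.random.default_rng(3)
n = 200_000
truth = rng.normal(15.0, 3.0, n)
x1 = truth + rng.normal(0.0, 0.5, n)
x2 = truth + rng.normal(0.0, 1.0, n)
x3 = truth + rng.normal(0.0, 1.5, n)

# With independent zero-mean errors, the covariance of the pairwise
# differences isolates each dataset's own error variance.
var1 = np.mean((x1 - x2) * (x1 - x3))
var2 = np.mean((x2 - x1) * (x2 - x3))
var3 = np.mean((x3 - x1) * (x3 - x2))

print(np.round(np.sqrt([var1, var2, var3]), 1))
```

The recovered error standard deviations are close to the 0.5, 1.0 and 1.5 that went in, even though the truth itself was never used; that is what makes the method attractive when no ground truth exists.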

De Haan S.,KNMI | Stoffelen A.,KNMI
Weather and Forecasting | Year: 2012

In this paper the beneficial impacts of high-resolution (in space and time) wind and temperature observations from aircraft on very short-range numerical weather forecasting are presented. The observations are retrieved using the tracking and ranging radar from the air traffic control facility at Schiphol Airport, Amsterdam, the Netherlands. This enhanced surveillance radar tracks all aircraft in sight every 4 s, generating one million wind and temperature observations per day in a radius of 270 km around the radar. Nowcasting applications will benefit from improved three-dimensional wind fields. When these observations are assimilated into a numerical model with an hourly update cycle, the short-range three-dimensional wind field forecasts match the observations better than those from an operational forecast cycle, which is updated every 3 h. The positive impact on wind in the first hours of the forecast gradually turns into a neutral impact, when compared to other wind and temperature observations. The timeliness of the forecasts combined with the high resolution of the observations are the main reasons for the observed nowcasting benefits. All in all, the assimilation of high-resolution wind (and temperature) observations is found to be beneficial for nowcasting and short-range forecasts up to 2-3 h. © 2012 American Meteorological Society.

Bakker A.M.R.,KNMI | van den Hurk B.J.J.M.,KNMI
Climate Dynamics | Year: 2012

Wind climate in Northwest Europe is subject to long-term persistence (LTP), also called the Hurst phenomenon. Ignoring LTP causes underestimation of climatic variability. The quantification of multi-year variability is important for the assessment of the uncertainty of future multi-year wind yields. Estimating LTP requires long homogeneous time series. Such series of wind observations are rare, but annual mean geostrophic wind speed (U) can be used instead. This study demonstrates a method to estimate the 10-year aggregated mean U for the near and the far future, and its uncertainty, in Northwest Europe. Time series of U were derived from daily sea level pressure from the European Climate Assessment Dataset. Minor inhomogeneities cannot be ruled out, but were shown to hardly affect the estimated Hurst exponent (Ĥ). A maximum likelihood method was adjusted to remove the biases in Ĥ. The geostrophic wind speeds over the North Sea, the British Isles and along the Scandinavian coast are characterised by statistically significant Ĥ between 0.58 and 0.74 (H = 0.5 implies no LTP). The additional effect of the parameter uncertainty is estimated in a Bayesian way and is highly dependent on the record length. The assessment of structural changes in future wind fields requires general circulation models. An ensemble of seventeen simulations (ESSENCE) with one single climate model (ECHAM5/MPI-OM) was used to evaluate structural trends and LTP. The estimated Ĥ in the ESSENCE simulations are generally close to 0.5 and not significant. Significant trends in U are found over large parts of the investigated domain, but the trends are small compared to the multi-year variability. Large decreasing trends are found in the vicinity of Iceland and increasing trends near the Greenland coast. This is likely related to the sea ice retreat within the ESSENCE simulations and the associated change in surface temperature gradients. © 2011 The Author(s).
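One common way to estimate the Hurst exponent is the aggregated-variance method, sketched below on white noise, for which H ≈ 0.5 (no LTP); the paper's bias-adjusted maximum-likelihood estimator is more sophisticated, and the data here are synthetic rather than geostrophic wind series:

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the Hurst exponent with the aggregated-variance method.

    For a series with long-term persistence, the variance of block
    means scales as m**(2H - 2); H = 0.5 means no persistence."""
    log_m, log_v = [], []
    for m in block_sizes:
        n_blocks = x.size // m
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_v.append(np.log(means.var()))
    slope = np.polyfit(log_m, log_v, 1)[0]
    return 1.0 + slope / 2.0

# White noise has no LTP, so the estimate should be near 0.5.
rng = np.random.default_rng(4)
h = hurst_aggvar(rng.normal(size=100_000))
print(0.4 < h < 0.6)   # prints True
```

A persistent series (H > 0.5) would show block-mean variances decaying more slowly than 1/m, which is exactly the extra multi-year variability that is underestimated when LTP is ignored.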

Attema J.J.,KNMI | Lenderink G.,KNMI
Climate Dynamics | Year: 2014

Coastal precipitation in the Netherlands for the present-day and future climate is studied using observations and simulations with the KNMI regional climate model RACMO2. Overall, a good correspondence between RACMO2 and the observations is found for the present-day climate. Yet the model generally simulates too-sharp gradients in precipitation amounts near the coastline, and the maximum in precipitation is shifted towards the sea compared to the observations. The seasonal cycle of coastal precipitation (here the difference between precipitation near the coast and inland) is well linked to the land-sea temperature contrast. Yet this is not a one-to-one relation, and other effects (like atmospheric stability) are likely to be important as well. Variations from year to year in monthly coastal precipitation correlate well with variations in the strength of the cyclonic westerly circulation in the period August to November. A moderate sensitivity to sea surface temperatures (≈6 mm month-1 K-1) has also been found. In simulations of climate change a small increase in coastal precipitation is generally found, except for the spring period. Using a more realistic prescription of North Sea temperatures (from a slab ocean model) only marginally changes these results. The increase in coastal precipitation is tempered by a decrease in the land-sea temperature contrast, and considerably larger increases of coastal precipitation could be expected if the temperature rise over sea equalled the temperature rise over land. Although the overall changes in the coastal effect appear relatively moderate, the impact on more extreme events could be considerable, increasing the probability of exceeding extreme thresholds by a factor of two or more. © 2013 Springer-Verlag Berlin Heidelberg.

Van Der A R.J.,KNMI | Allaart M.A.F.,KNMI | Eskes H.J.,KNMI
Atmospheric Chemistry and Physics | Year: 2010

A single coherent total ozone dataset, called the Multi Sensor Reanalysis (MSR), has been created from all available ozone column data measured by polar orbiting satellites in the near-ultraviolet Huggins band in the last thirty years. Fourteen total ozone satellite retrieval datasets from the instruments TOMS (on the satellites Nimbus-7 and Earth Probe), SBUV (Nimbus-7, NOAA-9, NOAA-11 and NOAA-16), GOME (ERS-2), SCIAMACHY (Envisat), OMI (EOS-Aura), and GOME-2 (Metop-A) have been used in the MSR. As a first step, a bias correction scheme is applied to all satellite observations, based on independent ground-based total ozone data from the World Ozone and Ultraviolet Data Center. The correction is a function of solar zenith angle, viewing angle, time (trend), and effective ozone temperature. As a second step, data assimilation was applied to create a global dataset of total ozone analyses. The data assimilation method is a sub-optimal implementation of the Kalman filter technique, and is based on a chemical transport model driven by ECMWF meteorological fields. The chemical transport model provides a detailed description of (stratospheric) transport and uses parameterisations for gas-phase and ozone hole chemistry. The MSR dataset results from a 30-year data assimilation run with the 14 corrected satellite datasets as input, and is available on a grid of 1° × 1.5° with a sample frequency of 6 h for the complete time period (1978-2008). The Observation-minus-Analysis (OmA) statistics show that the bias of the MSR analyses is less than 1%, with an RMS standard deviation of about 2%, as compared to the corrected satellite observations used. © 2010 Author(s).
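The core of any Kalman-type assimilation step is an inverse-variance-weighted blend of a model background with a (bias-corrected) observation. A scalar sketch with illustrative numbers, not the MSR implementation:

```python
def kalman_update(x_b, var_b, y, var_o):
    """Scalar Kalman update: blend background x_b (variance var_b)
    with observation y (variance var_o) by inverse-variance weighting."""
    k = var_b / (var_b + var_o)    # Kalman gain: trust in the observation
    x_a = x_b + k * (y - x_b)      # analysis, pulled toward the observation
    var_a = (1.0 - k) * var_b      # analysis variance, reduced by the update
    return x_a, var_a

# Hypothetical total ozone values (Dobson units): an uncertain model
# background of 300 DU and a more accurate observation of 310 DU.
x_a, var_a = kalman_update(x_b=300.0, var_b=100.0, y=310.0, var_o=25.0)
print(round(x_a, 3), round(var_a, 3))   # prints 308.0 20.0
```

Because the observation variance (25) is much smaller than the background variance (100), the analysis lands close to the observation, and its variance is smaller than either input; repeating such updates along the model trajectory is what produces the Observation-minus-Analysis statistics quoted above.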