Imbers J.,University of Oxford | Lopez A.,Center for the Analysis of Time Series | Huntingford C.,UK Center for Ecology and Hydrology | Allen M.R.,University of Oxford
Journal of Geophysical Research: Atmospheres | Year: 2013

This paper aims to test the robustness of the detection and attribution of anthropogenic climate change using four different empirical models that were previously developed to explain the observed global mean temperature changes over the last few decades. These studies postulated that the main drivers of these changes included not only the usual natural forcings, such as solar and volcanic, and anthropogenic forcings, such as greenhouse gases and sulfates, but also other known Earth system oscillations such as the El Niño-Southern Oscillation (ENSO) or the Atlantic Multidecadal Oscillation (AMO). In this paper, we consider these signals, or forced responses, and test whether or not the anthropogenic signal can be robustly detected under different assumptions for the internal variability of the climate system. We assume that the internal variability of the global mean surface temperature can be described by simple stochastic models that explore a wide range of plausible temporal autocorrelations, ranging from short-memory processes, exemplified by an AR(1) model, to long-memory processes, represented by a fractionally differenced model. In all instances, we conclude that human-induced changes to atmospheric gas composition are affecting global mean surface temperature changes. ©2013. American Geophysical Union. All Rights Reserved.
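The two ends of the autocorrelation range described above can be sketched in a few lines of NumPy. This is an illustrative simulation, not the paper's code: the AR(1) coefficient, the fractional-differencing parameter `d`, and the series lengths are arbitrary choices, and the fractionally differenced series is built from a truncated moving-average expansion.

```python
import numpy as np

def ar1(n, phi, rng):
    """Short-memory AR(1): autocorrelation decays exponentially as phi**k."""
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def frac_diff(n, d, rng):
    """Long-memory fractionally differenced noise (ARFIMA(0,d,0)),
    built as a truncated MA expansion with psi_k = psi_{k-1}*(k-1+d)/k."""
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(n)
    return np.convolve(eps, psi)[:n]

def acf(x, lag):
    """Sample autocorrelation at a given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(0)
short = ar1(10_000, 0.5, rng)
long_ = frac_diff(10_000, 0.3, rng)
# The AR(1) autocorrelation is negligible by lag 20 (0.5**20 ~ 1e-6),
# while the fractionally differenced series decays only hyperbolically
# and is still visibly correlated at the same lag.
print(acf(short, 20), acf(long_, 20))
```

The contrast at moderate lags is the whole point of testing detection under both noise models: long memory leaves much more room for slow excursions that could be mistaken for a trend.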

Siegert S.,Max Planck Institute for the Physics of Complex Systems | Brocker J.,Max Planck Institute for the Physics of Complex Systems | Brocker J.,Center for the Analysis of Time Series | Kantz H.,Max Planck Institute for the Physics of Complex Systems
Quarterly Journal of the Royal Meteorological Society | Year: 2011

An ensemble forecast is a collection of runs of a numerical dynamical model, initialized with perturbed initial conditions. In modern weather prediction for example, ensembles are used to retrieve probabilistic information about future weather conditions. In this contribution, we are concerned with ensemble forecasts of a scalar quantity (say, the temperature at a specific location). We consider the event that the verification is smaller than the smallest, or larger than the largest ensemble member. We call these events outliers. If a K-member ensemble accurately reflected the variability of the verification, outliers should occur with a base rate of 2/(K + 1). In operational forecast ensembles though, this frequency is often found to be higher. We study the predictability of outliers and find that, exploiting information available from the ensemble, forecast probabilities for outlier events can be calculated which are more skilful than the unconditional base rate. We prove this analytically for statistically consistent forecast ensembles. Further, the analytical results are compared to the predictability of outliers in an operational forecast ensemble by means of model output statistics. We find the analytical and empirical results to agree both qualitatively and quantitatively. © 2011 Royal Meteorological Society.
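The 2/(K + 1) base rate follows from exchangeability: for a statistically consistent ensemble, the verification is equally likely to occupy any of the K + 1 ranks among members plus verification, and two of those ranks (smallest and largest) are outlier events. A short Monte Carlo check, using standard normal draws merely as a stand-in for any common forecast distribution:

```python
import numpy as np

def outlier_rate(K, trials, rng):
    """Monte Carlo estimate of the outlier frequency for a statistically
    consistent ensemble: verification and the K members are exchangeable
    draws from the same distribution."""
    members = rng.standard_normal((trials, K))  # one row per forecast case
    verif = rng.standard_normal(trials)
    below = verif < members.min(axis=1)
    above = verif > members.max(axis=1)
    return (below | above).mean()

rng = np.random.default_rng(42)
K = 20
rate = outlier_rate(K, 200_000, rng)
print(f"empirical {rate:.4f} vs base rate {2 / (K + 1):.4f}")
```

Operational ensembles exceed this rate precisely because they are not statistically consistent with the verification, which is what makes outlier occurrence partially predictable from the ensemble itself.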

Imbers J.,University of Oxford | Lopez A.,Center for the Analysis of Time Series | Huntingford C.,UK Center for Ecology and Hydrology | Allen M.,University of Oxford
Journal of Climate | Year: 2014

The Intergovernmental Panel on Climate Change's (IPCC) "very likely" statement that anthropogenic emissions are affecting climate is based on a statistical detection and attribution methodology that strongly depends on the characterization of internal climate variability. In this paper, the authors test the robustness of this statement in the case of global mean surface air temperature, under different representations of such variability. The contributions of the different natural and anthropogenic forcings to the global mean surface air temperature response are computed using a box diffusion model. Representations of internal climate variability are explored using simple stochastic models that nevertheless span a representative range of plausible temporal autocorrelation structures, including the short-memory first-order autoregressive [AR(1)] process and the long-memory fractionally differenced process. The authors find that, independently of the representation chosen, the greenhouse gas signal remains statistically significant under the detection model employed in this paper. The results support the robustness of the IPCC detection and attribution statement for global mean temperature change under different characterizations of internal variability, but they also suggest that a wider variety of robustness tests, other than simple comparisons of residual variance, should be performed when dealing with other climate variables and/or different spatial scales. © 2014 American Meteorological Society.

Beven K.,Lancaster University | Beven K.,Uppsala University | Beven K.,Center for the Analysis of Time Series | Smith P.J.,Lancaster University | Wood A.,Lancaster University
Hydrology and Earth System Sciences | Year: 2011

Disinformation as a result of epistemic error is an issue in hydrological modelling. In particular, the way in which the colour in model residuals resulting from epistemic errors should be expected to be non-stationary means that it is difficult to justify the spin that the structure of residuals can be properly represented by statistical likelihood functions. To do so would be to greatly overestimate the information content in a set of calibration data and increase the possibility of both Type I and Type II errors. Some principles for trying to identify periods of disinformative data prior to evaluation of a model structure of interest are discussed. An example demonstrates the effect on the estimated parameter values of a hydrological model. © Author(s) 2011.
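One simple screen for potentially disinformative periods, in the spirit of the argument above (the numbers below are invented for illustration): an event whose runoff total exceeds its rainfall total has a runoff coefficient above 1, which is physically inconsistent and flags the underlying data, rather than the model, as the problem.

```python
import numpy as np

# Hypothetical per-event totals (mm): rainfall and the discharge
# attributed to that event.
rain   = np.array([42.0, 18.0, 55.0, 30.0, 12.0, 60.0])
runoff = np.array([20.0,  9.0, 61.0, 14.0,  1.0, 28.0])

coeff = runoff / rain
# A runoff coefficient above 1 (more event runoff than rainfall) cannot
# be simulated by any mass-conserving model, so calibrating against that
# event would be fitting to disinformation.
suspect = coeff > 1.0
print(list(zip(coeff.round(2), suspect)))
```

Screens like this are deliberately applied before model evaluation, so that a model structure is not rejected (or accepted) on the basis of events the data cannot support.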

Beven K.,Lancaster University | Beven K.,Uppsala University | Beven K.,Center for the Analysis of Time Series | Beven K.,Ecole Polytechnique Federale de Lausanne
Comptes Rendus - Geoscience | Year: 2012

The environmental modeller faces a dilemma. Science often demands that more and more process representations are incorporated into models (particularly to avoid the possibility of making missing process errors in predicting future response). Testing the causal representations in environmental models (as multiple working hypotheses about the functioning of environmental systems) then depends on specifying boundary conditions and model parameters adequately. This will always be difficult in applications to a real system because of the heterogeneities, non-stationarities, complexities and epistemic uncertainties inherent in environmental prediction. Thus, it can be difficult to define the information content of a data set used in model evaluation and any consequent measures of belief or verisimilitude. A limit of acceptability approach to model evaluation is suggested as a way of testing models, implying that thought is required to define critical experiments that will allow models as hypotheses to be adequately differentiated. © 2012.
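A limits-of-acceptability evaluation can be sketched in a few lines. This toy version (invented model, data, and limits) keeps a candidate parameter set only if its entire simulated series stays within observation-error bounds around the observations:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical observed series with stated observational error bounds.
n = 100
x = np.linspace(0, 6, n)
obs = np.sin(x) + 0.05 * rng.standard_normal(n)
lower, upper = obs - 0.25, obs + 0.25   # limits of acceptability

def model(a, b):
    """Toy two-parameter model standing in for an environmental simulator."""
    return a * np.sin(x) + b

# Sample candidate parameter sets; a candidate is behavioural only if its
# whole simulated series lies inside the limits at every time step.
candidates = rng.uniform([-2.0, -1.0], [2.0, 1.0], size=(5000, 2))
behavioural = []
for a, b in candidates:
    sim = model(a, b)
    if np.all((sim >= lower) & (sim <= upper)):
        behavioural.append((a, b))

print(len(behavioural), "of", len(candidates), "candidates accepted")
```

The point of the approach is visible even in this sketch: the limits come from what is known about the data, so rejection is a statement about the model as hypothesis, and a critical experiment is one whose limits actually separate competing models.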
