Center for the Analysis of Time Series

Bodle Street, United Kingdom

Imbers J.,University of Oxford | Lopez A.,Center for the Analysis of Time Series | Huntingford C.,UK Center for Ecology and Hydrology | Allen M.R.,University of Oxford
Journal of Geophysical Research: Atmospheres | Year: 2013

This paper aims to test the robustness of the detection and attribution of anthropogenic climate change using four different empirical models that were previously developed to explain the observed global mean temperature changes over the last few decades. These studies postulated that the main drivers of these changes included not only the usual natural forcings, such as solar and volcanic, and anthropogenic forcings, such as greenhouse gases and sulfates, but also other known Earth system oscillations such as El Niño Southern Oscillation (ENSO) or the Atlantic Multidecadal Oscillation (AMO). In this paper, we consider these signals, or forced responses, and test whether or not the anthropogenic signal can be robustly detected under different assumptions for the internal variability of the climate system. We assume that the internal variability of the global mean surface temperature can be described by simple stochastic models that explore a wide range of plausible temporal autocorrelations, ranging from short memory processes exemplified by an AR(1) model to long memory processes, represented by a fractional differenced model. In all instances, we conclude that human-induced changes to atmospheric gas composition are affecting global mean surface temperature changes. ©2013. American Geophysical Union. All Rights Reserved.
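The two internal-variability models named in the abstract can be illustrated with a minimal sketch (not the authors' code): a short-memory AR(1) process and a long-memory fractionally differenced process. The parameter values, series length, and autocorrelation comparison below are illustrative assumptions, not values from the paper.

```python
# Sketch: simulate the two noise models spanning short- and long-memory behaviour.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

def ar1(phi, sigma=1.0):
    """Simulate x_t = phi * x_{t-1} + eps_t (short memory)."""
    x = np.zeros(n)
    eps = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + eps[t]
    return x

def frac_diff(d, sigma=1.0):
    """Simulate (1 - B)^d x_t = eps_t via its MA(inf) expansion,
    psi_k = psi_{k-1} * (k - 1 + d) / k, truncated at n terms (long memory)."""
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.normal(0.0, sigma, n)
    return np.convolve(eps, psi)[:n]

short_memory = ar1(phi=0.6)        # autocorrelation decays geometrically
long_memory = frac_diff(d=0.4)     # autocorrelation decays as a power law

def acf(x, lags=50):
    """Sample autocorrelation at lags 1..lags."""
    x = x - x.mean()
    return np.array([np.dot(x[:-k], x[k:]) / np.dot(x, x) for k in range(1, lags + 1)])

print(acf(short_memory, 5))
print(acf(long_memory, 5))
```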


Suckling E.B.,Center for the Analysis of Time Series | Smith L.A.,Center for the Analysis of Time Series
Journal of Climate | Year: 2013

While state-of-the-art models of Earth's climate system have improved tremendously over the last 20 years, nontrivial structural flaws still hinder their ability to forecast the decadal dynamics of the Earth system realistically. Contrasting the skill of these models not only with each other but also with empirical models can reveal the space and time scales on which simulation models exploit their physical basis effectively and quantify their ability to add information to operational forecasts. The skill of decadal probabilistic hindcasts for annual global-mean and regional-mean temperatures from the EU Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) project is contrasted with several empirical models. Both the ENSEMBLES models and a "dynamic climatology" empirical model show probabilistic skill above that of a static climatology for global-mean temperature. The dynamic climatology model, however, often outperforms the ENSEMBLES models. The fact that empirical models display skill similar to that of today's state-of-the-art simulation models suggests that empirical forecasts can improve decadal forecasts for climate services, just as in weather, medium-range, and seasonal forecasting. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast evaluations. Doing so would clarify the extent to which state-of-the-art simulation models provide information beyond that available from simpler empirical models and clarify current limitations in using simulation forecasting for decision support. Ultimately, the skill of simulation models based on physical principles is expected to surpass that of empirical models in a changing climate; their direct comparison provides information on progress toward that goal, which is not available in model-model intercomparisons. © 2013 American Meteorological Society.
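A minimal sketch of the kind of skill comparison described above, scoring a probabilistic hindcast against a static climatology with a proper score. The synthetic data, Gaussian forecast densities, and ensemble size are assumptions for illustration; the paper's kernel-dressing and "dynamic climatology" constructions are not reproduced here.

```python
# Sketch: ignorance-score skill of an ensemble hindcast relative to static climatology.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Synthetic stand-ins: 40 verification years, a 9-member ensemble per year.
truth = 0.02 * np.arange(40) + rng.normal(0, 0.15, 40)
ensemble = truth[:, None] + rng.normal(0, 0.2, (40, 9))
clim_mu, clim_sd = truth.mean(), truth.std(ddof=1)

def ignorance(mu, sd, obs):
    """Ignorance score: -log2 of the forecast density at the outcome."""
    return -np.log2(norm.pdf(obs, loc=mu, scale=sd))

ign_forecast = ignorance(ensemble.mean(axis=1), ensemble.std(axis=1, ddof=1), truth)
ign_climatology = ignorance(clim_mu, clim_sd, truth)

# Positive mean skill => the hindcast adds information beyond the static climatology.
print("relative skill (bits):", np.mean(ign_climatology - ign_forecast))
```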


Beven K.,Lancaster University | Beven K.,Uppsala University | Beven K.,Center for the Analysis of Time Series | Smith P.J.,Lancaster University | Wood A.,Lancaster University
Hydrology and Earth System Sciences | Year: 2011

Disinformation as a result of epistemic error is an issue in hydrological modelling. In particular the way in which the colour in model residuals resulting from epistemic errors should be expected to be non-stationary means that it is difficult to justify the spin that the structure of residuals can be properly represented by statistical likelihood functions. To do so would be to greatly overestimate the information content in a set of calibration data and increase the possibility of both Type I and Type II errors. Some principles for identifying periods of disinformative data prior to evaluation of a model structure of interest are discussed. An example demonstrates the effect on the estimated parameter values of a hydrological model. © Author(s) 2011.
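One way to screen for disinformative periods before model evaluation is sketched below. The specific criterion (event runoff coefficients outside a physically plausible range) is a heuristic in the spirit of the paper; the thresholds, event definition, and toy data are assumptions, not the authors' values.

```python
# Sketch: flag rainfall-runoff events whose water balance is physically impossible.
import numpy as np

def event_runoff_coefficients(rain, flow, events):
    """events: list of (start, end) index pairs delimiting rainfall-runoff events."""
    return np.array([flow[s:e].sum() / max(rain[s:e].sum(), 1e-9) for s, e in events])

def flag_disinformative(coeffs, lo=0.0, hi=1.0):
    """Mark events whose runoff coefficient violates mass balance (outside [lo, hi])."""
    return (coeffs < lo) | (coeffs > hi)

# Example: the third event yields more runoff than rainfall and is flagged.
rain = np.array([10.0, 0.0, 12.0, 1.0, 5.0, 0.0])
flow = np.array([3.0, 1.0, 4.0, 0.5, 9.0, 2.0])
events = [(0, 2), (2, 4), (4, 6)]
print(flag_disinformative(event_runoff_coefficients(rain, flow, events)))
```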


Smith L.A.,Center for the Analysis of Time Series | Cuellar M.C.,Center for the Analysis of Time Series | Du H.,Center for the Analysis of Time Series | Judd K.,Center for the Analysis of Time Series
Physics Letters, Section A: General, Atomic and Solid State Physics | Year: 2010

Parameter estimation in nonlinear models is a common task, and one for which there is no general solution at present. In the case of linear models, the distribution of forecast errors provides a reliable guide to parameter estimation, but in nonlinear models the facts that predictability may vary with location in state space, and that the distribution of forecast errors is expected not to be Normal, mean that parameter estimation based on least squares methods will result in systematic errors. A new approach to parameter estimation is presented which focuses on the geometry of trajectories of the model rather than the distribution of distances between model forecast and the observation at a given lead time. Specifically, we test a number of candidate trajectories to determine the duration for which they can shadow the observations, rather than evaluating a forecast error statistic at any specific lead time(s). This yields insights into both the parameters of the dynamical model and those of the observational noise model. The advances reported here are made possible by extracting more information from the dynamical equations, and thus improving the balance between information gleaned from the structural form of the equations and that from the observations. The technique is illustrated for both flows and maps, applied in 2-, 3-, and 8-dimensional dynamical systems, and shown to be effective in a case of incomplete observation where some components of the state are not observed at all. While the demonstration of effectiveness is strong, there remain fundamental challenges in the problem of estimating model parameters when the system that generated the observations is not a member of the model class. Parameter estimation appears ill defined in this case. © 2010 Elsevier B.V. All rights reserved.
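A simplified sketch of the idea: rank candidate parameter values by how long model trajectories can stay close to noisy observations, rather than by a least-squares forecast error. The logistic map, noise level, and the crude "iterate from each observation" shadowing proxy below are illustrative assumptions; the paper constructs genuine shadowing trajectories rather than this shortcut.

```python
# Sketch: score candidate parameters by mean shadowing length against noisy observations.
import numpy as np

rng = np.random.default_rng(2)
logistic = lambda x, a: a * x * (1.0 - x)

# Noisy observations from an assumed "true" system (a = 3.9, for illustration only).
a_true, noise, n = 3.9, 0.01, 400
x = np.empty(n); x[0] = 0.4
for t in range(1, n):
    x[t] = logistic(x[t - 1], a_true)
obs = x + rng.normal(0.0, noise, n)

def mean_shadow_length(a, tol=3 * noise, max_len=50):
    """Average number of steps a trajectory started at each observation stays
    within `tol` of the subsequent observations under parameter `a`."""
    lengths = []
    for t0 in range(n - max_len):
        y, k = obs[t0], 0
        while k < max_len and abs(y - obs[t0 + k]) < tol:
            y = logistic(y, a)
            k += 1
        lengths.append(k)
    return np.mean(lengths)

candidates = np.linspace(3.7, 4.0, 61)
scores = [mean_shadow_length(a) for a in candidates]
print("best candidate:", candidates[int(np.argmax(scores))])
```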


Siegert S.,Max Planck Institute for the Physics of Complex Systems | Brocker J.,Max Planck Institute for the Physics of Complex Systems | Brocker J.,Center for the Analysis of Time Series | Kantz H.,Max Planck Institute for the Physics of Complex Systems
Quarterly Journal of the Royal Meteorological Society | Year: 2011

An ensemble forecast is a collection of runs of a numerical dynamical model, initialized with perturbed initial conditions. In modern weather prediction for example, ensembles are used to retrieve probabilistic information about future weather conditions. In this contribution, we are concerned with ensemble forecasts of a scalar quantity (say, the temperature at a specific location). We consider the event that the verification is smaller than the smallest, or larger than the largest ensemble member. We call these events outliers. If a K-member ensemble accurately reflected the variability of the verification, outliers should occur with a base rate of 2/(K + 1). In operational forecast ensembles though, this frequency is often found to be higher. We study the predictability of outliers and find that, exploiting information available from the ensemble, forecast probabilities for outlier events can be calculated which are more skilful than the unconditional base rate. We prove this analytically for statistically consistent forecast ensembles. Further, the analytical results are compared to the predictability of outliers in an operational forecast ensemble by means of model output statistics. We find the analytical and empirical results to agree both qualitatively and quantitatively. © 2011 Royal Meteorological Society.
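The base rate quoted above can be checked directly: for a statistically consistent K-member ensemble, where members and verification are drawn from the same distribution, the verification falls outside the ensemble range with probability 2/(K + 1). The distribution, ensemble size, and number of cases in this sketch are illustrative assumptions.

```python
# Sketch: empirical check of the 2/(K + 1) outlier base rate for a consistent ensemble.
import numpy as np

rng = np.random.default_rng(3)
K, n_cases = 9, 200_000

ensemble = rng.normal(size=(n_cases, K))
verification = rng.normal(size=n_cases)

outlier = (verification < ensemble.min(axis=1)) | (verification > ensemble.max(axis=1))
print("empirical outlier rate:", outlier.mean())
print("theoretical base rate:", 2 / (K + 1))   # 0.2 for K = 9
```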


Imbers J.,University of Oxford | Lopez A.,Center for the Analysis of Time Series | Huntingford C.,UK Center for Ecology and Hydrology | Allen M.,University of Oxford
Journal of Climate | Year: 2014

The Intergovernmental Panel on Climate Change's (IPCC) "very likely" statement that anthropogenic emissions are affecting climate is based on a statistical detection and attribution methodology that strongly depends on the characterization of internal climate variability. In this paper, the authors test the robustness of this statement in the case of global mean surface air temperature, under different representations of such variability. The contributions of the different natural and anthropogenic forcings to the global mean surface air temperature response are computed using a box diffusion model. Representations of internal climate variability are explored using simple stochastic models that nevertheless span a representative range of plausible temporal autocorrelation structures, including the short-memory first-order autoregressive [AR(1)] process and the long-memory fractionally differencing process. The authors find that, independently of the representation chosen, the greenhouse gas signal remains statistically significant under the detection model employed in this paper. The results support the robustness of the IPCC detection and attribution statement for global mean temperature change under different characterizations of internal variability, but they also suggest that a wider variety of robustness tests, other than simple comparisons of residual variance, should be performed when dealing with other climate variables and/or different spatial scales. © 2014 American Meteorological Society.
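The "box diffusion model" used to compute the forced responses is approximated below by a two-box (mixed layer plus deep ocean) energy-balance sketch; this is a hedged stand-in, not the paper's model, and all parameter values and the idealized forcing are assumptions for illustration only.

```python
# Sketch: forced global-mean temperature response from a simple two-box energy-balance model.
import numpy as np

years = np.arange(1850, 2021)
forcing = 3.7 * np.log2(np.linspace(1.0, 1.5, years.size))  # stand-in GHG forcing (W m^-2)

lam, gamma = 1.3, 0.7        # feedback and ocean heat uptake coefficients (W m^-2 K^-1)
C_u, C_d = 8.0, 100.0        # heat capacities (W yr m^-2 K^-1)

T_u = np.zeros(years.size)   # mixed-layer (surface) temperature anomaly
T_d = np.zeros(years.size)   # deep-ocean temperature anomaly
for t in range(1, years.size):
    dT_u = (forcing[t] - lam * T_u[t-1] - gamma * (T_u[t-1] - T_d[t-1])) / C_u
    dT_d = gamma * (T_u[t-1] - T_d[t-1]) / C_d
    T_u[t] = T_u[t-1] + dT_u    # 1-year Euler step
    T_d[t] = T_d[t-1] + dT_d

print("surface response in 2020: %.2f K" % T_u[-1])
```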


Du H.,Center for the Analysis of Time Series | Smith L.A.,Center for the Analysis of Time Series | Smith L.A.,Pembroke College
Physical Review E - Statistical, Nonlinear, and Soft Matter Physics | Year: 2012

Dynamical modeling lies at the heart of our understanding of physical systems. Its role in science is deeper than mere operational forecasting, in that it allows us to evaluate the adequacy of the mathematical structure of our models. Despite the importance of model parameters, there is no general method of parameter estimation outside linear systems. A relatively simple method of parameter estimation for nonlinear systems is introduced, based on variations in the accuracy of probability forecasts. It is illustrated on the logistic map, the Henon map, and the 12-dimensional Lorenz96 flow, and its ability to outperform linear least squares in these systems is explored at various noise levels and sampling rates. As expected, it is more effective when the forecast error distributions are non-Gaussian. The method selects parameter values by minimizing a proper, local skill score for continuous probability forecasts as a function of the parameter values. This approach is easier to implement in practice than alternative nonlinear methods based on the geometry of attractors or the ability of the model to shadow the observations. Direct measures of inadequacy in the model, the "implied ignorance," and the information deficit are introduced. © 2012 American Physical Society.
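A simplified sketch of the approach on the logistic map: choose the parameter value that minimizes a proper local score (here, the ignorance score) of probability forecasts formed from a small ensemble. The ensemble size, kernel width, noise level, and "true" parameter are illustrative assumptions, not the values used in the paper.

```python
# Sketch: parameter estimation by minimizing the mean ignorance of probability forecasts.
import numpy as np

rng = np.random.default_rng(5)
logistic = lambda x, a: a * x * (1.0 - x)

a_true, noise, n = 3.9, 0.01, 300
x = np.empty(n); x[0] = 0.35
for t in range(1, n):
    x[t] = logistic(x[t - 1], a_true)
obs = x + rng.normal(0.0, noise, n)

def mean_ignorance(a, m=64, kernel_sd=0.02):
    """Average -log2 forecast density at the next observation, where the forecast
    is an m-member ensemble of noise-perturbed states propagated one step."""
    ign = []
    for t in range(n - 1):
        ens = logistic(obs[t] + rng.normal(0.0, noise, m), a)
        dens = np.mean(np.exp(-0.5 * ((obs[t + 1] - ens) / kernel_sd) ** 2))
        dens /= kernel_sd * np.sqrt(2 * np.pi)
        ign.append(-np.log2(max(dens, 1e-12)))
    return np.mean(ign)

candidates = np.linspace(3.7, 4.0, 31)
scores = [mean_ignorance(a) for a in candidates]
print("selected parameter:", candidates[int(np.argmin(scores))])
```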


Smith L.A.,Center for the Analysis of Time Series | Smith L.A.,Center for Climate Change Economics and Policy | Smith L.A.,Pembroke College | Stern N.,Center for Climate Change Economics and Policy
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2011

Policy-making is usually about risk management. Thus, the handling of uncertainty in science is central to its support of sound policy-making. There is value in scientists engaging in a deep conversation with policy-makers and others, not merely 'delivering' results or analyses and then playing no further role. Communicating the policy relevance of different varieties of uncertainty, including imprecision, ambiguity, intractability and indeterminism, is an important part of this conversation. Uncertainty is handled better when scientists engage with policy-makers. Climate policy aims both to alter future risks (particularly via mitigation) and to take account of and respond to relevant remaining risks (via adaptation) in the complex causal chain that begins and ends with individuals. Policy-making profits from learning how to shift the distribution of risks towards less dangerous impacts, even if the probability of events remains uncertain. Immediate value lies not only in communicating how risks may change with time and how those risks may be changed by action, but also in projecting how our understanding of those risks may improve with time (via science) and how our ability to influence them may advance (via technology and policy design). Guidance on the most urgent places to gather information and realistic estimates of when to expect more informative answers is of immediate value, as are plausible estimates of the risk of delaying action. Risk assessment requires grappling with probability and ambiguity (uncertainty in the Knightian sense) and assessing the ethical, logical, philosophical and economic underpinnings of whether a target of '50 per cent chance of remaining under +2°C' is either 'right' or 'safe'. How do we better stimulate advances in the difficult analytical and philosophical questions, while maintaining foundational scientific work that advances our understanding of the phenomena and provides immediate help with decisions that must be made now? This journal is © 2011 The Royal Society.


Beven K.,Lancaster University | Beven K.,Uppsala University | Beven K.,Center for the Analysis of Time Series | Beven K.,Ecole Polytechnique Federale de Lausanne
Comptes Rendus - Geoscience | Year: 2012

The environmental modeller faces a dilemma. Science often demands that more and more process representations are incorporated into models (particularly to avoid the possibility of errors from missing processes when predicting future response). Testing the causal representations in environmental models (as multiple working hypotheses about the functioning of environmental systems) then depends on specifying boundary conditions and model parameters adequately. This will always be difficult in applications to a real system because of the heterogeneities, non-stationarities, complexities and epistemic uncertainties inherent in environmental prediction. Thus, it can be difficult to define the information content of a data set used in model evaluation and any consequent measures of belief or verisimilitude. A limit of acceptability approach to model evaluation is suggested as a way of testing models, implying that thought is required to define critical experiments that will allow models as hypotheses to be adequately differentiated. © 2012.
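A minimal sketch of the limits-of-acceptability evaluation mentioned above: a model realization is retained as a behavioural hypothesis only if its simulated values fall inside observation-derived acceptability bounds at (nearly) every evaluation time. The bounds, tolerance on violations, and toy data are assumptions for illustration.

```python
# Sketch: accept or reject a model run against observation-derived acceptability limits.
import numpy as np

def within_limits(simulated, lower, upper, max_violation_fraction=0.0):
    """True if the simulation violates the acceptability limits at no more than
    the allowed fraction of time steps."""
    violations = (simulated < lower) | (simulated > upper)
    return violations.mean() <= max_violation_fraction

# Example: assumed observation uncertainty defines the limits around each observed value.
obs = np.array([1.0, 1.4, 2.2, 1.8, 1.1])
err = 0.3 * obs
sim_a = np.array([1.1, 1.3, 2.0, 1.9, 1.2])
sim_b = np.array([0.2, 1.3, 3.5, 1.9, 1.2])
print(within_limits(sim_a, obs - err, obs + err))   # retained
print(within_limits(sim_b, obs - err, obs + err))   # rejected as a hypothesis
```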


Lopez A.,CCCEP | Lopez A.,Center for the Analysis of Time Series | Suckling E.B.,Center for the Analysis of Time Series | Smith L.A.,Center for the Analysis of Time Series
Climatic Change | Year: 2013

Pattern scaling offers the promise of exploring spatial details of the climate system response to anthropogenic climate forcings without their full simulation by state-of-the-art Global Climate Models. The circumstances in which pattern scaling methods are capable of delivering on this promise are explored by quantifying its performance in an idealized setting. Given a large ensemble that is assumed to sample the full range of variability and provide quantitative decision-relevant information, the soundness of applying the pattern scaling methodology to generate decision-relevant climate scenarios is explored. Pattern scaling is not expected to reproduce its target exactly, of course, and its generic limitations have been well documented since it was first proposed. In this work, using as a particular example the quantification of the risk of heat waves in Southern Europe, it is shown that the magnitude of the error in the pattern scaled estimates can be significant enough to disqualify the use of this approach in quantitative decision support. This suggests that future application of pattern scaling in climate science should provide decision makers with not just a restatement of the assumptions made, but also evidence that the methodology is adequate for purpose in practice for the case under consideration. © 2013 Springer Science+Business Media Dordrecht.
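The pattern scaling approximation examined in the paper can be written as DeltaT_local(x, t) ~= p(x) * DeltaT_global(t): a fixed spatial pattern multiplied by the global-mean temperature change. The toy pattern, global trajectory, and grid in this sketch are illustrative assumptions.

```python
# Sketch: pattern-scaled regional projections from a global-mean warming trajectory.
import numpy as np

# Fixed spatial pattern p(x): local warming per degree of global-mean warming,
# e.g. amplified over land and high latitudes (values assumed for illustration).
pattern = np.array([0.8, 1.0, 1.3, 1.7, 2.1])        # K per K of global mean

# Global-mean warming trajectory from a cheap source (e.g. an emulator), in K.
delta_T_global = np.linspace(0.0, 3.0, 31)

# Pattern-scaled local projections: shape (time, grid point).
delta_T_local = np.outer(delta_T_global, pattern)
print(delta_T_local[-1])   # scaled regional changes at the end of the period
```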
