Bocquet M.,University Paris Est Creteil | Bocquet M.,French Institute for Research in Computer Science and Automation | Sakov P.,Nansen Environment and Remote Sensing Center
Nonlinear Processes in Geophysics | Year: 2012

The finite-size ensemble Kalman filter (EnKF-N) is an ensemble Kalman filter (EnKF) which, under perfect-model conditions, does not require inflation because it partially accounts for the ensemble sampling errors. For the Lorenz '63 and '95 toy models, it has so far been shown to perform as well as or better than the EnKF with an optimally tuned inflation. The iterative ensemble Kalman filter (IEnKF) is an EnKF which was shown to perform much better than the EnKF in strongly nonlinear conditions, such as with the Lorenz '63 and '95 models, at the cost of iteratively updating the trajectories of the ensemble members. This article aims at further exploring the two filters and at combining both into an EnKF that does not require inflation under perfect-model conditions and that is as efficient as the IEnKF in very nonlinear conditions. The EnKF-N is first introduced and a new implementation is developed that decomposes it into a cheap two-step algorithm amounting to the computation of an optimal inflation factor. This offers a justification for the use of the inflation technique in the traditional EnKF and explains why it can often be efficient. Secondly, the IEnKF is introduced following a new implementation based on the Levenberg-Marquardt optimisation algorithm. The two approaches are then combined to obtain the finite-size iterative ensemble Kalman filter (IEnKF-N). Several numerical experiments are performed with IEnKF-N on the Lorenz '95 model. These experiments demonstrate its numerical efficiency as well as a performance that offers, at least, the best of both filters. We have also selected a demanding case based on the Lorenz '63 model that points to ways to improve the finite-size ensemble Kalman filters. Ultimately, IEnKF-N could be seen as the first brick of an efficient ensemble Kalman smoother for strongly nonlinear systems. © 2012 Author(s). CC Attribution 3.0 License.
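
The two-step reformulation described above amounts to computing an inflation factor from the ensemble itself before a standard analysis step. As an illustration only, the following minimal Python sketch shows a generic stochastic EnKF analysis step with perturbed observations in which such a multiplicative inflation factor enters; it is not the authors' EnKF-N or IEnKF code, and all names and the perturbed-observation variant are assumptions.

import numpy as np

def enkf_analysis(E, y, H, R, inflation=1.0, rng=np.random.default_rng(0)):
    # E: n x m ensemble of forecast states; y: p-vector of observations;
    # H: p x n observation operator; R: p x p observation error covariance.
    n, m = E.shape
    x_mean = E.mean(axis=1, keepdims=True)
    A = inflation * (E - x_mean)                 # inflated forecast perturbations
    E = x_mean + A
    HE = H @ E
    HA = HE - HE.mean(axis=1, keepdims=True)
    P_yy = HA @ HA.T / (m - 1) + R               # innovation covariance
    P_xy = A @ HA.T / (m - 1)                    # state-observation cross covariance
    K = P_xy @ np.linalg.inv(P_yy)               # Kalman gain
    D = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T  # perturbed obs
    return E + K @ (D - HE)                      # analysis ensemble

In a hand-tuned EnKF the scalar inflation is fixed by trial and error; the point of EnKF-N, as summarised in the abstract, is that an equivalent factor can be derived from the ensemble statistics themselves.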


Bennetts L.G.,University of Adelaide | Williams T.D.,Nansen Environment and Remote Sensing Center
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences | Year: 2015

An experimental validation of theoretical models of regular-water-wave transmission through arrays of floating discs is presented. The experiments were conducted in a wave basin. The models are based on combined potential-flow and thin-plate theories and the assumption of linear motions. A low-concentration array, in which discs are separated by approximately a disc diameter in equilibrium, and a high-concentration array, in which adjacent discs are almost touching in equilibrium, were used for the experiments. The proportion of incident-wave energy transmitted by the discs is presented as a function of wave period and for different wave amplitudes. Results indicate that the models predict wave-energy transmission accurately for small-amplitude waves and low-concentration arrays. Discrepancies for large-amplitude waves and high-concentration arrays are attributed to wave overwash of the discs and collisions between discs. Validation of model predictions of a solitary disc's rigid-body motions is also presented. © The Author(s) Published by the Royal Society. All rights reserved.
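
The central measured quantity above is the proportion of incident-wave energy transmitted by the disc arrays. As a minimal illustration only (not code from the paper), under linear wave theory the energy carried by a wave scales with the square of its amplitude, so a transmission coefficient can be estimated from gauge amplitudes as sketched below; the function name and example values are hypothetical.

import numpy as np

def transmitted_energy_fraction(incident_amplitude, transmitted_amplitude):
    # For linear waves, energy is proportional to amplitude squared, so the
    # transmitted energy fraction is the squared ratio of the two amplitudes.
    return (np.asarray(transmitted_amplitude) / np.asarray(incident_amplitude)) ** 2

# Example: a transmitted amplitude of 70% of the incident amplitude corresponds to
# roughly half of the incident wave energy being transmitted.
print(transmitted_energy_fraction(0.020, 0.014))  # ~0.49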


Srinivasan A.,University of Miami | Chassignet E.P.,Florida State University | Bertino L.,Nansen Environment and Remote Sensing Center | Brankart J.M.,Laboratoire des Ecoulements Geophysiques et Industriels | And 8 more authors.
Ocean Modelling | Year: 2011

We assess and compare four sequential data assimilation methods developed for HYCOM in an identical twin experiment framework. The methods considered are Multivariate Optimal Interpolation (MVOI), Ensemble Optimal Interpolation (EnOI), the fixed-basis version of the Singular Evolutive Extended Kalman Filter (SEEK) and the Ensemble Reduced Order Information Filter (EnROIF). All methods can be classified as statistical interpolation but differ mainly in how the forecast error covariances are modeled. Surface elevation and temperature data sampled from a 1/12° Gulf of Mexico HYCOM simulation designated as the truth are assimilated into an identical model starting from an erroneous initial state, and convergence of the assimilative runs towards the truth is tracked. Sensitivity experiments are first performed to evaluate the impact of practical implementation choices such as the state vector structure, initialization procedures, correlation scales, covariance rank and details of handling multivariate datasets, and to identify an effective configuration for each assimilation method. The performance of the methods is then compared by examining the relative convergence of the assimilative runs towards the truth. All four methods show good skill and are able to enhance consistency between the assimilative and truth runs in both observed and unobserved model variables. Prediction errors in observed variables are typically smaller than the errors specified for the observations, and the differences between the assimilated products are small compared to the observation errors. For unobserved variables, RMS errors are reduced by 50% relative to a non-assimilative run and differ between schemes on average by about 5%. Dynamical consistency among the updated state-space variables in the data assimilation algorithm, and observations that adequately sample the significant dynamical features, are the two crucial components for reliable predictions. The experiments presented here suggest that practical implementation details can have at least as much impact on the accuracy of the assimilated product as the choice of assimilation technique itself. We also present a discussion of the numerical implementation and the computational requirements for the use of these methods in large-scale applications. © 2011 Elsevier Ltd.
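
The comparison above rests on tracking the convergence of each assimilative run towards the truth run of the twin experiment. A minimal sketch of such a diagnostic (an assumed RMS-error metric and relative error reduction, not the study's actual code) could be written as follows.

import numpy as np

def rms_error(run, truth):
    # run, truth: arrays of shape (time, state); returns the RMS error at each time.
    return np.sqrt(np.mean((np.asarray(run) - np.asarray(truth)) ** 2, axis=1))

def error_reduction(assim_run, free_run, truth):
    # Fractional reduction in time-mean RMS error of the assimilative run relative
    # to the non-assimilative (free) run; 0.5 corresponds to the ~50% quoted above.
    return 1.0 - rms_error(assim_run, truth).mean() / rms_error(free_run, truth).mean()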


Craigmile P.F.,Ohio State University | Craigmile P.F.,University of Glasgow | Guttorp P.,University of Washington | Guttorp P.,Norwegian Computing Center | And 7 more authors.
Journal of Geophysical Research: Atmospheres | Year: 2014

A recent observation in NOAA's National Climatic Data Center's monthly assessment of the state of the climate was that contiguous U.S. average monthly temperatures were in the top third of monthly ranked historical temperatures for 13 straight months, from June 2011 to June 2012. The chance of such a streak occurring randomly was quoted as (1/3)^13, or about one in 1.6 million. The streak continued for three more months before the October 2012 value dropped below the upper tercile. The climate system displays a degree of persistence that increases this probability relative to the assumption of independence. This paper puts forth different statistical techniques that more accurately quantify the probability of this and other such streaks. We consider how much more likely streaks are when an underlying warming trend is accounted for in the record, the chance of streaks occurring anywhere in the record, and the distribution of the record's longest streak. Key points: estimating the probability of upper-tercile events in the U.S. temperature record; the probability of the event depends on the statistical model fitted; we consider statistical models with trend, seasonality, and serial dependence. ©2014. American Geophysical Union. All Rights Reserved.
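
The argument above is that serial dependence in the climate system makes a 13-month upper-tercile streak considerably more likely than the quoted independence value of (1/3)^13. The following minimal Monte Carlo sketch contrasts the two calculations; it is illustrative only, and the AR(1) model and the lag-1 coefficient phi = 0.6 are assumptions, not the paper's fitted models.

import numpy as np

print((1.0 / 3.0) ** 13)                         # independence value, about 1 in 1.6 million

def streak_probability(phi=0.6, streak=13, n_sims=200_000, seed=1):
    # Probability that `streak` consecutive monthly anomalies all fall in the upper
    # tercile, for a stationary AR(1) process with lag-1 correlation phi.
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(1.0 - phi ** 2)              # innovation sd for unit marginal variance
    x = np.zeros((n_sims, streak))
    x[:, 0] = rng.standard_normal(n_sims)        # start from the stationary distribution
    for t in range(1, streak):
        x[:, t] = phi * x[:, t - 1] + sigma * rng.standard_normal(n_sims)
    threshold = np.quantile(x, 2.0 / 3.0)        # empirical upper-tercile threshold
    return np.mean(np.all(x > threshold, axis=1))

print(streak_probability())                      # orders of magnitude larger than the independence value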


Williams T.D.,Nansen Environment and Remote Sensing Center | Bennetts L.G.,University of Adelaide | Squire V.A.,University of Otago | Dumont D.,Institute des Sciences de la Mer de Rimouski
Society of Petroleum Engineers - Arctic Technology Conference 2012 | Year: 2012

We present numerical results arising from a parameterization of wave-ice interactions in a two-dimensional ice-ocean model of the Fram Strait (HYCOM: HYbrid Coordinate Ocean Model). The model takes wave predictions/hindcasts from the WAM wave model; these waves are advected into the ice, breaking it as they go, and are in turn attenuated by the ice using the model of Bennetts and Squire (2012). We use a truncated power law for the floe size distribution, following the observations of Toyota et al. (2011). The maximum floe size is determined by the dominant wavelength in the ice field, and this maximum increases with distance from the ice edge because shorter waves are attenuated more strongly than long ones. At some distance from the ice edge, breaking is no longer able to occur, and this marks the end of the marginal ice zone (MIZ). Consequently, we now have a model that predicts the expected floe size and wave intensity at any point in the ice, something that current wave models are unable to do and which is a notable weakness. Since a combination of large waves and ice can be extremely hazardous, the model, once fully operational, will serve as a forecasting tool for Arctic operators who need to know both wave and ice conditions in ice-infested areas. Copyright 2012, Offshore Technology Conference. Source
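
The qualitative behaviour described above can be sketched in a few lines: wave amplitude decays with distance into the ice, floes are broken wherever the waves remain large enough, and the marginal ice zone ends where breaking can no longer occur. The following is an illustrative parameterization only, not the paper's implementation; the exponential attenuation form, the coefficient alpha, the breaking threshold a_break and the grid are all assumptions.

import numpy as np

def miz_extent(a0=2.0, alpha=5e-5, a_break=0.3, dx=1000.0, n_cells=400):
    # a0: incident wave amplitude (m); alpha: attenuation rate per metre of ice cover;
    # a_break: amplitude (m) above which floes are assumed to break; dx: cell size (m).
    x = dx * np.arange(n_cells)
    amplitude = a0 * np.exp(-alpha * x)      # attenuated wave amplitude inside the ice
    broken = amplitude > a_break             # cells where waves still break the floes
    miz_width = x[broken][-1] + dx if broken.any() else 0.0
    return miz_width, broken

width, mask = miz_extent()
print(f"MIZ width: {width / 1000:.0f} km")   # ~38 km for these assumed parameters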
