Versailles, France


Guilyardi E., University of Reading | Lawrence B., University of Reading | Callaghan S., Rutherford Appleton Laboratory | Deluca C., National Oceanic and Atmospheric Administration | and 5 more authors.
Bulletin of the American Meteorological Society | Year: 2013

The fifth phase of the Coupled Model Intercomparison Project (CMIP5), which seeks to collect better documentation of climate models and their simulations, is reviewed. CMIP5, organized by the Working Group on Coupled Modelling (WGCM) on behalf of the WMO's World Climate Research Programme (WCRP), will generate more than a million individual datasets and several petabytes of data. In the early planning stages of CMIP5, the climate modelling community committed to collecting a comprehensive and standardized set of metadata for the climate model simulations. The various types of metadata of interest were organized into a new conceptual model, the Common Information Model (CIM). This conceptual model was applied to the specific needs of CMIP5, and a metadata entry tool was developed to collect the information. Data portals can harvest the information contained in the resulting machine-readable files and render it in a form more usable to humans.

News Article | April 6, 2016

We systematically searched the scientific literature for proxy records of hydroclimate and temperature. We retrieved all records fulfilling the following criteria. (1) Temporal coverage: all proxy records must cover at least the period 1000–1899. (2) Temporal resolution: all proxies must have, on average, at least two data points per century between 1000 and 1899, and no more than two centuries over the 1000–1899 period may have fewer. In the data subset containing only hydroclimate records resolved decadally or better, we required that all proxies have, on average, at least one data point per decade between 1000 and 1899. (3) Publication requirements: we used only proxy records that have been published in the scientific literature. (4) Climate signal: the proxies must, according to their published description, contain a hydrological signal described as precipitation, drought, moisture balance, stream flow, lake-level change, bog surface wetness, flooding or similar hydrological information. (5) Geographical requirements: all the proxies must be located in, or nearly in, the Northern Hemisphere. Proxies just south of the Equator (by a few degrees) may be included if they contribute to gridded values north of the Equator. The majority of records were obtained from public repositories; those records not in the public domain were acquired either by direct request from their authors or digitized from figures in the original articles (Supplementary Tables 1 and 2). The proxy data were divided into seven categories: (1) ice cores, (2) peat cores, (3) tree rings, (4) speleothems, (5) documentary, (6) lake sediments, and (7) marine sediments. In cases where several versions of a proxy record from the same site exist, preference was given to the latest published version.
The collected proxy records were divided into three simplified categories of seasonal response: annual, winter and summer, with spring and early autumn considered part of the summer season. When no clear information about a proxy's seasonality could be found, either in the original article or in subsequent publications, we assumed the seasonality to be annual. We only included proxy records with age models constrained by at least one dating point after 1900 (D1), another between 1000 and 1900 (D2) and one dating point before 1000 (D3). The number of years between D1 and D2, and between D2 and D3, should be less than 1,000. In addition, the following dating criteria were applied: (1) the top of the core exists (that is, the year of collection is known) or, if it does not, there must be a dating point (for example, 137Cs, 210Pb) to verify the age (A1 is the top age); (2) there should be at least one dating point (14C, U/Th, tephra, 210Pb) between 800 and 1999 (A2); (3) there should be at least one dating point between 800 and 1 (A3); (4) the spacing between these three dating points should be less than 800 years (A1 − A2 < 800 years, A2 − A3 < 800 years); (5) A1, A2 and A3 cannot be bulk datings; and (6) the dating errors for A1, A2 and A3 must be <200 years (2σ). For the decadal- or better-resolved subset, each record must have at least one dating point per full century. Before computing centennial anomalies, all proxy records with irregularly spaced time steps were converted into time series with annually spaced time steps using simple linear interpolation, then smoothed by a cubic smoothing spline having a 50% frequency response at 100 years. The smoothed proxy series were transformed to standard normal deviates with respect to the 1000–1899 baseline period. From the resulting centennial anomalies, every 25th value was extracted from each century with at least 85 (interpolated) values. From those proxy records spanning the complete 800–1999 period, we obtain 45 centennial anomalies.
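The interpolation, smoothing, standardization and subsampling steps above can be sketched in Python. This is a minimal illustration, not the authors' released code: the Gaussian low-pass filter and its width are stand-ins for the paper's cubic smoothing spline (50% frequency response at 100 years), and all names are our own.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.ndimage import gaussian_filter1d

def centennial_anomalies(years, values, base=(1000, 1899), step=25):
    """Sketch of the proxy pre-processing: linear interpolation to
    annual steps, low-pass smoothing (Gaussian filter as a stand-in
    for the cubic smoothing spline; sigma is an assumption),
    standardization against the 1000-1899 baseline, then extraction
    of every 25th value."""
    years = np.asarray(years, float)
    t = np.arange(int(years.min()), int(years.max()) + 1)  # annual time axis
    annual = interp1d(years, values)(t)                    # linear interpolation
    smooth = gaussian_filter1d(annual, sigma=30.0)         # stand-in low-pass filter
    in_base = (t >= base[0]) & (t <= base[1])              # baseline period mask
    z = (smooth - smooth[in_base].mean()) / smooth[in_base].std()
    return t[::step], z[::step]                            # subsampled anomalies
```

A record would then contribute one standardized anomaly every 25 years, provided each century has at least 85 interpolated values.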
Of these 45 centennial anomalies, the 12 representing the hundred-year periods 800–899, 900–999, 1000–1099, 1100–1199, …, 1900–1999 are those presented in the majority of figures throughout this Letter (Extended Data Fig. 8b). However, all 45 centennial anomalies are used in the correlation experiments as well as in Fig. 2b and c. For the subset including only hydroclimate records resolved decadally or better, we obtained decadal anomalies using the same method as described above, but instead using a spline with a 50% frequency response at 10 years. Because individual proxy records can contain noise, a more robust signal is presumably obtained by averaging multiple proxies from within a geographic area where hydroclimate can be assumed to be coherent. Unfortunately, the size of such geographic areas on centennial timescales is not well constrained, owing to the limited length and distribution of instrumental observations. This deficiency required the development of a measure of the correlation decay length (CDL)31 of hydroclimate32, 33 on centennial timescales. Encouraged by the agreement found between instrumental precipitation data on annual and decadal timescales34 and simulated values from the ECHO-G model35 on the same timescales, we used the simulated centennial values from the ECHO-G model to calculate a conservative estimate of the spatial autocorrelation function for hydroclimate on centennial timescales. In this calculation, t is the time index; i and j are the longitude index and the latitude index of the grid, respectively; f(t, i, j) are the deviations from the local long-term mean at time t and at grid cell (i, j), divided by the local temporal standard deviation (that is, standardized to one standard deviation); and d is the spatial separation between grid cells (i, j) and (k, l).
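In symbols, a plausible form of the spatial autocorrelation function described above is the following; this is a reconstruction from the verbal description, not the paper's printed equation, and the symbols n_{ij}(L) (the number of grid-cell pairs at separation L) and N_j (the number of grid cells on latitude circle j) are our own:

```latex
c(L, j) \;=\; \frac{1}{N_j}\sum_{i}\,
  \frac{1}{n_{ij}(L)}\sum_{t}
  \sum_{\substack{(k,l):\; L-\Delta L \,\le\, d\bigl((i,j),(k,l)\bigr) \,\le\, L+\Delta L}}
  f(t, i, j)\, f(t, k, l),
\qquad \Delta L = 100\ \mathrm{km}
```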
The sum of the product f(t, i, j)f(t, k, l) over time and space is restricted to the grid cells (k, l) that are separated from the grid cell (i, j) by a distance between L − ΔL and L + ΔL, where ΔL is 100 km. This sum is divided by the number of occurrences of grid-cell pairs. The outer sum is calculated over all grid cells located on an index circle j, and is then divided by the number of those grid cells. It is thus assumed that the spatial autocorrelation is isotropic and zonally homogeneous, that is, that it depends only on the length L and on the latitude j. This assumption of isotropy and zonal homogeneity is not completely fulfilled for all terrestrial locations. However, the approach yields an approximate and average estimate of the decorrelation length at multi-decadal to centennial timescales, which amounts to a few hundred kilometres (Extended Data Fig. 1a). The precise value of the decorrelation length does not affect the basic features of the reconstruction. A simplified CDL function of hydroclimate variability on centennial timescales was derived from the output of the climate model ECHO-G35 over the period 1000–1990 to calculate the maximum search distances as a function of latitude in this study (Extended Data Fig. 1b). We use this distance function, derived from the estimated centennial hydroclimate CDL, to define the search radius for averaging proxy anomalies (Extended Data Fig. 1a and b). Because the correlation of hydroclimate decays with distance from a search centre, we assign each proxy record, at any given latitude, a weight decreasing from 1 at the centre to e−2 ≈ 0.14 at the search periphery using a Gaussian weight function, in which w is the weight assigned to a proxy value located at distance x from a grid node and R is the search radius defined by equation (1).
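Given the stated endpoints (a weight of 1 at the centre and e^{−2} at x = R), the Gaussian weight function presumably takes the form:

```latex
w(x) \;=\; \exp\!\left(-\,2\,\frac{x^{2}}{R^{2}}\right),
\qquad w(0) = 1, \quad w(R) = e^{-2} \approx 0.14
```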
Every reconstructed grid-cell location is required to have three or more proxy centennial anomalies within the search radius. This requirement effectively reduces the total number of proxies used in the experiment, although some regions remain densely covered (such as North America, China and Europe; Extended Data Fig. 8a). Weighted means are computed for all 45 centennial anomalies; these are the values used in the correlation analyses. All gridding and weighting of proxy (and model) data are performed using the method described above for producing the local weighted proxy averages, with the search-area centres moved to each intersection of a 1° × 1° longitude–latitude grid superimposed on a polar projection of the Northern Hemisphere. Maps of centennial gridded proxy hydroclimate variability are shown in Fig. 2a, and time series of the fraction of land area at or above a given wetness or dryness threshold are shown in Fig. 2b. The weighted and gridded proxy temperature reconstruction shown in Extended Data Fig. 6a and used in Fig. 3 is produced in the same manner as the hydroclimate reconstructions, except for the CDL function used for weighting and gridding. Detailed descriptions of the CDL, the proxy weighting function and the search distance function used for reconstructing Northern Hemisphere centennial temperature variations are given in ref. 15. We have used six coupled atmosphere–ocean general circulation model simulations that are part of the Coupled Model Intercomparison Project Phase 5 (CMIP5)36. The simulations denoted 'past1000' cover the period 850–1850, whereas the 'historical' simulations cover the period 1851–2005. The 'past1000' simulations are sometimes performed using lower-resolution or simplified versions of the more comprehensive models used for the 'historical' simulations.
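The per-node weighting and minimum-coverage rule of the gridding procedure described above can be sketched as follows; this is an illustrative sketch with names of our own, not the archived Matlab/GMT implementation.

```python
import numpy as np

def weighted_grid_value(dists_km, anomalies, radius_km, min_proxies=3):
    """Gaussian-weighted mean of proxy anomalies around one grid node.
    dists_km: distances of the proxies from the node; returns NaN when
    fewer than min_proxies fall inside the search radius, leaving the
    grid cell unreconstructed."""
    d = np.asarray(dists_km, float)
    a = np.asarray(anomalies, float)
    inside = d <= radius_km
    if inside.sum() < min_proxies:
        return float("nan")  # too few proxies within the search radius
    # weight decays from 1 at the centre to e^-2 at the search periphery
    w = np.exp(-2.0 * (d[inside] / radius_km) ** 2)
    return float(np.sum(w * a[inside]) / np.sum(w))
```

Repeating this at every intersection of the 1° × 1° grid yields one gridded field per centennial anomaly.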
We have restricted our analysis to those six simulations—bcc-csm1-1 (ref. 37), CCSM4 (ref. 38), IPSL-CM5A-LR (ref. 39), GISS-E2-R (ref. 40), HadCM3 (ref. 41) and MPI-ESM-P (ref. 42)—conducted with the same model versions for both periods, and we stitch together the 'past1000' and 'historical' simulations that carry the same ensemble run-initialisation-physics (rip) identifier in the CMIP5 repository (Table 1). The simulation conducted with the MIROC-ESM model was not used, owing to a documented, unrealistic, long-term positive trend in global mean annual near-surface temperature over the past millennium43. The models were driven by estimates of external forcing, including orbital variations, total solar irradiance, volcanism, atmospheric trace gases and land-use changes. The CMIP5 consortium recommended similar, but somewhat different, options for these external forcings23, 44, and the modelling groups did not always use the same options for all CMIP5 simulations. From each of the six simulations we extracted the annual mean precipitation and temperature at the model grid-cell location nearest to each of our proxy locations. Once the selection was made, an equal-length extraction was performed to make all comparisons between model and proxy as equitable as possible. The smoothing, weighted averaging and gridding of all model data followed the same protocol as described above for the proxy data. The centennial forcing anomalies in Fig. 3 are produced from time series of solar45, volcanic46 and CO2 forcing47 that are the same as, or similar to, those used in the models. All temporal correlations between two weighted and gridded data sets are calculated with the Pearson product-moment correlation coefficient48. Assessment of the statistical significance of these correlations is complicated by the fact that any two such data sets are both temporally and spatially correlated.
We have taken temporal correlation, that is, autocorrelation, into account by using the method of blocked bootstrap (3,001 repetitions)49 to provide a more realistic estimate of the correlation’s statistical significance. In Extended Data Fig. 4a we show the correlations between the gridded hydroclimate proxy data and corresponding gridded hydroclimate model data and Extended Data Fig. 4b shows the statistical significance of the correlations. In Extended Data Fig. 4c we show the correlations between the gridded hydroclimate proxy data and gridded temperature proxy data and Extended Data Fig. 4d shows the statistical significance of the correlations. We produce histograms of all cross-correlations in this study (Extended Data Fig. 9a–d) and their Fisher z transform50 (Extended Data Fig. 9e–h) to assess the normality of their distributions. A slightly bimodal distribution is seen, for both the original and Fisher z-transform correlations, between the gridded hydroclimate proxy data and gridded temperature proxy data. This is not what we would expect to see if hydrological and temperature anomalies were unrelated: the distribution of the Fisher z transform of the correlations should then be approximately normal and, in particular, unimodal. Instead, the histograms suggest that the distributions of the correlations between proxy hydroclimate and proxy temperature values are a mixture of two unimodal distributions, corresponding to two distinct mechanisms relating hydroclimate and temperature. To assess the robustness of our gridded weighted hydroclimate reconstruction we have produced a number of ‘subset’ reconstructions, using different data screening schemes, as well as comparing our multi-proxy reconstruction to other hydroclimate products. We created a subset of proxies with only those records that have decadal to annual resolution and at least one dating point per century. 
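The blocked-bootstrap significance test described above can be sketched as follows. This is a minimal moving-block illustration under our own assumptions (block length, resampling only one series, and a two-sided null); the paper's exact implementation may differ.

```python
import numpy as np

def block_bootstrap_pvalue(x, y, block=10, reps=3001, seed=0):
    """Moving-block bootstrap test of a Pearson correlation.
    Blocks of y are resampled to break the pairing with x while
    preserving y's autocorrelation, building a null distribution
    of correlations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    r_obs = np.corrcoef(x, y)[0, 1]
    starts = n - block + 1
    null = np.empty(reps)
    for i in range(reps):
        # stitch random blocks of y into a surrogate series of length n
        idx = np.concatenate([np.arange(s, s + block)
                              for s in rng.integers(0, starts, n // block + 1)])[:n]
        null[i] = np.corrcoef(x, y[idx])[0, 1]
    # two-sided p-value: fraction of surrogates at least as extreme
    return r_obs, float(np.mean(np.abs(null) >= abs(r_obs)))
```

With 3,001 repetitions, as in the text, the resulting p-value accounts for serial correlation that an ordinary t-test on r would ignore.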
The gridded reconstruction obtained from this subset has less spatial coverage than the reconstruction from the full data set. The median Pearson product-moment correlation coefficient, at grid-point level, between the two reconstructions (using 45 centennial values, lagged by 25 years) is r = 0.97 (p < 0.01), suggesting that any dating uncertainties in the full data set have a small influence on our overall results (Extended Data Table 1). Furthermore, we correlated the full reconstruction with reconstructions produced by excluding one of the seven proxy types at a time, which notably also involves excluding the numerous but less well age-constrained lake sediment records (Extended Data Table 1). Moreover, we produced similar tests excluding/retaining proxy records reflecting quantitative estimates of precipitation, as well as excluding/retaining proxy records with an annual, summer or winter signal, respectively. Finally, we correlated our reconstruction from the full data set, as well as the reconstruction from the more strictly screened data set resolved decadally or better, against the North American Drought Atlas20 and the Monsoon Asia Drought Atlas21. At grid-point level the median Pearson correlation is r = 0.34 (p < 0.15) and r = 0.50 (p < 0.03), respectively. These moderate correlation values are acceptable considering that our hydroclimate reconstruction metric is not an equivalent measure of the PDSI51, 52 and, furthermore, that the PDSI reconstructions are for the summer season only. The Earth's climate system contains variability on timescales from sub-daily to Milankovitch to deep geological17, 53, 54, 55.
It has long been known that proxy records may record climatic variation occurring on different wavelengths (that is, interannual to decadal to centennial to millennial) with varying fidelity, resulting in disproportionate manifestations of the variance across these different wavelengths56, 57, 58, 59, 60, 61, 62. Such 'spectral biases' are a type of noise that, if not considered or corrected, potentially challenge a precise quantitative assessment of the continuum of climate variability. Using instrumental records to characterize, and subsequently correct, the possible spectral biases in proxy records should yield more accurate reconstructions of climate. However, this endeavour to characterize proxy biases is itself questionable, because the true long-term characteristics of the climate system remain broadly unknown, which is part of the motivation to develop high-quality proxy data and reconstructions. Knowledge of proxy data sets is, however, sufficiently advanced to state that individual proxy types may be characterized by one or more biases. For example, tree-ring data, arguably the most intensively investigated proxy archive for the past millennium in both the temporal and frequency domains, are known to have two potentially opposing biases. On the one hand, owing to a 'biological memory', tree-ring width data especially contain somewhat less high-frequency (interannual) variability than instrumental temperature (or precipitation) data59, 60, 61, 62. On the other hand, the necessary removal of the long-term age/size-related trends in tree-ring records, if performed using certain techniques, results in the so-called 'segment length curse'63, which limits our ability to preserve trends or variability exceeding the mean length of the individual tree-ring sequences. In this case, diminished low-frequency (centennial to millennial) variability would be expected.
There are, however, different techniques that can be applied to overcome or mitigate such spectral biases. The removal of the age-related trends using a single population estimate as in the Regional Curve Standardisation technique has been shown to break the ‘segment length curse’ and to preserve long wavelengths better63, 64, 65. Similarly, the biological memory can be estimated and removed by autoregressive modelling and the resulting time series can be assigned the spectral properties expected from the instrumental data sets56. Analyses of long-term instrumental data sets are limited to the past couple of centuries, which does not, as also mentioned in the main text, allow a full validation of the low-frequency (for example, centennial to millennial timescale) components of proxy data or reconstructions. In most proxy records, just as with the tree-ring data, the low-frequency variability can be affected by nonlinear processes such as the compression or mixing of layers, a time-varying temperature–precipitation relationship, anthropogenic impacts on the local environment and, in the case of ice-core data, ice-flow movements in the ice cap. It should be noted that the interpolation of non-annual records to annual resolution also results in a spectral bias towards lower frequencies66. Given the uncertainties back in time in many individual proxy records, both in the high- and low-frequency signal, a multi-proxy approach, using the average signal from a number of disparate proxy records, is arguably best suited for the study of low-frequency climate variability. In multi-proxy reconstructions, where the aim is to obtain both the high (annual) and low (centennial) frequency signal, any spectral bias in the proxies, influencing the ratio of the high- and low-frequency signal, is a problem of major importance. Our aim, however, is only to obtain the signal on centennial and longer timescales. 
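The autoregressive removal of 'biological memory' mentioned above can be illustrated with a minimal AR(1) prewhitening sketch. The estimator and function name are our own; published implementations typically fit higher-order models and then restore a target spectrum estimated from instrumental data.

```python
import numpy as np

def ar1_prewhiten(series):
    """Estimate the lag-1 persistence of a series and return the
    residual (prewhitened) series, removing first-order 'memory'."""
    x = np.asarray(series, float)
    x = x - x.mean()
    # least-squares AR(1) coefficient from the lag-1 covariance
    phi = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)
    return x[1:] - phi * x[:-1]
```

The residual series has the persistence removed; its lag-1 autocorrelation should be close to zero if an AR(1) model describes the memory well.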
By standardizing centennial mean values, proxy records that have both strong and weak low-frequency signals are transformed to records of just centennial-scale information of the same amplitude. This transformation essentially eliminates the influence of spectral biases by making irrelevant whatever prior ratio of high- and low-frequency information had existed in the proxy before standardization. Even if a proxy does not capture the full amplitude of centennial-scale variability (for example, tree-ring width chronologies built by many short individual series and/or limited replication63) the impact is small because the standardization process gives all proxies the same amplitude of centennial-scale variability. As long as the proxies have some centennial-scale variability (which is the only variability we retain for our primary reconstruction) the proxies are useful to us. Our different sensitivity tests (Extended Data Table 1), removing various proxy types from the complete data set and correlating their resulting reconstruction with the full reconstruction, demonstrate a very limited impact of potential spectral biases from particular proxy types on our full reconstruction. These experiments produce remarkably similar hydroclimate reconstructions, suggesting that the purported negative effect of ‘mixing’ proxies with potential spectral biases is not large (if not altogether mitigated by the standardization). Especially important in this regard are results from those tests in which all the records with less than decadal resolution, with only centennial dating control, and those in which all the tree-ring records and all the lake sediment records are excluded, respectively. Not surprisingly, the reduction in proxy coverage gave rise to some spatially local differences; however, the overall patterns remained the same. 
Most of the changes seen are less likely to be attributable to spectral biases in the proxies than to a reduction in the number of proxies considered, and hence greater noise in the reconstruction. We mitigate the impact of proxy noise in our reconstruction approach by requiring multiple records within the search radius for every grid point reconstructed. All the proxy data and programs used to perform the experiments herein have been made publicly available at the NOAA Paleoclimatology/World Data Center for Paleoclimatology. The Source Data used to plot each figure in the Letter are also stored there. The suggested computing environment and dependencies are: Mac OS X 10.6 or greater, Generic Mapping Tools (GMT) 4.5.6 or later67, Ghostscript 9.10, and Matlab 2007b or later. A detailed description of the software used, with example data and runtime commands, is supplied with the archived data. The climate model data from the simulations of the Coupled Model Intercomparison Project Phase 5 (CMIP5) can be downloaded from any of the nodes of the Earth System Grid Federation (registration is required and a password is provided after registration). The menu on the left side of the page allows the user to select the 'experiment' (in our case, 'past1000' and 'historical'), the model (for example, MPI-ESM-P), the variable (pr = precipitation) and the time frequency (mon = monthly). A click on the search button displays all matching results. The data download can be performed interactively, by clicking on the links of the individual files, or by downloading a 'wget' Unix script that can be run locally on a Unix computer. All files are written in netCDF format, which also includes the metadata, and they can be accessed with a variety of software tools, for instance the public statistical package R, the Climate Data Operators (CDO) or the netCDF Operators (NCO).
Data of the ECHO-G simulation can be retrieved from the CERA data bank of the German Climate Computing Centre (DKRZ). An account on the CERA data bank is required to access the data and can be obtained by sending an email request; CERA will then provide a personal account and password as soon as possible. Once an account has been created, the data of the ECHO-G simulation can be accessed.

News Article | December 9, 2015

The CMIP5 models29 analysed here are listed in Extended Data Table 1, with corresponding references. We use one ensemble member (r1i1p1) from 25 models that had available monthly output of atmospheric temperature and humidity, precipitation, and energy fluxes for the pre-industrial control (piControl) and abrupt quadrupled CO2 (abrupt4×CO2) experiments at the time of analysis. The Gregory method10, 20 is employed using each model's piControl and abrupt4×CO2 runs. For each year of the abrupt4×CO2 run, the global-mean abrupt4×CO2 anomaly of a physical quantity (for example, LvP) relative to the piControl simulation is paired against the corresponding anomaly of 2-m air temperature, generating a scatterplot (Extended Data Fig. 1). To compute the annual anomalies, the piControl 21-year mean, centred on the corresponding year of the abrupt4×CO2 simulation, is subtracted from each abrupt4×CO2 1-year mean. Subtracting this running mean removes possible influences of climate model drift on the anomalies. The scatterplots are generated using 150 years of the abrupt4×CO2 simulation when available (140 years are available from the CNRM-CM5-2 and IPSL-CM5A-MR models). A least-squares linear regression is then applied to each scatterplot, with the slope and y-intercept of the fit representing the temperature-mediated and rapid responses to CO2 forcing, respectively. The Gregory methodology is displayed visually for the GFDL-CM3 model in Extended Data Fig. 1, which shows the yearly evolution of globally averaged LWC, SWA, SH and LvP changes after a quadrupling of atmospheric CO2. The physical interpretation of these changes has been thoroughly discussed in numerous studies6, 12, 16, 18, 30, 31, 32. The high degree of linearity of the scatterplots demonstrates the reliability of this approach for separating temperature-mediated responses and rapid adjustments. As shown in Fig.
2, model spread in the temperature-mediated SWA response, dSWA/dT, particularly for clear sky, contributes substantially to the spread in Lv dP/dT. The total change in LvP per unit warming (Lv ΔP/ΔT, which includes the temperature-mediated response and the rapid adjustment), however, is not correlated with the corresponding total change in SWA across models (Extended Data Fig. 3a). This may be a consequence of the large scatter in the rapid adjustment of LvP, which is not strongly related to that of SWA (Extended Data Fig. 3b). Reference 7 found the opposite result using eight CMIP3 models: that the temperature-mediated responses of latent heating and solar absorption are not correlated, but that total changes at a warming of 2 K are anti-correlated (r = −0.66). The differences between the results in ref. 7 and ours probably originate from different analysis methods and sampling. For example, ref. 7 used only 8 models from CMIP3, whereas we use 25 from CMIP5. Additionally, ref. 7 combined different types of CO2 forcing scenarios (for example, 1% yr−1 increase to CO2 doubling or quadrupling, and instantaneous CO2 doubling) and based their temperature-mediated estimates on scatterplots of relatively few (5–10) data points from each simulation. Their analysis shows that quite different slopes may be obtained from different forcing simulations, even with the same model. We argue that our results are more robust. In addition to using many more models, we base our temperature-mediated estimates on a large number of data points (140–150) from a single forcing scenario (instantaneous CO2 quadrupling). Furthermore, our findings are consistent with results reported in ref. 4, where it was also found that total changes in precipitation (per unit warming) are not correlated with those in shortwave absorption under doubled CO2, using a larger sample of CMIP3 models (14) than used in ref. 7.
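The Gregory-style decomposition described above reduces, for each quantity, to a single linear fit of yearly anomalies against temperature anomalies. A minimal sketch (function and variable names are our own):

```python
import numpy as np

def gregory_fit(dT, dF):
    """Regress yearly global-mean anomalies of a quantity (dF)
    against 2-m air temperature anomalies (dT). The slope is the
    temperature-mediated response per kelvin of warming; the
    y-intercept is the rapid adjustment to the forcing."""
    slope, intercept = np.polyfit(np.asarray(dT, float),
                                  np.asarray(dF, float), 1)
    return slope, intercept
```

Applied to 140–150 yearly anomaly pairs from an abrupt-forcing run, the fit separates the two responses in one step.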
Radiative kernels were developed using an offline version of the MPI-ECHAM5 radiation code; they represent the sensitivity of top-of-atmosphere (TOA) and surface shortwave radiative fluxes to small perturbations in atmospheric specific humidity and surface albedo14. Global-mean temperature-mediated SWA responses due to water vapour changes are computed as follows: (1) temperature-mediated responses in the logarithm of specific humidity at all months, locations and pressure levels are multiplied by the clear-sky shortwave specific-humidity atmospheric (TOA minus surface) kernel. Global-annual-mean kernel values are shown in Extended Data Fig. 4a. Specific humidity responses are computed as the difference between the abrupt4×CO2 − piControl anomalies averaged over years 121–150 and years 1–30 of the abrupt4×CO2 simulation, normalized by the corresponding difference in 2-m air temperature. (2) The product is integrated over the depth of the troposphere (defined as all levels between the surface and a tropopause height that varies linearly with latitude, from 100 hPa at the Equator to 300 hPa at the poles), and then averaged over all months and locations. Temperature-mediated SWA responses due to surface albedo are computed in similar fashion, but with surface albedo kernels and temperature-mediated albedo responses. The final kernel-derived temperature-mediated SWA response for each model (dSWA/dT(k) in Fig. 2a) is the sum of the water vapour and surface albedo components of the response, with the vapour component dominating14. It should be noted that the computation of atmospheric radiative feedbacks using kernels is associated with uncertainty12, 14. In this study, our goal is not to obtain perfectly accurate quantitative estimates of temperature-mediated responses with the kernels. Rather, we apply the kernels to assess whether model spread in dSWA/dT originates from differences in the response of atmospheric water vapour to surface warming.
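Steps (1) and (2) above amount to a masked multiply-and-sum. The following sketch shows the idea for a zonal-mean slice; it is an illustration with our own names and a simplified latitude weighting, not the authors' kernel code.

```python
import numpy as np

def kernel_swa_response(dlnq_dT, kernel, plev, lat):
    """Kernel estimate of the water-vapour part of dSWA/dT.
    dlnq_dT: response of ln(specific humidity) per K, shape [lev, lat];
    kernel: shortwave specific-humidity kernel, shape [lev, lat];
    plev in hPa; lat in degrees. The product is summed over
    tropospheric levels (tropopause varying linearly from 100 hPa at
    the Equator to 300 hPa at the poles) and cosine-weighted in
    latitude."""
    trop = 100.0 + 200.0 * np.abs(lat) / 90.0     # tropopause pressure (hPa)
    mask = plev[:, None] >= trop[None, :]         # keep tropospheric levels
    col = np.where(mask, kernel * dlnq_dT, 0.0).sum(axis=0)  # column sum
    w = np.cos(np.deg2rad(lat))                   # area weighting
    return float(np.sum(col * w) / np.sum(w))
```

In the full calculation the same operation runs over all months and grid points, and a surface-albedo term is added.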
We argue that the kernel calculations are sufficiently accurate for this purpose. Extended Data Fig. 4 shows that kernel-derived dSWA/dT is, as expected, significantly correlated across models with the globally averaged responses of water vapour. By contrast, the actual model-produced dSWA/dT is not correlated with the water vapour responses, supporting our finding that spread in dSWA/dT is not explained by differences in the water vapour response. This may be partly because water vapour responses (in a fractional sense) and their intermodel variability are largest in the upper troposphere, where water vapour changes have little effect on total-column SWA (Extended Data Fig. 4a). Our conclusions are also supported by ref. 12, where it is independently shown that the CMIP5 spread in enhanced solar absorption under CO2 forcing cannot be reproduced by imposing simulated changes in atmospheric temperature and moisture on the single-column Fu–Liou radiative transfer model. Nonetheless, to assess uncertainty in kernel-estimated temperature-mediated responses, calculations with additional kernels produced from different radiation codes would be necessary. SWA sensitivity to water vapour variability is computed on the basis of spatial and temporal variations of precipitable water (PW) in the control climate. All grid cells and months from 150 years of a model's piControl simulation over the tropical oceans are aggregated into one sample to compute the sensitivity. The years 2001–2009 and 1984–2009 are used for estimates based on the CERES-EBAF (Clouds and the Earth's Radiant Energy System Energy Balance and Filled) and ISCCP-FD (International Satellite Cloud Climatology Project flux data set) products, respectively. Tropical oceans are defined as grid cells with centres between 30° S and 30° N and with land fraction less than 0.50. All model output is regridded to 2.5° × 2.5° latitude–longitude before performing calculations.
To compute the sensitivity, the clear-sky SWA at each grid cell and for each month is normalized by incoming solar radiation, then binned according to PW with equal bin size of 2 kg m−2. SWA is averaged within each PW bin and plotted against the bin centre value (Fig. 3a). Only bins with at least 20 data values in every model and observational source are considered, resulting in a common PW range of 12–58 kg m−2. The linear regression slope of the SWA versus PW scatterplot represents the SWA sensitivity to varying PW (that is, dSWA/dPW). Statistical uncertainty in dSWA/dPW is computed as the 95% confidence interval (CI) of the regression slope, derived from the estimated standard error of the slope parameter33. We consider only tropical oceans when computing dSWA/dPW owing to the relatively small variability of surface albedo and solar zenith angle within this region, better isolating the effect of PW on SWA. To examine whether the remaining small variations of surface albedo or solar zenith angle affect dSWA/dPW, we re-compute dSWA/dPW by also conditioning on surface albedo and/or zenith angle. Specifically, dSWA/dPW is recomputed from locations and months where albedo and/or zenith angle vary by no more than 0.01 or 1°, respectively. While the recomputed dSWA/dPW values vary slightly from the original values computed with the entire domain, they are highly correlated with the original values across models (not shown). This suggests that the influence of albedo and zenith angle on dSWA/dPW is very small. Our computation of dSWA/dPW does not isolate the effect of aerosols on SWA. Models differ in the types of aerosols represented, the concentrations and optical properties of the aerosols, and the quantitative parameterization of aerosol scattering and absorption. These factors could potentially modulate the relationship between SWA and PW in models, and thus dSWA/dPW may not completely reflect the physics of water vapour absorption. 
One possible example of this is with the GFDL models. All three GFDL models (GFDL-CM3, GFDL-ESM2G, GFDL-ESM2M) use the same shortwave parameterization for water vapour absorption and scattering by aerosols34, 35, 36, 37; yet the CM3 absorbs more solar radiation in moist conditions, leading to larger dSWA/dPW (Extended Data Fig. 5). The CM3 implements an interactive aerosol scheme with different aerosol optical properties from the ESM models (which prescribe aerosols), resulting in enhanced and more realistic downward clear-sky surface shortwave flux due to reduced aerosol direct effects34. We speculate that the reduced aerosol direct effects may leave more solar radiation available for absorption by lower-tropospheric water vapour, and thus produce larger SWA in moist conditions in the CM3. The extent to which differences in aerosol concentrations/properties affect the variability in dSWA/dPW among other models is not known without a more rigorous and controlled investigation. However, we suspect these effects are generally small, as (1) the GFDL models represent the only instance in which dSWA/dPW substantially differs among models with similar parameterizations of solar absorption by water vapour (Fig. 4, Extended Data Fig. 6), and (2) dSWA/dPW values computed using different forcing scenarios (for example, historical and RCP8.5, which potentially exhibit large variability in aerosols) are very similar to those computed with the piControl (not shown). Stratospheric ozone is another strong absorber of solar radiation. Ozone may influence dSWA/dPW if its concentration varies systematically with atmospheric moisture. Although column-integrated ozone tends to decrease with increasing PW in the 16 models that have available ozone output, there is no cross-model correlation between the sensitivity of ozone concentration to varying PW and dSWA/dPW (not shown). This suggests that the covariability of ozone and PW does not systematically affect dSWA/dPW.
Furthermore, dSWA/dPW is similar to that in Extended Data Fig. 5 when it is computed from a smaller sample of locations and months exhibiting little variability in column-integrated ozone (not shown). This strengthens the case that ozone is not significantly affecting model variability in dSWA/dPW. However, the parameterization of ozone absorption probably influences the mean position of the SWA versus PW curve for each model (that is, the average SWA over the range of PW analysed). One possible example of this is with the INM-CM4 and MRI-CGCM3 models. Both models use the same shortwave parameterization for water vapour absorption38, but have different treatments of solar absorption by ozone39, 40. We speculate that this partly explains the similar slopes but different vertical placement of the SWA versus PW curves for these models (Extended Data Fig. 5). As discussed above, dSWA/dPW values are nearly invariant to the simulation (for example, piControl, abrupt4×CO2, historical, RCP8.5) from which they are calculated in models (not shown). They are also very similar when computed from a subset of years as short as that used from observations (2001–2009). Furthermore, dSWA/dPW values computed from various percentiles of the SWA distribution within each PW bin (ranging from the 10th to the 90th percentile) are similar to those computed with the SWA bin mean (not shown). These findings further demonstrate that the methodology robustly quantifies the dependence of SWA on atmospheric moisture. The CERES-EBAF data set provides clear-sky radiative fluxes at the TOA and surface on a grid comparable to climate models and has global coverage. TOA fluxes are based on satellite measurements, and surface fluxes are generated with a radiative transfer model and are constrained by the TOA fluxes22. Although the surface fluxes are model produced, they are computed with a radiative transfer algorithm that is arguably more advanced and physically based than that used in most climate models.
In particular, the radiation code for solar absorption employs the formal correlated-k-distribution framework with absorption coefficients (k values) being determined directly from detailed line-by-line (LBL)-generated k distributions41, 42. This approach is arguably superior to that in most CMIP5 models, in which k values are in many cases determined with non-physical mathematical optimization procedures (for example, refs 36 and 43). In addition, the treatment of the pressure–temperature–concentration dependence of k values in the CERES-EBAF scheme is more physical and higher in resolution than in most CMIP5 models41. The final parameterization describing water vapour absorption in CERES-EBAF also has many mathematical terms (>50) approximating shortwave transmission44. CERES-EBAF surface fluxes are in good agreement with point observations22, and the radiation scheme used to generate surface fluxes performs well when compared with recent LBL calculations24. To compute CERES-EBAF-derived dSWA/dPW, water vapour data are taken from three sources: (1) the Special Sensor Microwave Imager (SSM/I)45; (2) a product developed by Remote Sensing Systems (RSS) that combines measurements from various instruments, including SSM/I, the Special Sensor Microwave Imager Sounder (SSMIS), the Advanced Microwave Scanning Radiometer (AMSR-E), and the WindSat Polarimetric Radiometer46; and (3) the Television Infrared Observation Satellite Operational Vertical Sounder (TOVS)47. ISCCP-FD-derived dSWA/dPW is based on TOVS water vapour, which was used in the development of the flux data set27. The uncertainty range of CERES-EBAF-derived dSWA/dPW shown in the figures in this Letter (for example, Figs 3 and 4) reflects statistical uncertainty in the computation of dSWA/dPW and uncertainty due to the use of different PW data sets (see Fig. 3 legend).
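The correlated-k idea can be illustrated with a toy calculation (our own sketch, unrelated to the actual CERES-EBAF code): absorption coefficients from a synthetic "line-by-line" spectrum are sorted into a cumulative k-distribution, and the band-mean transmission is then approximated by a short weighted sum of exponentials:

```python
import numpy as np

def correlated_k_transmission(k_spectrum, u, n_terms=8):
    """Approximate the band-mean transmission T(u) = mean_nu exp(-k(nu)*u)
    by sorting absorption coefficients into a k-distribution and using
    n_terms equal-probability intervals, each represented by its mean k.
    A crude quadrature, for illustration only."""
    k_sorted = np.sort(k_spectrum)
    groups = np.array_split(k_sorted, n_terms)          # equal-probability bins
    k_rep = np.array([g.mean() for g in groups])        # representative k per bin
    w = np.array([g.size for g in groups]) / k_sorted.size
    return np.sum(w * np.exp(-k_rep * np.asarray(u)[..., None]), axis=-1)

# synthetic spectrum with a wide (log-normal) spread of line strengths
rng = np.random.default_rng(1)
k = 10.0 ** rng.normal(-1.0, 1.5, 50000)
u = np.linspace(0.0, 5.0, 6)                 # absorber path amounts
t_exact = np.exp(-np.outer(u, k)).mean(axis=1)   # brute-force "line-by-line"
t_ck = correlated_k_transmission(k, u, n_terms=8)
```

Eight terms already track the 50,000-line reference closely here, which is the practical appeal of the approach; the CERES-EBAF scheme derives its k values from detailed LBL spectra rather than from optimization fits.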
Inherent uncertainties in CERES-EBAF fluxes, including possible measurement uncertainty, errors in the radiative transfer scheme, and uncertainty in the methodology that generates clear-sky fluxes22, are difficult to quantify and are not included. Potential uncertainties in the individual PW measurements are also not accounted for. Thus, the effective uncertainty of observed dSWA/dPW is probably larger than indicated in the figures. Nonetheless, the conclusion that most CMIP5 models underestimate dSWA/dPW is robust, as it is strongly supported by a recent study in which radiation schemes are evaluated against high-quality LBL calculations25. The severe underestimation of dSWA/dPW by the GISS models stands out in Fig. 4. In these models, solar absorption by water vapour is parameterized with a pseudo-k-distribution approach consisting of 15 mathematical terms48, 49, 50, 51, 52. Some of the other CMIP5 models in our analysis, including those with the largest dSWA/dPW, use as many terms in their parameterizations (Fig. 4). Thus the poor performance of the GISS parameterization is not simply the result of the number of computations employed to approximate shortwave transmission. Rather, the finer details of how the parameters of the analytical expressions (for example, pseudo absorption coefficients and weights for terms) are developed, and the quality of the reference calculations on which the parameterizations were originally based, are probably important, among other characteristics. The GISS parameterization employed in CMIP5 was developed from a combination of old and relatively new methods50, 51, 52. The resulting analytical expressions, which combine pressure–temperature–spectral absorption dependency, are known to underestimate solar absorption in moist atmospheres based on comparison with modern LBL calculations24, 25.
Updates have been made to the GISS radiation scheme since CMIP5, and these will probably result in significant improvements in solar absorption and dSWA/dPW in future generations of the GISS model25. Apart from the GISS models, the number of mathematical terms employed in shortwave parameterizations appears to exert a general influence on parameterization performance (Fig. 4). A specific example involves the models developed at IPSL and CNRM. In these models, water vapour absorption is parameterized with an algorithm originally developed in 1980 and later modified for use in the operational European Centre for Medium-Range Weather Forecasts (ECMWF) model53, 54. It consists of a few shortwave bands, within which Padé approximants represent gaseous absorption by water vapour53. In the IPSL models, only two shortwave bands are used54. These models clearly underestimate mean SWA over the range of PW analysed, and their dSWA/dPW is smaller than that of all models except GISS (Extended Data Fig. 5). By contrast, the CNRM scheme employs a total of six shortwave bands28. Mean SWA is considerably larger and more realistic in these models compared to IPSL, and dSWA/dPW is marginally improved as well (Extended Data Fig. 5). Thus the number of spectral bands and corresponding computations can have a large impact on the realism of solar absorption. A comparison of the models that implement a 7-band parameterization originally developed in ref. 38 (BCC-CSM1.1, BCC-CSM1.1(m), CCSM4, INM-CM4, MRI-CGCM3, NorESM1-M) sheds light on specific characteristics of parameterizations of gaseous absorption, other than the number of mathematical terms, that appear important for SWA. The original parameterization for water vapour absorption in ref. 38 consists of a 7-term pseudo-k-distribution summation with absorption coefficients and weights determined by fits to empirical and LBL calculations, respectively38, 50. It is employed by the INM-CM4 and MRI-CGCM3 models (Extended Data Fig. 6).
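On the machinery involved: schemes of this family represent band transmission with low-order rational (Padé) fits rather than sums of exponentials. A generic, illustrative example, which is not the ECMWF scheme's actual fit, is the classic [2/2] Padé approximant of exp(−x) used as a cheap stand-in for an exponential transmission factor:

```python
import numpy as np

def pade22_exp_neg(x):
    """[2/2] Pade approximant of exp(-x): a low-cost rational stand-in
    for an exponential transmission factor, accurate at moderate scaled
    absorber amounts. Illustrative only; operational schemes fit their
    own coefficients to reference transmission data."""
    num = 1.0 - x / 2.0 + x**2 / 12.0
    den = 1.0 + x / 2.0 + x**2 / 12.0
    return num / den

# accuracy over a moderate range of (scaled) absorber amounts
x = np.linspace(0.0, 2.0, 201)
max_err = np.max(np.abs(pade22_exp_neg(x) - np.exp(-x)))
```

A single rational evaluation replaces an exponential per band, which is why such schemes are fast, and why their accuracy depends strongly on how many bands the spectrum is split into.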
The parameterization was later modified to account for additional near-infrared water vapour absorption, based on updated spectroscopic data, and for continuum absorption in the shortwave, which resulted in refitting the 7 parameterized absorption coefficients55. The updated parameterization is employed in the BCC-CSM1.1, BCC-CSM1.1(m), CCSM4, and NorESM1-M models. The models using the updated parameterization exhibit dSWA/dPW that is improved by a small but non-negligible amount (Extended Data Fig. 6). Consideration of weak water vapour absorption lines and continuum absorption is therefore somewhat important for the accurate simulation of changes in solar absorption in a warming climate. This is consistent with the findings presented in refs 55 and 56. Note that most models with larger and more realistic dSWA/dPW tend to account for continuum absorption in their parameterizations (Extended Data Fig. 6). Other details of the CMIP5 shortwave parameterization schemes, including the treatment of scattering by aerosols and molecules and of overlapping absorption by multiple gaseous species, have not been thoroughly investigated here. They too may influence the intermodel spread in dSWA/dPW. Even if these details are indeed influencing dSWA/dPW, it would not change the conclusion that shortwave parameterizations in general are important for the spread in simulated hydrologic cycle intensification. We exploit the strong model relationships among dSWA/dPW, dSWA/dT, and Lv dP/dT (Extended Data Fig. 7a, b) to compute a hypothetical change in LvP that may occur at the end of the twenty-first century under realistic climate forcing if the true temperature-mediated SWA response to CO2 forcing, dSWA/dT, were perfectly known. The 'true' dSWA/dT is approximated from the model relationship between dSWA/dT and dSWA/dPW, using the observed value of dSWA/dPW based on CERES-EBAF (Extended Data Fig. 7a).
The resulting true value of dSWA/dT is then used with the model relationship between the temperature-mediated LvP response, Lv dP/dT, and dSWA/dT to estimate a 'bias' in Lv dP/dT that originates from a bias in dSWA/dT (Extended Data Fig. 7b). The bias for each model is then removed from the predicted precipitation change at the end of the twenty-first century in the RCP8.5 scenario relative to the piControl according to equation (2), (LvΔP/ΔT)* = LvΔP/ΔT − bias, where (LvΔP/ΔT)* is the constrained total change in LvP normalized by surface warming with the bias removed, LvΔP is the total late twenty-first century LvP change (mean of years 2081–2100 in RCP8.5 minus mean of years 131–150 in piControl), and ΔT is the late twenty-first century 2-m air temperature change computed similarly to LvΔP. The above procedure makes several assumptions, including: (1) that the middle of the range in CERES-EBAF-computed dSWA/dPW is most representative of the real atmosphere; (2) that the best 'true' values of dSWA/dT (Extended Data Fig. 7a, blue star) and Lv dP/dT (Extended Data Fig. 7b, black horizontal line) lie on the linear regression lines of the cross-model scatterplots; and (3) that Lv dP/dT contributes linearly (with ΔT) to the total late twenty-first century change in LvP computed from RCP8.5, as depicted in equation (2). Removing the bias in Lv dP/dT due to a bias in dSWA/dT reduces the model spread in predicted precipitation change per unit warming at the end of the twenty-first century by 37%, and reduces the ensemble mean increase by 38% (Extended Data Fig. 7c). Even if we do not normalize by differences in surface warming ΔT, which is the main driver of the spread in total precipitation change LvΔP, a discernible reduction in spread of 27% and in the ensemble mean increase of 25% can be achieved (Extended Data Fig. 7d). The spread reduction is substantial, considering the numerous factors, in addition to the temperature-mediated SWA response to CO2 forcing, potentially contributing to model scatter in RCP8.5 projections.
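The two-step emergent-constraint procedure described here can be sketched with synthetic numbers (a hypothetical ensemble for illustration; none of the values below are from the paper):

```python
import numpy as np

def constrain_dpdt(dswa_dpw, dswa_dt, ldpdt, dswa_dpw_obs):
    """Sketch of the two-step constraint: (1) infer a 'true' dSWA/dT from
    the cross-model regression of dSWA/dT on dSWA/dPW, evaluated at the
    observed dSWA/dPW; (2) convert each model's dSWA/dT error into a
    bias in the precipitation response via the cross-model regression of
    L dP/dT on dSWA/dT, and remove that bias."""
    # step 1: dSWA/dT versus dSWA/dPW across models
    s1, i1 = np.polyfit(dswa_dpw, dswa_dt, 1)
    dswa_dt_true = s1 * dswa_dpw_obs + i1
    # step 2: L dP/dT versus dSWA/dT across models
    s2, _ = np.polyfit(dswa_dt, ldpdt, 1)
    bias = s2 * (dswa_dt - dswa_dt_true)
    return ldpdt - bias

# synthetic 25-model ensemble with correlated quantities plus noise
rng = np.random.default_rng(2)
dswa_dpw = rng.normal(0.5, 0.1, 25)
dswa_dt = 2.0 * dswa_dpw + rng.normal(0, 0.05, 25)
ldpdt = 3.0 - 1.5 * dswa_dt + rng.normal(0, 0.1, 25)
constrained = constrain_dpdt(dswa_dpw, dswa_dt, ldpdt, dswa_dpw_obs=0.6)
```

Because the regression-explained part of the scatter is removed, the constrained ensemble necessarily has a smaller spread than the raw one, mirroring the 37% reduction reported above.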
These include temperature-mediated responses of other energy budget components (Extended Data Fig. 2), greenhouse-gas forcing other than CO2, aerosols, and the rapid LvP adjustments to all forcings. For instance, ref. 4 demonstrated the potent role of black carbon forcing in the CMIP3 spread in simulated global precipitation change under a realistic climate change scenario. That we obtain a 37% reduction in the LvΔP/ΔT spread under RCP8.5 by constraining only the temperature-mediated component of SWA change under pure CO2 forcing (and only a somewhat larger reduction, of 45%, when repeating the same exercise with the quadrupled CO2 runs, not shown) suggests that the role of black carbon forcing in the spread may be less potent in CMIP5 than in CMIP3. This is an interesting possibility worthy of further analysis. How may uncertainty in late twenty-first century precipitation change be reduced further? As discussed above, a realistic climate change scenario includes greenhouse-gas forcing (from CO2 and other gases), aerosol forcing, and rapid adjustments to these forcings. A better understanding of all these factors is therefore critical, including understanding of the rapid adjustment to CO2 forcing. This factor has a non-negligible spread (Extended Data Fig. 3b) and is not strongly correlated with the corresponding intermodel variations in the temperature-mediated response (r = −0.23, not shown). Additionally, the spread in the temperature-mediated LvP response to CO2 forcing is not driven only by the SWA component, as indicated by the residual scatter in Fig. 2b. Extended Data Fig. 2 shows that the net atmospheric longwave cooling response, dLWC/dT, also has a large spread that is correlated with Lv dP/dT. dSWA/dT and dLWC/dT are not correlated with each other (|r| < 0.1, not shown), suggesting that dLWC/dT is another independent source of spread that demands better understanding. dLWC/dT is only correlated with Lv dP/dT for all-sky (Extended Data Fig.
2), implying that clouds may play an important role in the intermodel relationship. This is different from the case of dSWA/dT, in which clear-sky absorption by water vapour is critical. No statistical methods were used to predetermine sample size. Any codes used in the analysis in this paper and in the production of figures can be made available upon request. Please contact A.M.D.

Dupont J.C.,IPSL | Haeffelin M.,IPSL | Stolaki S.,LMD | Elias T.,HYGEOS
Pure and Applied Geophysics | Year: 2016

Data from a suite of in situ sensors and passive and active remote sensing instruments, dedicated to documenting simultaneously the radiative and thermodynamical processes driving the fog life cycle at the SIRTA Observatory (instrumented site for atmospheric remote sensing research) near Paris during two 6-month periods, are analysed. The study focuses on the relative role of key physical processes and their interactions during the fog formation, development and dissipation phases. From the analysis of detailed observations, this work presents the range of values that critical parameters must take for fog and quasi-fog formation. In our study, we consider fog (horizontal visibility lower than 1 km; a dataset of 300 h) and quasi-fog (horizontal visibility ranging from 1 to 2 km; a dataset of 400 h) events induced by radiative cooling (53 events) and stratus lowering (64 events). For the radiative fog events (with radiative cooling during prefog conditions), we note that the longwave net radiative flux (around −60 ± 5 W/m2) induces a cooling of the surface layer. The vertical structure of this cooling is controlled by dynamics, that is, by wind shear and horizontal and vertical velocities. In cases of very low mixing (wind speed below 0.6 m/s), the thermal stability is very strong, with a temperature inversion of around 3.5 °C over 10 m and a humidity gradient reaching 10 %, preventing vertical development of the fog layer. For stratus-lowering fog events, the altitude of the stratus layer, the vertical mixing and the absolute value of humidity are the driving parameters of fog formation. Our statistical analysis shows that a stratus cloud with a base around 170 m and a small cloud-base subsidence rate of 50 m/h leads to fog, whereas a stratus cloud with a base around 800 m agl and a larger cloud-base subsidence rate of 190 m/h leads to quasi-fog situations with a substantial increase of the stratus liquid water path. © 2015, Springer Basel.

Robinson L.F.,University of Bristol | Robinson L.F.,Woods Hole Oceanographic Institution | Adkins J.F.,California Institute of Technology | Frank N.,IPSL | And 4 more authors.
Deep-Sea Research Part II: Topical Studies in Oceanography | Year: 2014

Deep-sea corals were discovered over a century ago, but it is only over recent years that focused efforts have been made to explore the history of the oceans using the geochemistry of their skeletal remains. They offer a promising archive of past oceanic environments given their global distribution, layered growth patterns, longevity and preservation, as well as our ability to date them using radiometric techniques. This paper provides an overview of the current state of the art in geochemical approaches to using deep-sea coral skeletons to explore the history of the ocean. Deep-sea coral skeletons have a wide array of morphologies (e.g. solitary cup corals, branching colonial corals) and materials (calcite, aragonite and proteins). As such, their biomineralization strategies are diverse, leading to complex geochemistry within coral skeletons. Notwithstanding these complications, progress has been made on developing methods for reconstructing the past oceanographic environment using trace-element and isotopic methods. Promising approaches within certain coral groups include clumped isotopes and Mg/Li for temperature reconstructions, boron isotopes and radiocarbon for carbon cycling, εNd and radiocarbon for circulation studies, and δ15N, P/Ca and Ba/Ca for nutrient tracer studies. Likewise, there is now a range of techniques for dating deep-sea coral skeletons (e.g. U-series, radiocarbon) and determining their growth rates (e.g. radiocarbon and 210Pb). Dating studies on historic coral populations in the Atlantic, Southern Ocean and Pacific point to climate and environmental changes being dominant controls on coral populations over millennial and orbital timescales. This paper provides a review of a range of successes and promising approaches. It also highlights areas in which further research would likely provide new insights into biomineralization, palaeoceanography and the distribution of past coral populations. © 2013 Elsevier Ltd.

News Article | October 5, 2016

No statistical methods were used to predetermine sample size. The experiments were not randomized and the investigators were not blinded to allocation during experiments and outcome assessment. To simulate human migration out of Africa and its subsequent global dispersal under time-varying climate conditions, we employ a reaction/diffusion model for human density ρ(x,y,t) at each grid point (x,y) and time t. Similar models, also known as modified Fisher–Kolmogorov or Fisher–Skellam models, have been used in previous idealized human dispersal modelling studies31, 32. In our HDM the reaction terms (growth G and mortality M) depend on three key vegetation and climate parameters: net primary productivity N(x,y,t), which controls the availability of carbon-based food sources, and desert fraction d(x,y,t) and temperature T(x,y,t), which are interpreted here as key climate stressors that determine the mortality of H. sapiens. Furthermore, a quadratic mortality term is included to avoid exponential growth during the simulation. The governing equation for ρ(x,y,t) combines these growth, mortality, quadratic-mortality and diffusion terms. The growth rate of human density is parameterized as a function of net primary productivity N(x,y,t) in terms of Γ(N) = γ{0.5tanh [(N − N )/N ] + 0.4}. Assuming that the early human diet consisted largely of grazing mammals, this equation captures the notion that the primary productivity in each model grid box is linked to the edible biomass. A net primary productivity N is required in each grid box to maintain a human population. We make the assumption that the mortality rate in each grid box is strongly controlled by the desert fraction d and the annual mean surface temperature T. Climates with temperatures below T will abruptly increase human mortality. Furthermore, we include a dependence of M on the desert fraction d. Human mortality will increase rapidly, as a result of sparse food and water resources, if the desert fraction increases beyond d .
Both factors are parameterized as M(d,T) = μ max{1 + tanh[(d − d )/d ], 1.67[1 − tanh[(T − T )/T ]]}. The explicit equations used here for growth and mortality differ from the simplified logistic growth law used in previous studies33. For coastal points, we assume an alongshore advection with a mean alongshore velocity U (see Supplementary Table 1); the time evolution of ρ at coastal points is then calculated with this additional advection term. Accelerated migration along rivers34 is not taken into account. The simulated human dispersal represents a strongly idealized scenario, which does not include competition or assimilation with H. neanderthalensis, H. erectus or Denisovans. Extending the model to a more realistic multi-actor framework would require more detailed information on the respective climate sensitivities and better observational constraints on the initial population densities. Short-distance sea-faring is parameterized as a Gaussian decay away from the coast with a width of 1°. The HDM is discretized using a first-order upwind scheme in time and second-order central differences in space. Our numerical implementation uses a 1° × 1° horizontal grid with a time step of 1 year. The physical model parameters chosen here are listed in Extended Data Table 1. To study the effects of slowly evolving glacial boundary conditions on the climate system and human dispersal, we conducted a transient glacial–interglacial climate model simulation (abbreviated as SIM) with the earth system model LOVECLIM35, version 1.1. The atmospheric component of the coupled model LOVECLIM is ECBilt36, a spectral T21, three-level model based on quasi-geostrophic equations extended by estimates of ageostrophic terms. The model contains a full hydrological cycle, which is closed over land by a bucket model for soil moisture and a runoff scheme.
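A minimal sketch of such a discretized reaction-diffusion update is given below: forward Euler in time with a 5-point Laplacian on a uniform grid. The growth, mortality and parameter values are illustrative placeholders rather than those of Extended Data Table 1 (only the 0.4% per year growth rate echoes the text), and periodic boundaries stand in for the real land-sea geometry:

```python
import numpy as np

def hdm_step(rho, growth, mort, K, dt=1.0, dx=100.0, quad=1e-4):
    """One forward-Euler step of a Fisher-Skellam-type density model:
    d(rho)/dt = (growth - mort)*rho - quad*rho**2 + K * Laplacian(rho).
    growth/mort are per-cell rates (yr^-1), K a diffusivity (km^2 yr^-1),
    dx the grid spacing (km). The quadratic term caps exponential growth.
    Illustrative parameter values; periodic boundaries via np.roll."""
    lap = (np.roll(rho, 1, 0) + np.roll(rho, -1, 0) +
           np.roll(rho, 1, 1) + np.roll(rho, -1, 1) - 4.0 * rho) / dx**2
    rho_new = rho + dt * ((growth - mort) * rho - quad * rho**2 + K * lap)
    return np.clip(rho_new, 0.0, None)       # densities stay non-negative

# tiny demo: a population seed spreading under uniform, benign conditions
rho = np.zeros((40, 40))
rho[20, 20] = 10.0
for _ in range(500):                          # 500 one-year steps
    rho = hdm_step(rho, growth=0.004, mort=0.0, K=42.0)
```

With dt·4K/dx² « 1 the explicit scheme is stable, and the seed spreads as a travelling front whose speed scales as 2·sqrt(growth·K), the classic Fisher-Kolmogorov result.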
Diabatic heating due to radiative fluxes, the release of latent heat and the exchange of sensible heat with the surface are parameterized. The ocean–sea ice component of LOVECLIM, CLIO37, consists of a free-surface ocean general circulation model with 3° × 3° resolution coupled to a thermodynamic–dynamic sea ice model. Coupling between atmosphere and ocean is done via the exchange of freshwater and heat fluxes, rather than by virtual salt fluxes. The terrestrial vegetation module of LOVECLIM, VECODE38, computes the annual mean evolution of the vegetation cover based on annual mean values of several climatic variables. Orbital-scale, time-evolving ice-sheet boundary conditions in SIM are prescribed by changing ice-sheet orography and surface albedo. The corresponding anomalies were derived from the time-dependent ice-sheet reconstruction obtained from the CLIMBER-2 earth system model of intermediate complexity39, 40. Also, in LOVECLIM the vegetation mask is adjusted to reflect time-evolving changes in ice-sheet-covered areas. Time-varying atmospheric greenhouse gas concentrations are prescribed in the model following greenhouse gas measurements from the EPICA Dome C ice core24. Another important forcing considered here is orbitally induced insolation variations, which are calculated from the algorithm of ref. 41. We employ an acceleration technique, which compresses the time-varying external boundary conditions by a factor of 5. Instead of running the coupled model for the entire period of 408,000 years42, 43, the model experiment is 81,600 years long, while covering the entire forcing history of the last 408 thousand years. The acceleration technique is based on the assumption of relatively fast equilibration of surface variables to externally driven slow climate change. Through previous experimentation42 we have found that an acceleration factor of 5 is appropriate for the tasks envisioned here. Whereas the climate model run follows closely the methodology of ref.
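The factor-5 acceleration amounts to a simple remapping between model time and forcing (calendar) time, along these lines (our own sketch of the bookkeeping, not LOVECLIM code):

```python
# Boundary-condition acceleration: each simulated year reads the forcing
# valid 5 calendar years later than the previous simulated year, so the
# 408,000-year forcing history fits into an 81,600-year integration.
ACCEL = 5
FORCING_YEARS = 408_000
model_years = FORCING_YEARS // ACCEL      # length of the accelerated run

def forcing_year(model_year, accel=ACCEL):
    """Calendar year (counting forward from 408 ka) whose boundary
    conditions are applied at a given model year."""
    return model_year * accel
```

This bookkeeping is valid only under the stated assumption that surface variables equilibrate quickly relative to the compressed forcing changes.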
9, the current simulation uses a higher climate sensitivity, which amounts to ~4 °C per CO2 doubling. The result is a more realistic glacial/interglacial amplitude in surface temperatures compared to palaeo proxy data (see below). To validate the model against palaeo proxy data, we analyse 140 ka of the SIM model simulation (see below). This model simulation does not include the effects of millennial-scale variability associated with Dansgaard–Oeschger and Heinrich events. These will be added through a secondary model/data-based procedure (see below). While the full climate model simulation SIM covers 408 thousand years, only the last 125 thousand years are used to force the human dispersal model (see schematic in Extended Data Fig. 1). The variables that are used as part of the climate forcing of the HDM are the simulated changes in temperature [T (x,y,t)], net primary productivity [N (x,y,t)] and desert fraction [d (x,y,t)]. To compare the transient LOVECLIM simulation SIM with palaeo proxy data, we conduct an empirical orthogonal function (EOF) analysis of the simulated sea surface temperatures (SST) from 140–0 ka and compare the resulting leading EOF pattern and corresponding principal component with an EOF analysis conducted on 63 palaeo SST proxy reconstructions25, 44–89. The SST reconstructions25, 44–89 used for this analysis were required to cover the period from 140–10 ka. The locations and the leading EOF pattern and corresponding principal component of these palaeo proxy data are shown in Extended Data Fig. 2. The model simulation reproduces both the time evolution (Extended Data Fig.
2a) and the EOF pattern (Extended Data Fig. 2), in good agreement with the SST proxy data. We find higher EOF loadings for extratropical and some subtropical upwelling regions, and somewhat damped EOF values for the tropical oceans. The resulting global mean SST time series, which are based on the model and proxy EOF reconstructions (Extended Data Fig. 2c), exhibit a high degree of correlation and a similar magnitude for the transition from the Last Glacial Maximum (LGM, 21 ka) to the early Holocene. However, the magnitude of the Last Interglacial ocean warming (130–120 ka) is somewhat reduced in SIM compared to the palaeo proxy reconstructions of global SST. There is also a reduction of the precessional (21-thousand-year period) signal in the simulation relative to the palaeo data. Note that for this comparison we have made the assumption that annual mean SST in the model can be directly compared with a variety of SST proxies25, 44–89 (such as alkenone, Mg/Ca and assemblage data). This requires that the proxies can be interpreted as annual means, which in some regions is not necessarily the case90. Other direct time series comparisons between simulated physical variables and palaeo proxy data are shown in Fig. 1. We find, for instance, that both the magnitude and the timing of simulated surface temperatures over Antarctica in SIM match the ice-core data well (Fig. 1e). Furthermore, the simulated hydroclimate variations in the Levant region bear close resemblance on orbital timescales to palaeo proxy reconstructions from speleothems (Fig. 1c) and lacustrine data (Fig. 1d).
To further assess potential modelling uncertainties in hydroclimate for the selected LGM time slice, we compare the simulated anomalies of the rainfall ratio between the LGM and the pre-industrial climate (relative to the pre-industrial values) in SIM with other CGCM LGM experiments conducted as part of the Paleoclimate Modelling Intercomparison Project, phase 3 (PMIP3). The results (Extended Data Fig. 3) clearly indicate that, in spite of the fact that very similar boundary forcing conditions (greenhouse gas concentrations, ice-sheet topography and albedo, and orbital forcing) are used, large modelling uncertainties exist in the simulated LGM rainfall patterns. Focusing on the particularly important human migration corridor of northern Africa, the LOVECLIM SIM experiment agrees well with the simulated drying in the CCSM4 and COSMOS-ASO simulations. In contrast, the IPSL-CM5A-LR, GISS-E2-R and MPI-ESM-P models simulate significantly increased glacial rainfall over north Africa. For northern Europe, all CGCMs show an overall drying under glacial conditions. The experimental design of PMIP3 does not include transient model experiments, so it is unclear at this stage whether any of the other PMIP models would reproduce, for example, the recorded hydroclimate variations over the past 125 thousand years in the Levant region (Fig. 1d) in a qualitatively similar way to LOVECLIM. To account for the effects of millennial-scale Dansgaard–Oeschger (DO) and Heinrich events91, 92 on climate and the resulting changes in human dispersal, we reconstructed the corresponding anomalies of surface temperature T′m(x,y,t), net primary productivity N′m(x,y,t) and desert fraction d′m(x,y,t) (where the subscript m stands for the millennial-scale climate anomalies and the prime symbol denotes the deviation from the long-term mean) using a previously conducted transient LOVECLIM climate model simulation93 (abbreviated as MIL).
This simulation is a climate model hindcast of the period 50–30 ka, which includes both orbital-scale forcings (ice sheets, greenhouse gas concentrations and orbital changes) and freshwater-forced millennial-scale climate shifts associated with each observed DO and Heinrich event during this period. From this climate model simulation we calculated the regression patterns PT(x,y), PN(x,y) and Pd(x,y) between the simulated Iberian Margin SST and the corresponding spatiotemporal fields of temperature, net primary productivity and desert fraction. To reconstruct the millennial-scale anomalies [T′m(x,y,t), N′m(x,y,t), d′m(x,y,t)] for the entire HDM period 130–0 ka, we multiply the model-based regression patterns by the observed high-pass-filtered Iberian Margin SST variations25, S′IM(t); that is, T′m(x,y,t) = S′IM(t)PT(x,y); N′m(x,y,t) = S′IM(t)PN(x,y); d′m(x,y,t) = S′IM(t)Pd(x,y). The resulting millennial-scale anomaly maps were then added back to the corresponding fields of the orbital-scale SIM simulation [T (x,y,t), N (x,y,t), d (x,y,t)]. The comparison between reconstructed and observed Iberian Margin SST variability shows excellent agreement on both the orbital and the millennial timescale (Fig. 1b). Furthermore, we find good agreement on both timescales between reconstructed and simulated hydroclimate variations in the Levant region (Fig. 1d). The full climate forcing (temperature, net primary productivity and desert fraction) of the HDM is provided by adding the directly simulated LOVECLIM variables from SIM, T (x,y,t), N (x,y,t) and d (x,y,t), to the reconstructed millennial-scale anomalies T′m(x,y,t), N′m(x,y,t) and d′m(x,y,t), obtained from the model/data-based linear regression reconstruction, which uses the model-derived patterns of millennial-scale variability PT(x,y), PN(x,y) and Pd(x,y) from MIL and a time series of observed millennial-scale variability from an Iberian Margin sediment proxy SST reconstruction.
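The pattern-regression and reconstruction steps can be sketched generically as follows (our own illustration; the function and variable names are hypothetical, and the demo uses synthetic fields rather than LOVECLIM output):

```python
import numpy as np

def regression_patterns(index, fields):
    """Point-wise regression pattern P(x,y) of each field onto a
    reference index (for example, simulated Iberian Margin SST):
    P = cov(index, field) / var(index), computed over the time axis."""
    idx = index - index.mean()
    return {name: np.tensordot(idx, f - f.mean(axis=0), axes=(0, 0))
                  / np.sum(idx ** 2)
            for name, f in fields.items()}

def reconstruct(index_obs, patterns):
    """Millennial-scale anomaly fields: an observed (high-pass-filtered)
    index time series times the model-derived spatial pattern."""
    return {name: index_obs[:, None, None] * p
            for name, p in patterns.items()}

# synthetic demo: recover a known pattern from index * pattern + noise
rng = np.random.default_rng(4)
idx = rng.normal(0, 1, 200)                       # reference index
true_pat = rng.normal(0, 1, (10, 12))             # known spatial pattern
T_field = idx[:, None, None] * true_pat + 0.1 * rng.normal(0, 1, (200, 10, 12))
pats = regression_patterns(idx, {"T": T_field})
anom = reconstruct(idx, pats)                     # reconstructed anomalies
```

The same two functions would apply unchanged to the three fields (temperature, net primary productivity, desert fraction), one pattern each, as in the text.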
This yields: T(x,y,t) = To(x,y,t) + b T′m(x,y,t), N(x,y,t) = No(x,y,t) + b N′m(x,y,t), d(x,y,t) = do(x,y,t) + b d′m(x,y,t), where b in the standard HDM simulation is set to 1 (see Extended Data Fig. 1). To further test the effect of DO variability on human dispersal, we also conducted one experiment with b = 0 (Fig. 3c). We also include the time-varying coastline in the HDM climate driving fields T(x,y,t), N(x,y,t), d(x,y,t). The orbital- and millennial-scale contributions to temperature, net primary productivity and desert fraction can again be retrieved statistically through an Empirical Orthogonal Function (EOF) analysis of the fields T(x,y,t), N(x,y,t), d(x,y,t). The resulting EOF patterns and principal components are depicted in Extended Data Figs 4, 5, 6. Because of the linear operations conducted, the EOF patterns of millennial-scale variability for the respective variables are very similar to the regression patterns PT(x,y), PN(x,y), Pd(x,y) (not shown). To quantify the effects of climate variability and human dispersal processes on the dispersal of H. sapiens during the Late Pleistocene, the HDM is run in various parameter configurations and for a number of realistic and more idealized climate scenarios. All experiments start from the same initial H. sapiens density distribution in central Africa (see Fig. 2, upper left, for the pattern), representing idealized initial conditions at 125 ka. The results are essentially insensitive to moderate 15-degree latitudinal and longitudinal shifts of the initial H. sapiens distribution across central Africa. These initial dates are chosen to study how the prominent MIS5 precessionally paced openings of savannah-type corridors in northeastern Africa during HMW2 and 3 (Fig. 1) would have promoted the subsequent human dispersal out of Africa.
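An EOF analysis of a space-time field, as used above to separate orbital- and millennial-scale contributions, is in essence an SVD of the demeaned data matrix. The sketch below is illustrative only (synthetic one-mode data, invented dimensions), not the analysis code used for Extended Data Figs 4, 5, 6:

```python
import numpy as np

rng = np.random.default_rng(1)
nt, ny, nx = 400, 10, 16

# Synthetic field with one dominant mode plus white noise.
pc_true = np.sin(np.linspace(0.0, 8.0 * np.pi, nt))
eof_true = rng.standard_normal((ny, nx))
data = (pc_true[:, None, None] * eof_true
        + 0.2 * rng.standard_normal((nt, ny, nx)))

def eof_analysis(field, n_modes=3):
    """EOF decomposition of field(t, y, x) via SVD of the anomaly matrix.

    Returns (patterns, principal components, explained-variance fractions).
    """
    nt = field.shape[0]
    X = field.reshape(nt, -1)
    X = X - X.mean(axis=0)                 # remove the time mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    patterns = Vt[:n_modes].reshape((n_modes,) + field.shape[1:])
    pcs = U[:, :n_modes] * s[:n_modes]     # PC time series, scaled by amplitude
    frac = s**2 / np.sum(s**2)             # variance explained per mode
    return patterns, pcs, frac[:n_modes]

patterns, pcs, frac = eof_analysis(data)
```

Because the synthetic field is dominated by a single mode, the leading EOF pattern and principal component recover it (up to an arbitrary sign, which is the usual EOF ambiguity).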
By integrating Equation (1) forward in time using different climate scenarios (see Extended Data Table 2) for net primary productivity N(x,y,t), desert fraction d(x,y,t), surface temperature T(x,y,t) and the land–sea distribution, we can determine the effects of orbital-scale and millennial-scale climate variability on the simulated arrival times of H. sapiens in various regions (Fig. 3) for the different scenarios. The following forcing scenarios are chosen (see Extended Data Tables 1, 2): (1) The standard early-exit control run (scenario A) (see parameters in Extended Data Table 1, left column) simulates the evolution of H. sapiens over the past 125 thousand years in response to time-varying T(x,y,t), N(x,y,t), d(x,y,t), which include orbital-scale and millennial-scale variability. To capture an increased adaptability to climate stressors, we reduce the values of T and increase those of d linearly during the simulation (see Extended Data Table 1). Furthermore, the mobility of H. sapiens increases as a function of time in terms of diffusion and coastal advection speeds (Extended Data Table 1, left column). The results are shown in Figs 2 and 3a. The Late Holocene value of K = 42 km2 year−1 in this simulation is in the range of previous estimates31, 33, 94 of K = 25–76 km2 year−1 for Late Pleistocene H. sapiens and the early to mid-Holocene. Our growth rate of 0.4% per year is a factor of 3–4 smaller than previous estimates32. The simulated population of 0.6 million individuals in Europe during the LGM in scenario A is about 3 times larger than a previous estimate95. (2) The late-exit run (scenario B) uses the same overall configuration as scenario A (including the same initial condition), but different parameter values (see Extended Data Table 1, right column). The results are shown in Extended Data Fig. 7 and Fig. 3b.
Most importantly, this scenario uses a higher sensitivity of human mortality to temperature and a smaller initial diffusion rate. (3) A 125 ka simulation (no DO), based on scenario A, with only the orbital-scale forcing To(x,y,t), No(x,y,t), do(x,y,t) from SIM and without the millennial-scale variability associated with Dansgaard–Oeschger and Heinrich events. The arrival-time results are shown in Fig. 3c and some representative time series in Fig. 1 (yellow lines). (4) Three 125-thousand-year-long HDM simulations using constant climate forcing T(x,y,t = t0), N(x,y,t = t0), d(x,y,t = t0), with t0 = 105, 70 and 21 ka, using the same parameters as in scenario A. The arrival-time results are shown in Fig. 3d–f. The Matlab code of the human dispersal model is available upon request from the lead author.
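Although Equation (1) is not reproduced in this excerpt, the forward integration of a dispersal model of this general type (a Fisher–KPP-style reaction–diffusion equation with a climate-dependent carrying capacity) can be sketched as follows. Everything here is an illustrative assumption except the diffusivity K = 42 km2 year−1 and the growth rate of 0.4% per year quoted above; the actual HDM also includes terms (for example, coastal advection and temperature-dependent mortality) omitted from this sketch.

```python
import numpy as np

# Hypothetical grid: 40 x 60 cells of 100 km.
ny, nx = 40, 60
dx = 100.0              # km
K_diff = 42.0           # diffusivity, km^2/yr (Late Holocene value quoted above)
r = 0.004               # logistic growth rate, 0.4 %/yr (quoted above)

# Invented climate-dependent carrying capacity, normalized to 1;
# the first 10 columns stand in for desert/ocean cells with zero capacity.
carrying = np.ones((ny, nx))
carrying[:, :10] = 0.0

rho = np.zeros((ny, nx))
rho[20, 30] = 0.1       # initial population nucleus

dt = 50.0               # yr; explicit scheme is stable: K*dt/dx^2 = 0.21 < 0.25

def step(rho):
    """One explicit Euler step of diffusion + logistic growth."""
    p = np.pad(rho, 1, mode="edge")  # no-flux boundaries
    lap = (p[2:, 1:-1] + p[:-2, 1:-1]
           + p[1:-1, 2:] + p[1:-1, :-2] - 4.0 * rho) / dx**2
    growth = r * rho * (1.0 - rho / np.maximum(carrying, 1e-12))
    new = rho + dt * (K_diff * lap + growth)
    # clip negatives and keep zero-capacity cells uninhabited
    return np.clip(new, 0.0, None) * (carrying > 0)

for _ in range(2000):   # integrate 100 kyr
    rho = step(rho)
```

With these values the Fisher wave front advances at roughly 2*sqrt(r*K) ≈ 0.8 km per year, so over the integration the population fills every habitable cell; arrival times at a given cell can be diagnosed by recording when rho first exceeds a threshold, analogous to the arrival-time maps in Fig. 3.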

Szopa S.,French Climate and Environment Sciences Laboratory | Balkanski Y.,French Climate and Environment Sciences Laboratory | Schulz M.,French Climate and Environment Sciences Laboratory | Schulz M.,Norwegian Meteorological Institute | And 14 more authors.
Climate Dynamics | Year: 2013

Global aerosol and ozone distributions and their associated radiative forcings were simulated between 1850 and 2100, following a recent historical emission dataset and, for the future, the representative concentration pathways (RCPs). These simulations were used in an Earth System Model to account for the changes in both radiatively and chemically active compounds when simulating the climate evolution. The past negative stratospheric ozone trends result in a negative climate forcing culminating at −0.15 W m−2 in the 1990s. In the meantime, the tropospheric ozone burden increase generates a positive climate forcing peaking at 0.41 W m−2. The future evolution of ozone strongly depends on the RCP scenario considered. In RCP4.5 and RCP6.0, the evolution of both stratospheric and tropospheric ozone generates relatively weak radiative forcing changes until 2060–2070, followed by a relative 30% decrease in radiative forcing by 2100. In contrast, the RCP8.5 and RCP2.6 model projections exhibit strongly different ozone radiative forcing trajectories. In the RCP2.6 scenario, both effects (stratospheric ozone, a negative forcing, and tropospheric ozone, a positive forcing) decline towards 1950s values, while they both get stronger in the RCP8.5 scenario. Over the twentieth century, the evolution of the total aerosol burden is characterized by a strong increase after World War II until the middle of the 1980s, followed by a stabilization during the last decade due to the strong decrease in sulfates in OECD countries since the 1970s. The cooling effects reach their maximal values in 1980, with −0.34 and −0.28 W m−2 for the direct and indirect total radiative forcings, respectively. According to the RCP scenarios, the aerosol content, after peaking around 2010, is projected to decline strongly and monotonically during the twenty-first century in the RCP8.5, 4.5 and 2.6 scenarios, whereas in RCP6.0 the decline occurs later, after a peak around 2050.
As a consequence, the relative importance of the total cooling effect of aerosols becomes weaker throughout the twenty-first century compared with the positive forcing of greenhouse gases. Nevertheless, both surface ozone and aerosol content show very different regional features depending on the future scenario considered. Hence, in 2050, surface ozone changes vary between −12 and +12 ppbv over Asia depending on the RCP projection, whereas the regional direct aerosol radiative forcing can locally exceed −3 W m−2. © 2012 The Author(s).

Ramarohetra J.,IPSL | Sultan B.,IPSL | Baron C.,CIRAD - Agricultural Research for Development | Gaiser T.,University of Bonn | Gosset M.,IRD GET
Agricultural and Forest Meteorology | Year: 2013

Rainfall monitoring via satellite sensors is particularly relevant for the agricultural sector of West Africa. Indeed, food shortages in this region are often caused by rainfall deficits, and early access to data available for the entire region can help to provide credible and timely information for better decision making. This study assesses the accuracy of state-of-the-art satellite rainfall retrievals for agricultural applications at two sites, in Niger and Benin. Although these satellite data are widely used instead of rain gauge data for such applications, we found that, in a crop-modelling framework, their use can introduce large biases in crop yield simulations. Biases differ strongly among the four cultivars considered at the two sites and are not simple extrapolations of each satellite product's bias in cumulative rainfall amount. In particular, we found that while an accurate estimate of the annual cumulative rainfall amount is important for yield simulations of the pearl millet 'Souna 3' and 'Somno' cultivars in Niger, a realistic distribution of rainfall is also very important for predicting pearl millet 'Somno' and 'HK' yields in Niger as well as maize yields in Benin. Of all the satellite products tested, 3B42v6 appears to be the most suitable for our specific agricultural application, since it minimizes biases in both rainfall distribution and annual cumulative rainfall amount. For each crop and in both regions, biases in crop yield prediction are highest when using non-calibrated satellite rainfall products (PERSIANN, 3B42RT, CMORPH and GSMAP). © 2013 Elsevier B.V.

Aumont O.,IPSL | Ethe C.,Institute Pierre et Simon Laplace | Tagliabue A.,University of Liverpool | Bopp L.,French Climate and Environment Sciences Laboratory | Gehlen M.,French Climate and Environment Sciences Laboratory
Geoscientific Model Development | Year: 2015

PISCES-v2 (Pelagic Interactions Scheme for Carbon and Ecosystem Studies volume 2) is a biogeochemical model which simulates the lower trophic levels of marine ecosystems (phytoplankton, microzooplankton and mesozooplankton) and the biogeochemical cycles of carbon and of the main nutrients (P, N, Fe, and Si). The model is intended to be used for both regional and global configurations at high or low spatial resolutions, as well as for short-term (seasonal, interannual) and long-term (climate change, paleoceanography) analyses. There are 24 prognostic variables (tracers), including two phytoplankton compartments (diatoms and nanophytoplankton), two zooplankton size classes (microzooplankton and mesozooplankton) and a description of the carbonate chemistry. Formulations in PISCES-v2 are based on a mixed Monod-quota formalism. On the one hand, the stoichiometry of C/N/P is fixed and the growth rate of phytoplankton is limited by the external availability of N, P and Si. On the other hand, the iron and silicon quotas are variable and the growth rate of phytoplankton is limited by the internal availability of Fe. Various parameterizations can be activated in PISCES-v2, setting, for instance, the complexity of iron chemistry or the description of particulate organic materials. So far, PISCES-v2 has been coupled to the Nucleus for European Modelling of the Ocean (NEMO) and Regional Ocean Modeling System (ROMS) systems. A full description of PISCES-v2 and of its optional functionalities is provided here. The results of a quasi-steady-state simulation are presented and evaluated against diverse observational and satellite-derived data. Finally, some of the new functionalities of PISCES-v2 are tested in a series of sensitivity experiments. © Author(s) 2015.
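The mixed Monod-quota formalism described in the abstract can be illustrated with a short sketch: external nutrients (N, P, Si) limit growth through Monod saturation terms, while iron limits growth through the internal cell quota. All numerical values below (half-saturation constants, quota bounds, maximum growth rate) are invented placeholders, not PISCES-v2 parameters, and the combination rule is a simplified stand-in for the model's actual formulation.

```python
import numpy as np

def monod(conc, k_half):
    """Monod limitation term conc / (conc + k_half), in [0, 1]."""
    return conc / (conc + k_half)

def phytoplankton_growth(mu_max, no3, po4, si, fe_quota,
                         fe_quota_min, fe_quota_opt):
    """Illustrative growth rate under a mixed Monod-quota formalism.

    N, P and Si limit growth via external concentrations (Monod terms);
    iron limits growth via the internal quota (quota term). Liebig-style
    minimum of the limitation terms; placeholder constants throughout.
    """
    lim_external = min(monod(no3, 2.0),   # NO3, half-saturation 2.0 (placeholder)
                       monod(po4, 0.1),   # PO4 (placeholder)
                       monod(si, 4.0))    # Si, diatoms only (placeholder)
    # Internal iron quota limitation, clipped to [0, 1].
    lim_fe = np.clip((fe_quota - fe_quota_min)
                     / (fe_quota_opt - fe_quota_min), 0.0, 1.0)
    return mu_max * min(lim_external, lim_fe)

# Example: growth limited by the external nitrate term.
rate = phytoplankton_growth(mu_max=0.6, no3=1.0, po4=0.5, si=20.0,
                            fe_quota=7.0, fe_quota_min=3.0, fe_quota_opt=10.0)
```

The key design point, reflected in the sketch, is that fixed-stoichiometry nutrients need only their ambient concentrations, whereas variable-quota elements such as iron require an extra prognostic state (the quota) per phytoplankton compartment.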

Welsh B.Y.,University of California at Berkeley | Wheatley J.,University of California at Berkeley | Lallement R.,IPSL
Astronomy and Astrophysics | Year: 2012

Aims. We present medium-resolution ultraviolet interstellar absorption measurements recorded with the HST-COS and FUSE spectrographs towards two post-AGB stars (K 559 & K 648) located within the M 15 globular cluster (l ∼ 65°, b ∼ −27°). By sampling interstellar gas over the 10.4 kpc sight-line towards M 15 we wish to reveal spectral features associated with absorption due to the foreground g1 intermediate velocity cloud (IVC), whose distance has previously been constrained to lie between 1.8 and 3.8 kpc. Methods. Inspection of the UV line profiles recorded towards both stars reveals measurable IVC absorption at Vlsr ∼ +61.5 ± 5 km s−1 in the profiles of the CI, CII, CII*, CIV, NI, NII, OI, OVI, AlII, SiII, SiIII, SiIV, PII, SII, FeII, FeIII and NiII ions. Line-profile fitting to the IVC absorption features has resulted in column density determinations for the low and high ions. The best-fit column density values for the various ions are near-identical along both sight-lines, such that our UV data do not reveal the significant small-scale structure that has been reported previously for the g1 cloud. Results. The observed column density ratios of the CIV, SiIV, OVI and NV intermediate-velocity (IV) components are consistent with what may be expected from turbulent mixing layers, shock ionization or halo SNR models, although no one model can predict all of the observed ratios. We derive sub-solar metallicity values of [O/H] = −1.22 ± 0.44 and [N/H] = −1.21 ± 0.38 for the neutral IVC gas, and the IV components of C, Si, Al, Ni and Fe also mostly possess sub-solar abundance ratios relative to that of N. A possible origin for the g1 IVC is a low-metallicity cloud that was accreted towards our Galaxy, has passed through the disk and is now moving away from the galactic plane. © 2012 ESO.
