News Article | May 2, 2017
A new study by scientists from the Institute of Atmospheric Physics and Nanjing University of Information Science and Technology investigates trends in the mean state and the day-to-day variability (DDV) of surface weather conditions over northern and northeastern China (NNEC), using CN05.1 observational data. During 1961-2014, the surface temperature (wind speed) increased (decreased) over NNEC, and the DDV of both surface temperature and wind speed decreased, indicating a trend towards a stable, warm and windless state of surface weather over NNEC. This finding implies a trend towards more persistent hot and windless episodes, which threaten human health and aggravate environmental problems.

The trends were also examined in reanalysis data. Both the ERA-40 and NCEP data showed an increasing (decreasing) trend in the mean state of surface temperature (wind speed). However, the reanalysis data showed a consistent decreasing trend in the DDV of surface weather conditions only in spring. The underlying reason for the decreased DDV was therefore analyzed with a focus on the spring season.

"Essentially, the decreased DDV of the surface weather conditions can be attributed to a decrease in synoptic-scale wave activity, which is quantified using the 2-7-day bandpass filtered daily SLP [sea level pressure] in this study," explains Dr. SUN Bo, first author of the study.

The authors found that the decreased synoptic-scale wave activity was caused by a decrease in baroclinic instability. The baroclinic instability over East Asia showed a contrasting change: a decreasing (increasing) trend north (south) of 40°N. This contrast was primarily caused by a tropospheric cooling zone over East Asia at approximately 40°N, which decreased (increased) the meridional temperature gradient over the regions to the north (south) of 40°N.
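The two diagnostics mentioned in this article, day-to-day variability and 2-7-day bandpass-filtered daily sea level pressure, can be sketched in code. The definitions below (DDV as the standard deviation of day-to-day differences, and a simple FFT-mask bandpass) are common choices assumed here for illustration; they are not necessarily the study's exact method, and the toy temperature series is invented.

```python
import numpy as np

def day_to_day_variability(series):
    """One common DDV definition (assumed here): std of day-to-day differences."""
    return np.std(np.diff(series))

def bandpass_2_7_day(series):
    """Crude 2-7-day bandpass for daily data via FFT masking.

    Frequencies are in cycles per day; the 2-7-day synoptic band spans
    1/7 to 1/2 cycles per day (1/2 is the Nyquist limit of daily sampling).
    """
    freqs = np.fft.rfftfreq(len(series), d=1.0)
    spec = np.fft.rfft(series)
    mask = (freqs >= 1 / 7.0) & (freqs <= 1 / 2.0)
    return np.fft.irfft(spec * mask, n=len(series))

# toy daily temperature series: annual cycle plus synoptic-scale noise
rng = np.random.default_rng(0)
t = np.arange(365)
temp = 10.0 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, 365)
print(day_to_day_variability(temp))
print(bandpass_2_7_day(temp)[:5])
```

A declining DDV or a declining variance of the bandpass-filtered series would both indicate weakening synoptic-scale activity in this framework.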
News Article | May 10, 2017
Various definitions of glacier runoff have been used20. Here I estimate the multi-decadal-average volume of water gained by the glaciers that accumulates in below-freezing conditions and so is ‘seasonally delayed’, that is, the average volume of water that must be released each summer from long-term (decades to centuries) storage as ice for the glaciers to remain in balance, which is equivalent to the average ice flux at the equilibrium line. I used a version of the 1951–2007 Asian Precipitation Highly-Resolved Observational Data Integration Towards Evaluation (APHRODITE) climate record of precipitation and temperature10, modified to improve the representation of snow accumulation on glaciers. APHRODITE was found to be the most suitable gridded precipitation dataset for water resource modelling in central Asia21, particularly when aggregated to monthly values22, but is biased at high altitudes where weather is poorly sampled (see, for example, ref. 23) and does not account for post-deposition reworking of snow and exchanges with the atmosphere. To correct for these shortcomings in regional APHRODITE precipitation, I use independent, multi-decadal mountain precipitation estimates derived from a major regional-scale analysis of glacier morphometry15 (hereafter ‘Sakai’ precipitation). Because these estimates were made by tuning the climate to the morphometry-defined equilibrium line altitude (ELA) of the glacier, they implicitly account for all mass-balance terms over multi-decadal timescales, which have combined to determine the ELA of the glacier, including precipitation, melt runoff, avalanching, drifting, sublimation and evaporation. Avalanching in particular is otherwise poorly observed, but is a locally important component of glacier accumulation24. 
Specifically, I use the ‘L-average’ precipitation for annual-average glacier accumulation (to include avalanching onto the glaciers) and ‘W-average’ precipitation for annual-average precipitation at high-altitude off-glacier regions, as defined in ref. 15. I rescale the APHRODITE aggregated monthly time series of precipitation needed for this study to agree with the Sakai annual averages for areas above 4,000 m altitude. This approach uses the spatial and temporal distribution of APHRODITE precipitation, but rescaled to agree with the long-term Sakai mass-balance distribution. It assumes that the APHRODITE dataset captures well the temporal and spatial distribution of precipitation, if not the quantity. I use unmodified APHRODITE below 4,000 m. The uncertainty in unmodified APHRODITE monthly aggregated precipitation has been assessed against observations in a lowland basin in northeast China, for which the correlation coefficient was 0.96, root-mean-square error (r.m.s.e.) was 20 mm (about 20% of the mean) and relative bias was −10% (ref. 22). In the mountainous upper Indus basin, the unmodified APHRODITE annual precipitation correlation coefficient is reported as 0.80 with a normalized r.m.s.e. of 60% (ref. 23). Within this area, unmodified APHRODITE in the mountainous Tarbela catchment is reported to underestimate mean annual precipitation by 222 mm (260 mm versus 482 mm) for 1998–201223, a local example of the bias that is corrected for regionally by using Sakai mountain precipitations. Compared to 13 observations of winter net accumulation (‘winter balance’) in Russia, Kyrgyzstan, Kazakhstan and China over periods ranging from 1 year to 20 years, Sakai on-glacier ‘L-average’ snowfall was found to have a r.m.s.e. of 281 mm (water-equivalent) relative to an average of 710 mm, and a correlation coefficient of 0.77 (significance P < 0.01). The unmodified APHRODITE r.m.s.e. was higher, at 494 mm, with a lower correlation coefficient of 0.15 (significance P > 0.1)15. 
This suggests a residual uncertainty in annual meltwater production in any one Sakai grid cell (0.5°) of about ±40%, and in sub-4,000-m-altitude precipitation in an APHRODITE grid cell (0.25°) of ±20%, with possible bias of about 10%. There is an average of 64 Sakai cells per river basin; hence, with zero assumed bias, basin-scale uncertainty may reduce to about 5%. However, reported data are insufficient in spatial or temporal sampling for a comprehensive uncertainty estimate either on- or off-glacier, and so I assume 20% uncertainty in precipitation and meltwater production on the basin scale. Following ref. 12, I distribute the runoff of the seasonally delayed water through the months with above-freezing monthly average temperatures at the area-weighted mean glacier terminus height of each river basin, with melt volume scaled according to the monthly sum of positive-degree days (PDDs) at this altitude, so that total annual melt equals total annual accumulation (Fig. 3). The APHRODITE temperature series is limited to daily average temperature; hence, PDDs in this case are summed positive daily average temperatures. However, the distribution of melt by month that I calculate is largely insensitive to the method of PDD calculation because, following ref. 12, I use the ratio of monthly PDDs to annual PDDs to assign melt volumes to months, not absolute PDDs. These simplifications enable a consistent approach to melt estimation on the regional scale and improve on a previous study12 by using a longer precipitation record (58 years rather than 30 years10, 25) with more comprehensive mass-balance contributions and a substantially revised glacier inventory11. To estimate melt production in the drought years (the years in the 1951–2007 record with the lowest basin-wide annual precipitation for each basin), I repeated this analysis but scaled annual melt according to the ratio of PDDs in the average and drought summers for each basin. 
This scaling changed the melt volume by −8% to +16%. I then compared the melt values to precipitation totals for the river basins, altitude zones (Extended Data Fig. 6) and dam catchments for these years to determine the gross melt fraction of total water inputs in drought. An increase in ice loss in the driest years of 16% equates to an additional regional-average lowering of glacial ablation areas of 1 m yr−1, which is small relative to the thickness of typical alpine glacial ablation areas (order 100 m) and so is readily sustainable for the duration of the meteorological droughts discussed here. To determine the decreasing glacial contribution with increasing distance from the glaciers (owing to greater dilution by precipitation), following ref. 11, I summed contributions over cumulatively larger zones (Extended Data Fig. 6)—above the snout of the lowest glacier (zone 5), and then this value plus the upper 25% of the remaining basin area downstream (jointly called zone 4), plus 50% (jointly called zone 3), plus 75% (zone 2) and plus 100% (zone 1). In addition, I defined individual hydrological catchments for a dataset of 604 hydropower and irrigation dam sites that are operational, under construction or are at an advanced stage of planning1, 18 using hydrological tools within ArcGIS software, and calculated the average and drought monthly inputs from glacial melt and precipitation for each defined catchment. The assumption that the glaciers are in steady state (no net gain or loss over the observation period) is necessary to estimate the seasonally delayed glacier runoff from the long-term averaged monthly precipitation and temperature. 
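The PDD bookkeeping described above can be sketched as follows. This is a minimal illustration under the stated assumptions (annual melt set equal to annual accumulation for a balanced glacier, monthly melt assigned by the ratio of monthly to annual PDDs, drought-year melt scaled by the drought-to-average PDD ratio); the temperatures and the 16% drought figure below are invented or taken from the text purely for illustration.

```python
import numpy as np

def monthly_melt(daily_tmean_by_month, annual_accumulation):
    """Distribute annual melt (equal to annual accumulation for a glacier in
    balance) across months in proportion to monthly positive-degree-day sums,
    computed from daily mean temperatures at the terminus altitude."""
    pdd = np.array([np.clip(t, 0.0, None).sum() for t in daily_tmean_by_month])
    return annual_accumulation * pdd / pdd.sum()

# invented daily mean temperatures (deg C) at the terminus, 30-day months
months = [np.full(30, v) for v in (-12, -8, -3, 0, 2, 6, 9, 7, 3, -1, -6, -11)]
melt = monthly_melt(months, annual_accumulation=100.0)   # units arbitrary
print(melt.round(1))

# drought years: scale total annual melt by the drought-to-average PDD ratio
avg_pdd = sum(np.clip(t, 0.0, None).sum() for t in months)
drought_pdd = 1.16 * avg_pdd            # e.g. 16% more PDDs in a drought summer
print(100.0 * drought_pdd / avg_pdd)    # drought-year melt volume
```

Because only the ratio of monthly to annual PDDs is used, the monthly melt distribution is insensitive to the exact PDD definition, as noted above.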
Recent observations show various mass imbalances across the region, ranging from strongly negative in the Nyainqêntanglha Shan and Spiti–Lahaul (Brahmaputra and Indus basins) to slightly positive in the western Kunlun Shan (Tarim basin) and Karakoram (the ‘Karakoram anomaly’; Indus basin), but span a short time period (for example, 2003–2008)26. On the multi-decadal timescale of this study, extensive regional observations of mass imbalances are lacking. Where multi-decadal data exist (for example, for the Tien Shan over the past 50 years27, the western Himalayas since 197728, 29, the eastern Himalayas since 196230 and at sparse sites over the broader region for various periods since 184029, 31) they show a dominant loss trend, and a comprehensive compilation of longer-term data shows regional losses for most years since at least the mid-twentieth century, with mass balances of approximately −0.35 m to −0.40 m water-equivalent per decade averaged over the full glacial extent32. Compared to the regional mass balance calculated here of 21.9 km3 of water annually over 84,800 km2, or 2.58 m water-equivalent per decade, this suggests a sustained imbalance of −14% to −16% of total mass balance over the study period. Consequently, the steady-state assumption leads to an underestimation of annual melt volumes by an average of 14%–16%. From the published analysis it is not possible to resolve mass imbalances for individual river basins, so I apply a +15% correction to all of the glacial melt volumes. Streamflow is commonly modelled as the result of precipitation, evaporation, surface runoff, infiltration, plant uptake, soil moisture redistribution, lateral flow and exchanges with shallow and deep aquifers, and is a useful measure of available channelized surface water in a catchment.
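The steady-state correction above follows from simple arithmetic, reproduced here as a check using the values quoted in the text:

```python
# Regional mass balance: 21.9 km3 of water per year over 84,800 km2 of glaciers.
balance_m_per_decade = (21.9e9 / 84_800e6) * 10.0   # metres water-equivalent
print(round(balance_m_per_decade, 2))               # ~2.58 m w.e. per decade

# Reported losses of 0.35-0.40 m w.e. per decade imply the imbalance fraction:
for loss in (0.35, 0.40):
    print(round(100.0 * loss / balance_m_per_decade, 1))   # percent
```

The resulting fractions of roughly 14%-16% motivate the flat +15% correction applied to all basins.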
Streamflow can be compared to river gauge measurements, which are used to calibrate and validate multiple model inputs and processes if a sufficiently long and reliable gauge record exists and the soil and land-cover properties of the model domain (typically an individual catchment) are adequately constrained33, 34, 35, 36. In HMA, however, gauges are sparse and often not well located for calibration or validation, for example, downstream of irrigated farmland, urban areas or dams with potentially substantial but unquantified artificial water extraction, storage and return flows. Hence, suitable gauge data are lacking on the regional scale. Without calibration and validation, biases in the model inputs and processes can accumulate, affecting the accuracy of streamflow magnitude and timing (see, for example, ref. 37). Rather than reporting modelled catchment streamflow with potentially large but poorly known accumulated uncertainty, here I used a hydrology model only to estimate the ratios of modelled evaporation to precipitation over the glacial areas and of modelled evapotranspiration (hereafter ‘evaporation’) to precipitation on the catchment scale. I used these ratios to estimate the losses from the gross glacier and precipitation water inputs for each catchment, calculated separately, to give net water inputs. These net inputs are the fundamental controls on catchment water supply. Water losses are dominated by evaporation, which is temperature-dependent; hence, loss rates from glacial melt occurring at high altitudes may differ from the basin average38. 
To estimate evaporation, I used the physically based Penman–Monteith approach within the Soil and Water Assessment Tool (SWAT) hydrology model (ArcSWAT version 2012.10_1.18)39, with soil data from the Harmonized World Soil Database (version 1.2, http://www.fao.org/soils-portal/soil-survey/soil-maps-and-databases/harmonized-world-soil-database-v12/en/), land-cover data from WaterBase40, and driven by daily weather parameters for 1979–2014 (aggregated from hourly data from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis (CFSR)41) with a 5-year spin-up period. SWAT requires parameters that are available from NCEP-CFSR in addition to the precipitation and temperature provided by the APHRODITE datasets used elsewhere in this study. I estimated the mean summer (July–September) and annual evaporation rates for sub-basins within the upper reaches (zone 3) of the Indus, Ganges, Brahmaputra, Aral and Balkhash basins, chosen to encompass the majority of dam catchments in this study. In all cases, summer evaporation rates over the glaciers were lower than the basin average; consequently, losses from precipitation inputs (11%–129%) were higher than losses from glacial melt inputs (3%–19%; Extended Data Table 3). (Upper Aral precipitation losses of more than 100% in this case result from high rates of evaporation and NCEP-CFSR precipitation exceeding the rescaled APHRODITE precipitation, interpreted here to imply that summer evaporation is approximately equal to summer precipitation.) In addition, I estimated summer river-channel evaporation losses using SWAT-modelled reach evaporation rates summed over the mean flow-path length from all glacier-snout locations to the watershed outlet (for example, 853 km for the Tarbela dam catchment, upper Indus), but these were small (<1% of melt volume) and would apply equally to precipitation and melt.
I subtracted the average glacier and basin evaporation losses from the calculated glacial melt and precipitation volumes to give net dam-catchment water inputs and hence the NMF (Figs 4, 5). To test the modelled evaporation results, I applied SWAT modelling to the Bhakra dam catchment of the upper Sutlej River, a tributary of the Indus, for the period 1986–1996, and compared modelled annual-mean evaporation to estimates based on observations from an evaporating pan extrapolated over the catchment using correlations with temperature for the same period42. The observed evaporation rates yielded an estimate of 126–160 mm, which agrees with the SWAT-modelled average (this study) of 138 mm. Reasonable agreement (<10% deviation) has also been reported between evaporation observations and uncalibrated SWAT model output for a large catchment in the upper Mississippi basin43. I further tested the sensitivity of evaporation volumes to precipitation volumes, which are uncertain, by rescaling the daily NCEP-CFSR precipitations for the upper Indus to match the monthly rescaled APHRODITE precipitations used elsewhere in this study. In this extreme case, the NCEP-CFSR annual average precipitation was approximately double the rescaled APHRODITE value (753 mm versus 358 mm), but the evaporation was similar (122 mm versus 110 mm), which indicates that evaporation is relatively insensitive to the uncertain precipitation. Furthermore, the consistency of modelled year-to-year evaporation (s.d. = 15 mm) relative to the much more variable modelled interannual precipitation (s.d. = 75 mm), and their relatively low correlation (R2 = 0.36), supports this interpretation. Subsequently, I used unmodified daily NCEP-CFSR precipitation data in modelling evaporation losses. More rigorous calculation of evaporation losses from the surfaces of glaciers would require detailed and widespread data on glacier debris-cover properties and surface energy and mass balance, which are lacking.
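The subtraction step can be sketched as follows. The loss ratios below are invented but lie within the ranges reported above (3%-19% for glacial melt, 11% or more for precipitation); the function name and catchment volumes are illustrative, not values from the study.

```python
def net_inputs(gross_melt, gross_precip, melt_loss, precip_loss):
    """Apply separate evaporative-loss ratios to glacial melt and precipitation
    inputs, then return the net inputs and the net melt fraction (NMF)."""
    net_m = gross_melt * (1.0 - melt_loss)
    net_p = gross_precip * (1.0 - precip_loss)
    return net_m, net_p, net_m / (net_m + net_p)

# hypothetical dam catchment, volumes in km3 per summer
net_m, net_p, nmf = net_inputs(3.0, 10.0, melt_loss=0.05, precip_loss=0.30)
print(round(nmf, 3))   # melt share of net water inputs
```

Because evaporation removes proportionally more of the precipitation input than of the high-altitude melt input, the net melt fraction exceeds the gross melt fraction (here 0.289 versus 3/13 ≈ 0.231).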
SWAT does not explicitly model the processes occurring at glacier surfaces, but it does accumulate and melt snow cover through time and account for surface and sub-surface water budgets for snow and various land-cover and soil types. Good performance in other alpine catchments suggests low conceptual model errors in the dominant modelled processes in such environments37. Here I assume that glacier ablation areas behave similarly to bare ground of the same lithology as the surrounding terrain, which is the origin of the supraglacial debris cover. I assessed the sensitivity of evaporation rates to land-cover type by reclassifying glacier areas from bare ground to wetlands (that is, with readily available surface water). The change in evaporation for this class was small (for example, 15.0 mm versus 14.9 mm for July–September), which suggests that the assumed surface properties do not dominate evaporation rates. Comprehensive evaporation uncertainty estimates would require further independent data in addition to the Sutlej study described above, but this comparison of rates from different surface classes provides confidence in the absolute values calculated here and hence in the contrast in evaporation rates at glacier altitudes versus the basin average, which is more important to the conclusions of this study. This (summer) evaporation contrast agrees broadly with a Himalaya-wide study that, for 2000–2006, estimated annual evaporation rates in high-elevation, mountainous catchments to be typically <10% of annual rainfall (<100 mm per year) and in the low-lying Ganges foreland to be up to 30% (more than 300 mm per year)44. A fraction of water inputs (precipitation and melt) contributes to aquifer recharge rather than directly to surface flows and so might be available only where springs re-emerge at the surface (shallow-aquifer return flow), where plants draw water back up from the aquifer or where groundwater is extracted from either shallow or deep aquifers.
These flows are poorly constrained, but SWAT modelling suggests that in this study, order 10% of precipitation reaches the shallow aquifer. From here, order 1% of precipitation reaches the deep aquifer. Of the shallow-aquifer water, most (about 85%) returns to surface flows and the rest is transpired. I used the following publicly available software in this study: ArcGIS 10.1 with Spatial Ecology GME plugin (http://www.spatialecology.com/gme/); ArcSWAT version 2012.10_1.18 (http://swat.tamu.edu/software/arcswat/); and CDO Climate Data Operators (https://code.zmaw.de/projects/cdo). I used the following published datasets: the Randolph Glacier Inventory (RGI version 5, http://www.glims.org/RGI/); SRTM version 4.1 (http://www.cgiar-csi.org/data/srtm-90m-digital-elevation-database-v4-1); APHRODITE APHRO_MA_025_V1101 and APHRO_MA_TAVE_025deg_V1204R1 (http://www.chikyu.ac.jp/precip/english/products.html); river basins from the Global Runoff Data Centre, Germany (2007) (http://www.bafg.de/GRDC/EN/02_srvcs/22_gslrs/221_MRB/riverbasins.html?nn=201570); NCEP-CFSR global weather data for SWAT (http://globalweather.tamu.edu/); WaterBase Landuse Maps (http://www.waterbase.org/download_data.html); Harmonized World Soil Database version 1.2 (http://www.fao.org/soils-portal/soil-survey/soil-maps-and-databases/harmonized-world-soil-database-v12/en/); and United Nations Environment Programme (UNEP) 2015 population density data for Asia, as compiled from World Population Prospects, the 2012 Revision (WPP2012) (http://ede.grid.unep.ch). With permission of the authors, I used datasets from ref. 1 for dams and from ref. 15 for mountain (Sakai) precipitation. Source Data for Figs 2, 3, 5 and 6 and Extended Data Figs 3, 5 and 7 are available in the online version of the paper.
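The aquifer bookkeeping quoted at the start of this section uses order-of-magnitude fractions only; a simple sketch of that partition (fractions from the text, function name and routing assumptions invented for illustration) is:

```python
def aquifer_partition(precip):
    """Partition a precipitation input using the approximate fractions quoted
    in the text: ~10% to the shallow aquifer, ~1% of precipitation routed on
    to the deep aquifer, ~85% of shallow-aquifer water returning to surface
    flows, with the remainder transpired."""
    shallow = 0.10 * precip
    deep = 0.01 * precip            # routed onward from the shallow store
    return_flow = 0.85 * shallow
    transpired = shallow - return_flow - deep
    return {"shallow": shallow, "deep": deep,
            "return_flow": return_flow, "transpired": transpired}

print(aquifer_partition(1000.0))    # e.g. 1,000 mm of precipitation
```

On these figures, the great majority of aquifer recharge re-emerges as surface flow, so treating net inputs as the control on catchment supply loses little water from the budget.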
News Article | December 6, 2016
Things in the Arctic are just getting weirder and weirder. And not in a good way. Freakishly high air and ocean temperatures during November caused sea ice to trail far behind typical levels, with sea ice extent ending the month at a record low.

Sea ice extent averaged 3.51 million square miles for the month, which was 753,000 square miles below the 1981-2010 average for the period, according to data released Tuesday by the National Snow and Ice Data Center (NSIDC) in Boulder, Colorado. The section of missing ice was about the same size as the entire country of Mexico. Or to put it in terms of U.S. states, the missing ice is greater than the states of Texas, California, Montana and New Mexico combined.

During part of the month, sea ice actually declined when it would normally be growing with the arrival of the polar winter. The decline ate away 19,300 square miles of ice in an area of the Barents Sea, north of Norway, Finland and western Russia. According to the NSIDC, this large decrease in ice extent is nearly unprecedented for the month since satellite records began in 1979, though a far smaller dip occurred in 2013.

Remarkably, November is the seventh month this year to hit a record low ice extent, falling unusually far below the average (3.2 standard deviations, to be exact). November's ice extent was even more unusually below the norm than September 2012, when Arctic sea ice hit its all-time record low.

The reasons for the record lows have to do with prevailing weather patterns that have been (and in fact still are) pumping unusually high temperatures into the Arctic, along with long-term human-caused climate change that is propelling sweeping changes across the Far North.
In addition, climate feedback loops are making it harder for sea ice to recover from missing summer ice, since the ocean waters retain a memory of the ice loss in the form of added heat absorbed from the sun. That heat is released relatively slowly during November, contributing to higher air temperatures and boosting precipitation in some areas through evaporation.

“It looks like a triple whammy—a warm ocean, a warm atmosphere, and a wind pattern all working against the ice in the Arctic,” said NSIDC director Mark Serreze in a statement.

From northeast of Greenland toward Svalbard (Norway) and Severnaya Zemlya (Russia), air temperatures averaged up to 18 degrees Fahrenheit above average for the month, an astonishingly large monthly anomaly. On individual days, some areas saw temperatures more than 40 degrees Fahrenheit above average. For the next week to 10 days, air temperature departures from average could reach 50 degrees Fahrenheit in parts of the Arctic as this weather pattern continues.

Julienne Stroeve, an NSIDC researcher, was in Svalbard during November and noted the lack of sea ice. “Typically sea ice begins to form in the fjords at the beginning of November, but this year there was no ice to be found,” she said.

Sea ice melt means more damaging waves hitting perilous Arctic villages in Alaska and elsewhere, fewer opportunities for walrus and polar bears to hunt for their prey and potentially altered weather patterns across the entire northern hemisphere.
“It’s startling to see how little sea ice is out there right now — a frightening situation for Arctic wildlife such as walruses and polar bears that need sea ice habitat for feeding and breeding," Margaret Williams, managing director for US Arctic programs at the World Wildlife Fund, said in a statement. "As what will likely be the hottest year on record comes to an end, Arctic temperatures have been off the charts."

The story of sea ice in the Southern Hemisphere has been radically different from that in the Far North, with a long-term increase in the sea ice surrounding the Antarctic continent. This has often been a talking point for those who doubt the mainstream science on human-caused climate change, since sea ice was growing. However, in November, the bottom dropped out on Antarctic sea ice, which hit a record low due to a shift in circumpolar winds and record high temperatures. The average extent for November was 699,000 square miles below the 1981 to 2010 average, which was more than twice the previous record departure from average set in November 1986 and a staggering 5.7 standard deviations below the long-term average.

“Antarctic sea ice really went down the rabbit hole this time. There are a few things we can say about what happened, but we need to look deeper,” NSIDC scientist Ted Scambos said in a statement.

Overall, scientists remain more concerned about Arctic sea ice in a warming world, partly because it defines the circumpolar region. With Antarctica, the continent's land-based ice sheets are the bigger worry, since they could cause catastrophic sea level rise if they were to melt rapidly. The Northern and Southern Hemisphere ice records mean that global ice extent has also hit a record low, but as the NSIDC noted, that metric is not useful for explaining what is going on at either pole, and is not an accurate way to track long-term ice trends due to global warming.
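The "3.2 standard deviations" and "5.7 standard deviations" figures quoted in this article measure how unusual an anomaly is relative to the spread of past values. A minimal sketch of that calculation, with an entirely invented record of extents (not the NSIDC data):

```python
import numpy as np

def anomaly_in_sigmas(value, climatology):
    """Express a value's departure from a climatological mean in units of
    the climatology's (sample) standard deviation."""
    clim = np.asarray(climatology, dtype=float)
    return (value - clim.mean()) / clim.std(ddof=1)

# invented past November extents, millions of square kilometres
past_extents = [10.2, 10.0, 9.8, 9.9, 10.1, 9.7, 10.0]
print(round(anomaly_in_sigmas(9.1, past_extents), 1))   # strongly negative
```

By this measure, an anomaly several sigmas below the mean is far outside the range of normal year-to-year variability.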
News Article | February 15, 2017
We used a data-assimilating ocean circulation inverse model (OCIM)2, 16 to estimate the mean ocean circulation during three different time periods: pre-1990, the decade of the 1990s, and the period 2000–2014, which we refer to respectively as the 1980s, 1990s and 2000s. For each time period, we assimilated observations of five tracers: potential temperature, salinity, the chlorofluorocarbons CFC-11 and CFC-12, and Δ14C. Potential temperature and salinity data were taken from the 2013 World Ocean Database, Ocean Station Data and Profiling Floats data sets. The observations were binned by time period and then averaged onto the model grid. Quality control was performed by removing outliers (more than four inter-quartile ranges above the upper quartile) at each depth level in the model. This removed less than 0.1% of the observations. CFC-11, CFC-12 and Δ14C observations were taken from the Global Ocean Data Analysis Project version 2 (GLODAPv2) database30. These data were already quality-controlled. We used an earlier version of the GLODAPv2 database, but checking it against the newest release we find that the correlation R2 of the fit between the CFC-11 and CFC-12 observations in each version is over 0.99. The only major difference between the version used and the newest version of GLODAPv2 is that the latter includes data from two additional cruise tracks in the Indian Ocean. The CFC-11 and CFC-12 observations were binned by time period and then averaged onto the model grid. We assimilated Δ14C observations only where they were paired with a near-zero CFC-11 or CFC-12 measurement (CFC-11 < 0.05 pmol kg−1, CFC-12 < 0.025 pmol kg−1). This was done to remove Δ14C observations that may have been contaminated by bomb-produced 14C, since we model only the ‘natural’ (pre-1955 bomb) component of Δ14C. These Δ14C observations constrain the ventilation of deep water masses, and the same Δ14C observations were used in each of the three assimilation periods.

Extended Data Fig. 7 shows the spatial distribution of the CFC observations for each decadal period, as well as the temporal distribution of observations of CFCs, temperature and salinity. The spatial distributions of temperature and salinity are not shown, but all regions are well sampled for all time periods. Almost all of the transects with CFC observations in the 1990s were re-occupied with repeat hydrographies during the 2000s. During the 1980s, in contrast, several large areas are missing CFC observations. In particular, during the 1980s there are no CFC observations in the Pacific and Indian sectors of the Southern Ocean. For these sectors, the inferred circulation changes from the 1980s to the 1990s must therefore be interpreted cautiously. Nonetheless, the model-predicted weakening of the Southern Ocean CO2 sink during the 1990s is in good agreement with independent studies using atmospheric inverse models10 and prognostic ocean general circulation models8, 19. This suggests that the more densely sampled temperature and salinity data, in conjunction with CFC data from elsewhere, may be able to compensate for a lack of CFC data in the Southern Ocean during the 1980s.

The sporadic nature of the oceanographic observations, particularly the CFC measurements (with some transects being occupied only about once per decade), makes the data assimilation susceptible to temporal aliasing. The error bars reported here do not take into account the uncertainty due to this potential aliasing of interannual variability into the data-assimilated circulations. Aliasing errors are likely to be largest for the smallest regions, and those with the sparsest observations. This must be kept in mind when interpreting the results of the assimilation model, particularly those on smaller spatial scales (for example, the regional CO2 fluxes of Fig. 2). On the other hand, these aliasing effects will be minimized when integrating over larger areas.
Thus we would expect, for example, that the global CO2 fluxes diagnosed by the assimilation model will be largely free from aliasing errors. Finally, we note that in the Arctic Ocean and Mediterranean Sea, a combination of the small basin area and a lack of data constraints causes the model CO2 simulations to exhibit some numerical artefacts. We therefore do not include these regions in our analysis. We use an inversion procedure previously used to estimate the climatological mean state of the ocean circulation2, 16, and follow the methods used in those studies with a few exceptions, as detailed here. Here we break the assimilation down into three time periods: pre-1990, 1990–1999 and 2000–2014. We use the same dynamical forcing (wind stress and baroclinic pressure gradient forcing) for each time period. Tracer data from each period are then assimilated independently to arrive at an estimate of the mean ocean circulation state during each period. This guarantees that the diagnosed circulation differences between time periods are due solely to information carried in the oceanographic tracer fields themselves, and not to assumptions about changes in external forcing. For each assimilation time period, we adjust a set of control parameters to minimize the misfit between observed and modelled tracer concentrations2, 16. We note that this method yields a diagnostic, rather than predictive, estimate of ocean circulation within each assimilation time period. The approach therefore differs from that of standard coupled climate models such as those participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). The CMIP5 models rely on the accuracy of external forcing and model physics to produce an accurate ocean state estimate. They therefore have relatively high spatial resolution (approximately 0.5°–1°), resolve temporal variability on sub-daily timescales, and employ relatively sophisticated model physics.
The OCIM, on the other hand, relies not so much on the accuracy of external forcing or internal physics, but rather on the assimilation of global tracer data sets to produce an accurate ocean state estimate. To make this data assimilation tractable, the OCIM has relatively coarse resolution (2°), does not resolve temporal variability within assimilation time periods, and uses simplified linearized physics2. The advantage of the OCIM relative to CMIP5 models is that the resulting circulation estimate is consistent with the observed tracer distributions; the disadvantage lies in its relatively coarse resolution and its assumption of a steady state within each assimilation period. In the OCIM, tracer concentrations C are simulated by solving the transport equation

∂C/∂t = AC + S(C),  (1)

where A is a matrix transport operator built from the model-estimated horizontal and vertical velocities and imposed diffusive terms, and S(C) is a source–sink term. For the tracers simulated here the only sources and sinks are due to air–sea exchange, and except for the radioactive decay of 14C they are conservative away from the surface layer. The source–sink term for these tracers takes the form

S(C) = (K/δz_s)(C_sat − C),  (2)

which is non-zero only in the surface layer of the model (of thickness δz_s). The piston velocity K and the surface saturation concentration C_sat vary for each tracer. For potential temperature and salinity, K = δz_s/(30 d), and C_sat is carried as a control (optimizable) parameter16 that is allowed to vary between assimilation time periods but is held constant within each time period. For CFC-11 and CFC-12, K is modelled as a quadratic function of the wind speed 10 m above the sea surface, u10 (ref. 31):

K = a (1 − f) u10^2 (Sc/660)^(−1/2),  (3)

where a is a constant piston-velocity coefficient (consistent with a wind speed in metres per second and a piston velocity in centimetres per hour), f is the fractional sea-ice cover, and Sc is the temperature-dependent Schmidt number. 
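The wind-speed-squared piston-velocity parameterization described above can be written as a one-line function. Here `a = 0.27` is the coefficient used for most of the study's runs; the Schmidt number is passed in rather than computed, since its polynomial fit is tracer-specific:

```python
def piston_velocity(u10, sc, f_ice, a=0.27):
    """Gas-transfer (piston) velocity in cm/h.

    u10   : 10-m wind speed (m/s)
    sc    : temperature-dependent Schmidt number of the gas
    f_ice : fractional sea-ice cover (0-1)
    a     : constant piston-velocity coefficient
    """
    return a * (1.0 - f_ice) * u10 ** 2 * (sc / 660.0) ** -0.5
```

For example, at u10 = 10 m/s, Sc = 660 and no ice cover this gives 27 cm/h, and 50% ice cover halves it.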
The 10-m wind speed and fractional sea-ice cover are taken from NCEP reanalysis for 1948–2014 and averaged for each year. For u10 the annual average is computed from daily values following the OCMIP-2 procedure32, which takes into account short-term variability in wind speeds. The surface saturation concentrations C_sat for CFC-11 and CFC-12 are computed from the observed time- and latitude-dependent atmospheric CFC-11 and CFC-12 concentrations33 using a temperature- and salinity-dependent solubility34. For the solubility we use time-independent temperatures and salinities from the 2009 World Ocean Atlas annual climatology35, 36. For CFC-11 our simulation runs from 1945 to 2014, and for CFC-12 from 1936 to 2014. Values for u10 and f before 1948 are set to their 1948 values. Natural radiocarbon is modelled in terms of the ratio R = Δ14C/1,000 + 1. The source–sink term of R takes the form

S(R) = (1 − R)/τ_g − R/τ_d,  (4)

where the first term on the right-hand side represents air–sea exchange with a well-mixed atmosphere of R = 1 (that is, Δ14C = 0‰) on a timescale τ_g = 5 years, and is applied only in the top model layer. This simple parameterization neglects spatial variability in 14C fluxes due to varying surface DIC and/or CO2 fluxes, but is judged adequate for our purposes, because the Δ14C constraint is needed only to constrain the approximate ventilation-age distribution of the deep ocean, so that a reasonable distribution of respired DIC can be simulated. The second term on the right-hand side of equation (4) represents the radioactive decay of 14C, with e-folding time τ_d = 8,266 years, and is active throughout the water column. Biological sources and sinks of Δ14C are neglected, because they have been shown to have a small effect on Δ14C (ref. 37). For most of the simulations here, we used a piston-velocity coefficient of a = 0.27, following ref. 38. 
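The 8,266-year e-folding time implies the familiar 5,730-year half-life (8,266 × ln 2 ≈ 5,730), and it links a measured Δ14C directly to an apparent ventilation age. A short self-contained sketch (the function names are ours, not the study's):

```python
import math

TAU_DECAY = 8266.0  # 14C e-folding time (yr); half-life = TAU_DECAY * ln(2)

def delta14c_after(years, delta14c_initial=0.0):
    """Delta-14C (permil) of a parcel isolated from the atmosphere for
    `years`, using the ratio variable R = Delta14C/1000 + 1 and dR/dt = -R/tau."""
    r0 = delta14c_initial / 1000.0 + 1.0
    r = r0 * math.exp(-years / TAU_DECAY)
    return (r - 1.0) * 1000.0

def ventilation_age(delta14c):
    """Apparent age (yr) implied by a measured Delta-14C, relative to a
    surface value of 0 permil."""
    r = delta14c / 1000.0 + 1.0
    return -TAU_DECAY * math.log(r)
```

After one half-life the ratio halves, so Δ14C of an isolated parcel reaches −500‰.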
To test the sensitivity of our results to this value, we ran a set of assimilations with a increased by 30%, which is closer to the original OCMIP-2 value of a = 0.337 (ref. 32). In these assimilations we also reduced the air–sea exchange timescale τ_g in the radiocarbon simulation by 30%, to be consistent with the higher assumed piston velocity. To get a sense of the uncertainty due to the prescribed diffusivities, we also ran the model with different values of the isopycnal and vertical diffusivities, K_i and K_v. In all, we ran five different models with different values of a, K_i and K_v. Supplementary Table 1 summarizes the fit to observations for each of these models for each assimilation period. Extended Data Figs 8 and 9 show the zonally averaged difference between model-simulated and observed potential temperature (Extended Data Fig. 8) and CFC-11 (Extended Data Fig. 9) for the Atlantic and Pacific basins during each assimilation time period. The model–data residuals are small (generally less than 1 °C for potential temperature and 0.5 pmol kg−1 for CFC-11), but there are some biases. In the Atlantic, simulated potential temperatures are slightly too high in the northern subtropical thermocline, in the Southern Ocean upwelling region, and in the region of Antarctic Intermediate Water formation. Potential temperatures are slightly too low in the North Atlantic and in most of the thermocline. These patterns are similar in the Pacific (Extended Data Fig. 8). Cooler-than-observed high latitudes are to be expected owing to the lack of a seasonal cycle in the OCIM, which biases temperatures towards end-of-winter values. The most obvious bias in the CFC-11 field is a slight (about 0.25 pmol kg−1) underprediction throughout most of the upper ocean. Larger negative biases (about 1 pmol kg−1) occur at the surface of the Southern Ocean, the North Atlantic and the North Pacific (Extended Data Fig. 9). 
These negative biases could indicate that the CFC-11 piston velocity that we used for most simulations is too small. Because the same piston velocity was used for all assimilation periods, this would not affect the inferred circulation-driven changes in the CO2 sink. Importantly, the spatial patterns of the model–data residuals are similar in all three assimilation time periods. This temporal coherence in the model–data residuals indicates that the inferred circulation changes do not introduce spurious biases into the assimilation. Our approach approximates the decadal variability of the ocean circulation by fitting a steady-state circulation independently for each time period. We thus neglect both interannual variability within, and temporal variations before, the assimilation period. However, the integrated effect of all previous circulation changes is encoded in the tracer distributions of the assimilation period, and is therefore indirectly ascribed to an effective decadal circulation representative of that period. To test whether these separate steady-state circulations for each time period capture the effects of the time-varying circulation, we used the data-assimilated circulations to simulate ocean CFC-11 concentrations, changing the circulation on the fly from decade to decade as the CFC-11 is propagated to the period of interest. We find that this approach fits the CFC-11 observations in each period much better than an unchanging circulation does (Extended Data Fig. 10), which indicates that an unchanging circulation from decade to decade is not consistent with the tracer data. This also indicates that changing the circulation on the fly from decade to decade, as we did in our CO2 simulations (see below), provides a good approximation to the effect of the continuously changing circulation of the ocean. 
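Swapping the circulation "on the fly" simply means time-stepping the tracer while selecting the transport operator by decade. A toy Python sketch with hypothetical two-box operators (the rates, and the weaker 1990s mixing, are illustrative only):

```python
import numpy as np

# Hypothetical 2-box transport operators (yr^-1) for each assimilation period.
A_by_period = {
    "pre1990": np.array([[-0.3, 0.3], [0.3, -0.3]]),
    "1990s":   np.array([[-0.2, 0.2], [0.2, -0.2]]),
    "2000s":   np.array([[-0.35, 0.35], [0.35, -0.35]]),
}

def period_for(year):
    if year < 1990:
        return "pre1990"
    return "1990s" if year < 2000 else "2000s"

def propagate(c0, t_start, t_end, dt=0.1):
    """Explicit-Euler integration of dC/dt = A(t) C, switching the transport
    operator at the decade boundaries."""
    c = np.array(c0, dtype=float)
    t = t_start
    while t < t_end:
        c = c + dt * (A_by_period[period_for(t)] @ c)
        t += dt
    return c
```

Because each operator's columns sum to zero, the total tracer inventory is conserved across the switches; only its distribution changes.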
To investigate the influence of changing ocean circulation on the oceanic CO2 sink, we first simulated the pre-industrial carbon distribution (before 1765) by assuming that the ocean was in equilibrium with an atmospheric CO2 concentration of 278 parts per million. We then simulated the transient evolution of dissolved inorganic carbon (DIC) from 1765 to 2014, using observed atmospheric CO2 concentrations as a boundary condition2. For this simulation, the ocean circulation is assumed to be in a steady state before 1990 at its 1980s estimate, and is then switched abruptly to the assimilated circulations for the 1990s and 2000s. We acknowledge the approximate nature of this approach, since the real ocean circulation changes gradually. We therefore present only decadally averaged results for the 1980s, 1990s and 2000s, rather than focusing on particular years. We estimated uncertainty by varying the parameters of the carbon-cycle model over a wide range of values. In all, we ran 32 simulations with different combinations of the parameters governing the production and remineralization of particulate and dissolved organic carbon and calcium carbonate (Supplementary Table 2). Combined with the five separate circulation estimates, this gives 160 state estimates from which the uncertainties are derived. For all simulations we used the OCMIP-2 formulation of the ocean carbon cycle39, implemented for the matrix transport model as described elsewhere40. The governing equation for the oceanic DIC concentration is

∂[DIC]/∂t = A[DIC] + J_v + J_g + J_b,  (5)

where A is the matrix transport operator; J_v is the virtual flux of DIC due to evaporation and precipitation; J_g represents the air–sea gas exchange of CO2; and J_b represents the biological transformations of DIC (uptake and remineralization of particulate and dissolved organic carbon). To compute the gas-exchange fluxes of CO2 we must also simulate alkalinity; the equation for alkalinity follows equation (5), but without the air–sea exchange term. 
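The equilibrate-then-force protocol can be illustrated with a two-box carbon model: spin up under a 278-ppm atmosphere, then integrate 1765–2014 under rising CO2. The exchange rates, the linear saturation–DIC relation and the linear CO2 trajectory below are hypothetical stand-ins for the OCIM's circulation and carbonate chemistry:

```python
K_MIX, K_GAS = 0.02, 0.5        # yr^-1: surface-deep exchange; gas restoring
DIC_PRE, BUFFER = 2000.0, 0.3   # umol/kg at 278 ppm; umol/kg per ppm (assumed)

def dic_sat(pco2):
    """Hypothetical linear saturation DIC as a function of atmospheric CO2."""
    return DIC_PRE + BUFFER * (pco2 - 278.0)

def run(t0=1765.0, t1=2014.0, dt=0.5):
    c_s = c_d = dic_sat(278.0)          # pre-industrial equilibrium
    for step in range(int((t1 - t0) / dt)):
        t = t0 + step * dt
        pco2 = 278.0 + (400.0 - 278.0) * (t - t0) / (t1 - t0)  # rising CO2
        dc_s = K_MIX * (c_d - c_s) + K_GAS * (dic_sat(pco2) - c_s)
        dc_d = K_MIX * (c_s - c_d)
        c_s, c_d = c_s + dt * dc_s, c_d + dt * dc_d
    return c_s, c_d
```

The deep box lags the surface, so uptake reflects both the atmospheric forcing and how quickly the circulation moves surface anomalies downward, which is the effect that switching the circulation isolates.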
For our simulations, the only terms that vary from one time period to the next are A (owing to variability in the ocean circulation) and J_g (owing to variability in the atmospheric CO2 concentration and in the gas-exchange piston velocity). The virtual fluxes and biological fluxes of DIC are held constant over time at their pre-industrial values, so that we can isolate the effects of ocean-circulation variability on the oceanic CO2 sink. Air–sea CO2 gas exchange occurs in the surface layer and is given by

J_g = (K/δz_s)([CO2]_sat − [CO2]),  (6)

where the piston velocity K is parameterized following equation (3). The CO2 saturation concentrations are computed using observed temperature and salinity and the observed atmospheric pCO2. For the results presented in the main-text figures and in Extended Data Figs 3 and 4, we ignored changes in the solubility of CO2 due to changes in SST and salinity, in order to isolate changes in ocean CO2 uptake due to ocean-circulation variability. For these simulations we calculated [CO2]_sat using the mean SST and salinity from the 2009 World Ocean Atlas objectively mapped climatologies35, 36. Atmospheric pCO2 is taken from ref. 41 for the years 1765–2012, and from the Mauna Loa CO2 record for the years 2013–2014. The virtual fluxes J_v and the biological carbon fluxes J_b follow the OCMIP-2 design39, and are implemented for the matrix transport model using a Newton solver as described elsewhere40. Model parameters governing the biological cycling of carbon are listed in Supplementary Table 2. We allow for uncertainty in the parameters z_c (the compensation depth, above which DIC uptake is parameterized by restoring to observed PO4 concentrations and multiplying by the globally constant C:P ratio r_C:P); the decay rate κ of labile dissolved organic phosphorus; the exponent b in the assumed power-law dependence of particle flux on depth42; the CaCO3:POC ‘rain ratio’ r; and the e-folding depth d for CaCO3 dissolution. 
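The two depth-dependent export profiles whose parameters are varied (a power-law particle flux with exponent b, and a CaCO3 flux dissolving with e-folding depth d) can be sketched directly. The defaults below (z_c = 75 m, b = 0.9, d = 2,000 m) are illustrative choices within the literature range, not the values of Supplementary Table 2:

```python
import math

def poc_flux(z, z_c=75.0, b=0.9):
    """Power-law POC flux below the compensation depth z_c, normalized to 1
    at z_c: F(z) = (z / z_c)**(-b)."""
    return (z / z_c) ** (-b)

def caco3_flux(z, z_c=75.0, d=2000.0):
    """CaCO3 flux below z_c, dissolving with e-folding depth d, normalized
    to 1 at z_c: F(z) = exp(-(z - z_c) / d)."""
    return math.exp(-(z - z_c) / d)
```

A larger b shoals the remineralization of organic carbon, while a larger d carries CaCO3 deeper before dissolution; varying them shifts where DIC is returned to the water column.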
These parameters are varied over a wide range to account for the range of values found in the literature32, 39, 40, 43, 44, 45, 46, 47, 48, 49, 50, and are presented in Supplementary Table 2. Note that we do not vary σ, the fraction of production routed to dissolved organic phosphorus, because previous studies found that variations in κ and σ have very similar effects on DIC and alkalinity distributions40; it is therefore sufficient to vary only κ. We also do not vary r or r_C:P, as their values vary spatially in reality and are probably sensitive to the circulation, which controls nutrient availability. These complexities are ignored here for expediency, and the biological cycling of DIC is assumed to be constant and unchanging, in order to isolate the direct effects of circulation changes. To isolate the effects of circulation variability on the oceanic CO2 sink (as in Figs 2 and 3), we ran two additional simulations, one holding the circulation at its 1980s state during the 1990s and one holding it at its 1990s state during the 2000s. The anomalous CO2 flux attributed to changing circulation during the 1990s was calculated as the difference between the 1990s CO2 fluxes of the simulation in which the circulation was switched in 1990 and those of the simulation in which the circulation remained at its 1980s state. Likewise, the anomalous CO2 flux attributed to changing circulation during the 2000s was calculated as the difference between the 2000s CO2 fluxes of the simulation in which the circulation was switched in 2000 and those of the simulation in which the circulation remained at its 1990s state. To diagnose the contribution of thermal effects to air–sea CO2 fluxes, we also ran a suite of simulations in which we allowed [CO2]_sat to vary from one decade to the next owing to changes in SST. 
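The attribution is thus a paired difference between a run whose circulation switches at the decade boundary and a control that keeps the previous decade's circulation. The flux values below are hypothetical, purely to show the arithmetic and sign convention (uptake positive, Pg C per year):

```python
import math

# Hypothetical decadal-mean CO2 uptake (Pg C/yr) from two paired simulations.
flux_switched_1990s = 2.0   # circulation switched to its 1990s estimate in 1990
flux_control_1990s = 2.4    # circulation held at its 1980s estimate

# Anomalous flux attributed to the circulation change during the 1990s:
anomaly_1990s = flux_switched_1990s - flux_control_1990s
# A negative anomaly means the circulation change weakened the sink.
```

The same subtraction, with the 1990s circulation as the control, yields the 2000s anomaly.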
For these simulations, we calculated the decadally averaged SST for the 1980s, 1990s and 2000s from two different reconstructions, the Centennial In situ Observation-Based Estimates (COBE)51 and the Extended Reconstructed Sea Surface Temperature version 4 (ERSSTv4)52. For each decade, we calculated the anomaly with respect to the 1980s, and then added this anomaly to the climatological SST used in the model during the 1990s and 2000s. This yielded two separate reconstructed SST histories, which were used to compute the CO2 saturation in separate simulations. Each simulation was run with each of the five different versions of our circulation model, yielding ten state estimates from which uncertainties were derived. The results of these simulations were then compared with otherwise identical simulations in which SSTs were held constant, and the difference between the two was attributed to thermal effects on CO2 solubility. These differences are presented in Extended Data Fig. 5. Data for the assimilation model were obtained from the World Ocean Database 2013 (temperature and salinity), available at https://www.nodc.noaa.gov/OC5/WOD13/, and from the GLODAPv2 database30 (radiocarbon and CFCs), archived at the Carbon Dioxide Information Analysis Center (CDIAC; http://cdiac.ornl.gov/oceans/GLODAPv2/). Mapped SST36 and salinity35 climatologies were obtained from the 2009 World Ocean Atlas at https://www.nodc.noaa.gov/OC5/WOA09/pr_woa09.html. The NOAA_ERSST_v452 and COBE-SST251 data are provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from their website at http://www.esrl.noaa.gov/psd/. NCEP reanalysis data were obtained from http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surfaceflux.html. The Mauna Loa CO2 record used in our carbon-cycle model is available from the NOAA Earth System Research Laboratory at http://www.esrl.noaa.gov/gmd/ccgg/trends/. 
Data from the SOCOM project4, 5, 15, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63 are available at http://www.bgc-jena.mpg.de/SOCOM/. All data used to create the figures in this paper will be archived at CDIAC (http://cdiac.ornl.gov/). Code may be obtained by contacting T.D. (email@example.com).
Vukovic A., University of Belgrade |
Rajkovic B., University of Belgrade |
Modelling for Environment's Sake: Proceedings of the 5th Biennial Conference of the International Environmental Modelling and Software Society, iEMSs 2010 | Year: 2010
The Land Ice Sea Surface model (LISS) is a new model for the prediction of soil temperature and soil moisture. It is part of the Non-hydrostatic Multi-scale Model on the B-grid (NMM-B). The skin temperature, which represents the temperature of the interface between the ground and the air, is calculated from the surface energy balance and includes the total influence of soil processes and vegetation cover. Evapotranspiration is parameterized with a β parameter that accounts for evaporation from bare soil, evaporation from the interception reservoir, and transpiration by plants. The model has four soil layers and one or more snow layers, depending on the snow amount. Soil temperatures are calculated using the Fourier diffusion law, and water content using Darcy's law. LISS has been tested using two different data sets (Caumont, France, 1986; Bondville, USA, 1998), as well as against NOAH-LSM simulations. The annual energy and water balances demonstrated numerical stability. The annual and diurnal variations of surface temperature are close to the observed values. The RMSE for surface temperature is 1.9°C at the Bondville site. Surface fluxes in a 36-hour simulation of snow growth at Bondville are close to the observed values.
Hanna S.R., Hanna Consultants |
Reen B., Pennsylvania State University |
Hendrick E., Epsilon Associates Inc. |
Santos L., Air Quality Associates |
And 7 more authors.
Boundary-Layer Meteorology | Year: 2010
The objective of the study is to evaluate operational mesoscale meteorological model atmospheric boundary-layer (ABL) outputs for use in the Hazard Prediction Assessment Capability (HPAC)/Second-Order Closure Integrated Puff (SCIPUFF) transport and dispersion model. HPAC uses the meteorological models' routine simulations of surface buoyancy flux, winds and mixing depth to derive profiles of ABL turbulence. The Fifth-Generation Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) and Weather Research and Forecast-Nonhydrostatic Mesoscale Model (WRF-NMM) ABL outputs and the HPAC ABL parameterisations are compared with observations from the International H2O Project (IHOP). The meteorological model configurations are not specially designed research versions for this study, but rather are intended to be representative of what may be used operationally, and thus have relatively coarse lowest-vertical-layer thicknesses of 59 and 36 m, respectively. The meteorological models' simulations of mixing depth are in good agreement (±20%) with observations on most afternoons. Wind-speed errors of 1–2 m s-1 are found, typical of those found in other studies, with larger errors occurring when the simulated centre of a low-pressure system is misplaced in time or space. The hourly variation of turbulent kinetic energy (TKE) is well simulated during the daytime, although there is a meteorological-model underprediction bias of about 20–40%. At night, WRF-NMM shows fair agreement with observations, whereas MM5 sometimes produces a very small default TKE value because of the stable boundary-layer parameterisation that is used. The HPAC TKE parameterisation is usually a factor of 5–10 too high at night, primarily because the meteorological model wind-speed output is at a height of 30 m for MM5 and 18 m for WRF-NMM, which is often well above the stable mixing depth. 
It is concluded that, before meteorological model TKE fields can be used confidently by HPAC, the vertical resolution near the surface should be improved (say, to 10 m or less), and the ABL parameterisations for shallow stable conditions should be improved. © Springer Science+Business Media B.V. 2009.