News Article | May 10, 2017
Various definitions of glacier runoff have been used20. Here I estimate the multi-decadal-average volume of water gained by the glaciers that accumulates in below-freezing conditions and so is ‘seasonally delayed’, that is, the average volume of water that must be released each summer from long-term (decades to centuries) storage as ice for the glaciers to remain in balance, which is equivalent to the average ice flux at the equilibrium line. I used a version of the 1951–2007 Asian Precipitation Highly-Resolved Observational Data Integration Towards Evaluation (APHRODITE) climate record of precipitation and temperature10, modified to improve the representation of snow accumulation on glaciers. APHRODITE was found to be the most suitable gridded precipitation dataset for water resource modelling in central Asia21, particularly when aggregated to monthly values22, but is biased at high altitudes where weather is poorly sampled (see, for example, ref. 23) and does not account for post-deposition reworking of snow and exchanges with the atmosphere. To correct for these shortcomings in regional APHRODITE precipitation, I use independent, multi-decadal mountain precipitation estimates derived from a major regional-scale analysis of glacier morphometry15 (hereafter ‘Sakai’ precipitation). Because these estimates were made by tuning the climate to the morphometry-defined equilibrium line altitude (ELA) of the glacier, they implicitly account for all mass-balance terms over multi-decadal timescales, which have combined to determine the ELA of the glacier, including precipitation, melt runoff, avalanching, drifting, sublimation and evaporation. Avalanching in particular is otherwise poorly observed, but is a locally important component of glacier accumulation24. 
Specifically, I use the ‘L-average’ precipitation for annual-average glacier accumulation (to include avalanching onto the glaciers) and ‘W-average’ precipitation for annual-average precipitation at high-altitude off-glacier regions, as defined in ref. 15. I rescale the APHRODITE aggregated monthly time series of precipitation needed for this study to agree with the Sakai annual averages for areas above 4,000 m altitude. This approach uses the spatial and temporal distribution of APHRODITE precipitation, but rescaled to agree with the long-term Sakai mass-balance distribution. It assumes that the APHRODITE dataset captures well the temporal and spatial distribution of precipitation, if not the quantity. I use unmodified APHRODITE below 4,000 m. The uncertainty in unmodified APHRODITE monthly aggregated precipitation has been assessed against observations in a lowland basin in northeast China, for which the correlation coefficient was 0.96, root-mean-square error (r.m.s.e.) was 20 mm (about 20% of the mean) and relative bias was −10% (ref. 22). In the mountainous upper Indus basin, the unmodified APHRODITE annual precipitation correlation coefficient is reported as 0.80 with a normalized r.m.s.e. of 60% (ref. 23). Within this area, unmodified APHRODITE in the mountainous Tarbela catchment is reported to underestimate mean annual precipitation by 222 mm (260 mm versus 482 mm) for 1998–201223, a local example of the bias that is corrected for regionally by using Sakai mountain precipitations. Compared to 13 observations of winter net accumulation (‘winter balance’) in Russia, Kyrgyzstan, Kazakhstan and China over periods ranging from 1 year to 20 years, Sakai on-glacier ‘L-average’ snowfall was found to have a r.m.s.e. of 281 mm (water-equivalent) relative to an average of 710 mm, and a correlation coefficient of 0.77 (significance P < 0.01). The unmodified APHRODITE r.m.s.e. was higher, at 494 mm, with a lower correlation coefficient of 0.15 (significance P > 0.1)15. 
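The rescaling step described above can be sketched as follows. This is a minimal single-cell illustration, assuming monthly totals in millimetres; the function and variable names are mine, not from the study:

```python
import numpy as np

def rescale_monthly_precip(monthly_mm, sakai_annual_mm):
    """Rescale a cell's monthly APHRODITE-style precipitation series so
    that its long-term annual mean matches an independent ('Sakai')
    annual-average estimate, preserving the temporal distribution."""
    monthly_mm = np.asarray(monthly_mm, dtype=float)
    n_years = len(monthly_mm) / 12.0
    annual_mean = monthly_mm.sum() / n_years  # mm per year
    return monthly_mm * (sakai_annual_mm / annual_mean)

# Two years of a seasonal cycle totalling 400 mm per year, rescaled to
# a 600 mm per year target: every month is scaled by the same factor 1.5.
series = np.tile([10, 10, 20, 40, 60, 80, 80, 50, 20, 10, 10, 10], 2)
scaled = rescale_monthly_precip(series, 600.0)
```

In the study this correction would apply only to cells above 4,000 m altitude; below that, the unmodified series is used.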
This suggests a residual uncertainty in annual meltwater production in any one Sakai grid cell (0.5°) of about ±40%, and in sub-4,000-m-altitude precipitation in an APHRODITE grid cell (0.25°) of ±20%, with possible bias of about 10%. There is an average of 64 Sakai cells per river basin; hence, with zero assumed bias, basin-scale uncertainty may reduce to about 5%. However, reported data are insufficient in spatial or temporal sampling for a comprehensive uncertainty estimate either on- or off-glacier, and so I assume 20% uncertainty in precipitation and meltwater production on the basin scale. Following ref. 12, I distribute the runoff of the seasonally delayed water through the months with above-freezing monthly average temperatures at the area-weighted mean glacier terminus height of each river basin, with melt volume scaled according to the monthly sum of positive-degree days (PDDs) at this altitude, so that total annual melt equals total annual accumulation (Fig. 3). The APHRODITE temperature series is limited to daily average temperature; hence, PDDs in this case are summed positive daily average temperatures. However, the distribution of melt by month that I calculate is largely insensitive to the method of PDD calculation because, following ref. 12, I use the ratio of monthly PDDs to annual PDDs to assign melt volumes to months, not absolute PDDs. These simplifications enable a consistent approach to melt estimation on the regional scale and improve on a previous study12 by using a longer precipitation record (58 years rather than 30 years10, 25) with more comprehensive mass-balance contributions and a substantially revised glacier inventory11. To estimate melt production in the drought years (the years in the 1951–2007 record with the lowest basin-wide annual precipitation for each basin), I repeated this analysis but scaled annual melt according to the ratio of PDDs in the average and drought summers for each basin. 
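The monthly distribution of the seasonally delayed runoff can be sketched as below: melt is assigned to months in proportion to their share of annual PDDs at the terminus altitude, so that annual melt equals annual accumulation. This is a simplified sketch; here PDDs are approximated from monthly mean temperatures, whereas the study sums positive daily-average temperatures:

```python
import numpy as np

def distribute_melt(annual_accum, monthly_mean_temp_c):
    """Distribute annual seasonally delayed runoff across months in
    proportion to each month's share of annual positive-degree-days
    (PDDs) at the glacier-terminus altitude, so that total annual melt
    equals total annual accumulation."""
    days = np.array([31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31])
    t = np.asarray(monthly_mean_temp_c, dtype=float)
    # PDDs approximated by applying the monthly mean to every day of
    # the month (an assumption of this sketch).
    pdd = np.where(t > 0.0, t, 0.0) * days
    if pdd.sum() == 0.0:
        return np.zeros(12)
    return annual_accum * pdd / pdd.sum()

# A terminus altitude that is above freezing only in June-August:
temps = [-12, -10, -8, -4, -1, 2, 5, 4, -1, -5, -9, -11]
melt = distribute_melt(10.0, temps)  # e.g. 10 km3 of annual accumulation
```

Because only the ratio of monthly to annual PDDs enters, the result is largely insensitive to the exact PDD convention, as the text notes.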
This scaling changed the melt volume by −8% to +16%. I then compared the melt values to precipitation totals for the river basins, altitude zones (Extended Data Fig. 6) and dam catchments for these years to determine the gross melt fraction of total water inputs in drought. An increase in ice loss in the driest years of 16% equates to an additional regional-average lowering of glacial ablation areas of 1 m yr−1, which is small relative to the thickness of typical alpine glacial ablation areas (order 100 m) and so is readily sustainable for the duration of the meteorological droughts discussed here. To determine the decreasing glacial contribution with increasing distance from the glaciers (owing to greater dilution by precipitation), following ref. 11, I summed contributions over cumulatively larger zones (Extended Data Fig. 6)—above the snout of the lowest glacier (zone 5), and then this value plus the upper 25% of the remaining basin area downstream (jointly called zone 4), plus 50% (jointly called zone 3), plus 75% (zone 2) and plus 100% (zone 1). In addition, I defined individual hydrological catchments for a dataset of 604 hydropower and irrigation dam sites that are operational, under construction or are at an advanced stage of planning1, 18 using hydrological tools within ArcGIS software, and calculated the average and drought monthly inputs from glacial melt and precipitation for each defined catchment. The assumption that the glaciers are in steady state (no net gain or loss over the observation period) is necessary to estimate the seasonally delayed glacier runoff from the long-term averaged monthly precipitation and temperature. 
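The drought-year scaling and the gross melt fraction described above reduce to two one-line calculations; a hedged sketch (function names and the illustrative numbers are mine):

```python
def drought_melt(avg_melt, pdd_drought_summer, pdd_average_summer):
    """Scale average-year melt by the ratio of drought-summer to
    average-summer positive-degree-days (PDDs) for the basin."""
    return avg_melt * (pdd_drought_summer / pdd_average_summer)

def gross_melt_fraction(melt, precip):
    """Gross melt fraction of total water inputs for a basin, altitude
    zone or dam catchment."""
    return melt / (melt + precip)

# Illustrative only: 5 km3 of average-year melt, 16% more drought-summer
# PDDs, compared against 10 km3 of drought-year precipitation.
m = drought_melt(5.0, 580.0, 500.0)   # 5.8 km3
f = gross_melt_fraction(m, 10.0)
```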
Recent observations show various mass imbalances across the region, ranging from strongly negative in the Nyainqêntanglha Shan and Spiti–Lahaul (Brahmaputra and Indus basins) to slightly positive in the western Kunlun Shan (Tarim basin) and Karakoram (the ‘Karakoram anomaly’; Indus basin), but span a short time period (for example, 2003–2008)26. On the multi-decadal timescale of this study, extensive regional observations of mass imbalances are lacking. Where multi-decadal data exist (for example, for the Tien Shan over the past 50 years27, the western Himalayas since 197728, 29, the eastern Himalayas since 196230 and at sparse sites over the broader region for various periods since 184029, 31) they show a dominant loss trend, and a comprehensive compilation of longer-term data shows regional losses for most years since at least the mid-twentieth century, with mass balances of approximately −0.35 m to −0.40 m water-equivalent per decade averaged over the full glacial extent32. Compared to the regional mass balance calculated here of 21.9 km3 of water annually over 84,800 km2, or 2.58 m water-equivalent per decade, this suggests a sustained imbalance of −14% to −16% of total mass balance over the study period. Consequently, the steady-state assumption leads to an underestimation of annual melt volumes by an average of 14%–16%. From the published analysis it is not possible to resolve mass imbalances for individual river basins, so I apply a +15% correction to all of the glacial melt volumes. Streamflow is commonly modelled as the result of precipitation, evaporation, surface runoff, infiltration, plant uptake, soil moisture redistribution, lateral flow and exchanges with shallow and deep aquifers, and is a useful measure of available channelized surface water in a catchment. 
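The size of the steady-state correction follows directly from the numbers above; a short worked check, assuming the stated regional values:

```python
# Regional steady-state turnover from this study: 21.9 km3 of water per
# year over 84,800 km2 of glacier area, expressed in m w.e. per decade.
melt_km3_per_yr = 21.9
area_km2 = 84_800
turnover_m_we_per_decade = melt_km3_per_yr * 1e9 / (area_km2 * 1e6) * 10

# Reported long-term imbalance of -0.35 to -0.40 m w.e. per decade,
# as a fraction of the steady-state turnover: ~14%-16%, motivating the
# +15% correction applied to all glacial melt volumes.
imbalance_fraction = [0.35 / turnover_m_we_per_decade,
                      0.40 / turnover_m_we_per_decade]
```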
Streamflow can be compared to river gauge measurements, which are used to calibrate and validate multiple model inputs and processes if a sufficiently long and reliable gauge record exists and the soil and land-cover properties of the model domain (typically an individual catchment) are adequately constrained33, 34, 35, 36. In HMA, however, gauges are sparse and often not well located for calibration or validation, for example, downstream of irrigated farmland, urban areas or dams with potentially substantial but unquantified artificial water extraction, storage and return flows. Hence, suitable gauge data are lacking on the regional scale. Without calibration and validation, biases in the model inputs and processes can accumulate, affecting the accuracy of streamflow magnitude and timing (see, for example, ref. 37). Rather than reporting modelled catchment streamflow with potentially large but poorly known accumulated uncertainty, here I used a hydrology model only to estimate the ratios of modelled evaporation to precipitation over the glacial areas and of modelled evapotranspiration (hereafter ‘evaporation’) to precipitation on the catchment scale. I used these ratios to estimate the losses from the gross glacier and precipitation water inputs for each catchment, calculated separately, to give net water inputs. These net inputs are the fundamental controls on catchment water supply. Water losses are dominated by evaporation, which is temperature-dependent; hence, loss rates from glacial melt occurring at high altitudes may differ from the basin average38. 
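The loss accounting described above amounts to applying each modelled evaporation-to-input ratio to the corresponding gross input, calculated separately for precipitation and melt. A minimal sketch, with illustrative ratios only:

```python
def net_inputs(gross_precip, gross_melt, evap_precip_ratio, evap_melt_ratio):
    """Convert gross catchment water inputs to net inputs by removing
    the separately modelled evaporative losses: one evaporation-to-input
    ratio for precipitation and another for high-altitude glacial melt
    (losses over the glaciers are typically the lower of the two)."""
    net_precip = gross_precip * (1.0 - evap_precip_ratio)
    net_melt = gross_melt * (1.0 - evap_melt_ratio)
    return net_precip, net_melt

# Illustrative ratios only: 20% evaporative loss from precipitation
# inputs but 5% from glacial-melt inputs.
p_net, m_net = net_inputs(100.0, 30.0, 0.20, 0.05)
```

The asymmetry between the two ratios is what makes the net melt fraction of a catchment larger than its gross melt fraction.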
To estimate evaporation, I used the physically based Penman–Monteith approach within the Soil and Water Assessment Tool (SWAT) hydrology model (ArcSWAT version 2012.10_1.18)39, with soil data from the Harmonized World Soil Database (version 1.2, http://www.fao.org/soils-portal/soil-survey/soil-maps-and-databases/harmonized-world-soil-database-v12/en/), land cover data from WaterBase40, and driven by daily weather parameters for 1979–2014 (aggregated from hourly data from the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis (CFSR)41) with a 5-year spin-up period. SWAT requires parameters that are available from NCEP-CFSR in addition to the precipitation and temperature provided by the APHRODITE datasets used elsewhere in this study. I estimated the mean summer (July–September) and annual evaporation rates for sub-basins within the upper reaches (zone 3) of the Indus, Ganges, Brahmaputra, Aral and Balkhash basins, chosen to encompass the majority of dam catchments in this study. In all cases, summer evaporation rates over the glaciers were lower than the basin average; consequently, losses from precipitation inputs (11%–129%) were higher than from glacial melt inputs (3%–19%; Extended Data Table 3). (Upper Aral precipitation losses of more than 100% in this case result from high rates of evaporation and NCEP-CFSR precipitation exceeding the rescaled APHRODITE precipitation, interpreted here to imply that summer evaporation is approximately equal to summer precipitation.) In addition, I estimated summer river-channel evaporation losses using SWAT-modelled reach evaporation rates summed over the mean flow-path length from all glacier-snout locations to the watershed outlet (for example, 853 km for the Tarbela dam catchment, upper Indus), but these were small (<1% of melt volume) and would apply equally to precipitation and melt. 
I subtracted the average glacier and basin evaporation losses from the calculated glacial melt and precipitation volumes to give net dam-catchment water inputs and hence the NMF (Figs 4, 5). To test the modelled evaporation results, I applied SWAT modelling to the Bhakra dam catchment of the upper Sutlej River, a tributary of the Indus, for the period 1986–1996, and compared modelled annual-mean evaporation to estimates based on observations from an evaporating pan extrapolated over the catchment using correlations with temperature for the same period42. The observed evaporation rates yielded an estimate of 126–160 mm, which agrees with the SWAT-modelled average (this study) of 138 mm. Reasonable agreement (<10% deviation) has also been reported between evaporation observations and uncalibrated SWAT model output for a large catchment in the upper Mississippi basin43. I further tested the sensitivity of evaporation volumes to precipitation volumes, which are uncertain, by rescaling the daily NCEP-CFSR precipitations for the upper Indus to match the monthly rescaled APHRODITE precipitations used elsewhere in this study. In this extreme case, the NCEP-CFSR annual average precipitation was approximately double the rescaled APHRODITE (753 mm versus 358 mm), but evaporation was similar (122 mm versus 110 mm), which indicates that evaporation is relatively insensitive to uncertain precipitation. Furthermore, the consistency of modelled year-to-year evaporation (s.d. = 15 mm) relative to the much more variable modelled interannual precipitation (s.d. = 75 mm), and their relatively low correlation (R2 = 0.36), supports this interpretation. Subsequently, I used unmodified daily NCEP-CFSR precipitation data in modelling evaporation losses. More rigorous calculation of evaporation losses from the surfaces of glaciers would require detailed and widespread data on glacier debris-cover properties and surface energy and mass balance, which are lacking. 
SWAT does not explicitly model the processes occurring at glacier surfaces, but does accumulate and melt snow cover through time and account for surface and sub-surface water budgets for snow and various land-cover and soil types. Good performance in other alpine catchments suggests low conceptual model errors in the dominant modelled processes in such environments37. Here I assume that glacier ablation areas behave similarly to bare ground of the same lithology as the surrounding terrain, which is the origin of the supraglacial debris cover. I assessed the sensitivity of evaporation rates to land-cover type by reclassifying glacier areas from bare ground to wetlands (that is, with readily available surface water). The change in evaporation for this class was small (for example, 15.0 mm versus 14.9 mm for July–September), which suggests that the assumed surface properties are not dominating evaporation rates. Comprehensive evaporation uncertainty estimates would require further independent data in addition to the Sutlej study described above, but this comparison of rates from different surface classes provides confidence in the absolute values calculated here and hence in the contrast in evaporation rates at glacier altitudes versus the basin average, which are more important to the conclusions of this study. This (summer) evaporation contrast agrees broadly with a Himalaya-wide study that, for 2000–2006, estimated annual evaporation rates in high-elevation, mountainous catchments to be typically <10% of annual rainfall (<100 mm per year) and in the low-lying Ganges foreland to be up to 30% (more than 300 mm per year)44. A fraction of water inputs (precipitation and melt) contributes to aquifer recharge rather than to surface flows directly and so might be available only where springs re-emerge at the surface (shallow aquifer return flow), where plants draw water back up from the aquifer or where groundwater is extracted from either shallow or deep aquifers. 
These flows are poorly constrained, but SWAT modelling suggests that in this study, of order 10% of precipitation reaches the shallow aquifer and of order 1% of precipitation reaches the deep aquifer. Of the shallow-aquifer water, most (about 85%) returns to surface flows and the rest is transpired. I used the following publicly available software in this study: ArcGIS 10.1 with Spatial Ecology GME plugin (http://www.spatialecology.com/gme/); ArcSWAT version 2012.10_1.18 (http://swat.tamu.edu/software/arcswat/); and CDO Climate Data Operators (https://code.zmaw.de/projects/cdo). I used the following published datasets: Randolph Glacier Inventory RGI50 (version 5.0, http://www.glims.org/RGI/); SRTM version 4.1 (http://www.cgiar-csi.org/data/srtm-90m-digital-elevation-database-v4-1); APHRODITE APHRO_MA_025_V1101 and APHRO_MA_TAVE_025deg_V1204R1 (http://www.chikyu.ac.jp/precip/english/products.html); river basins from the Global Runoff Data Centre, Germany (2007) (http://www.bafg.de/GRDC/EN/02_srvcs/22_gslrs/221_MRB/riverbasins.html?nn=201570); NCEP-CFSR global weather data for SWAT (http://globalweather.tamu.edu/); WaterBase Landuse Maps (http://www.waterbase.org/download_data.html); Harmonized World Soil Database version 1.2 (http://www.fao.org/soils-portal/soil-survey/soil-maps-and-databases/harmonized-world-soil-database-v12/en/); and United Nations Environment Programme (UNEP) 2015 population density data for Asia, as compiled from World Population Prospects, the 2012 Revision (WPP2012) (http://ede.grid.unep.ch). With permission of the authors, I used datasets from ref. 1 for dams and from ref. 15 for mountain (Sakai) precipitation. Source Data for Figs 2, 3, 5 and 6 and Extended Data Figs 3, 5 and 7 are available in the online version of the paper.
Zagar N. (University of Ljubljana), Tribbia J., Anderson J.L. and Raeder K. (U.S. National Center for Atmospheric Research), and Kleist D.T. (National Centers for Environmental Prediction)
Quarterly Journal of the Royal Meteorological Society | Year: 2010
This paper applies the normal-mode functions for the three-dimensional diagnosis of systematic analysis increments in the operational systems of ECMWF and NCEP, in the NCEP/NCAR reanalyses and in the ensemble data assimilation system DART/CAM, which is developed at NCAR. Non-zero systematic increments are interpreted as the analysis system bias. The main region of tropospheric biases in all systems is the Tropics; most of the large-scale tropical bias resides in the unbalanced (inertio-gravity) motion, with the eastward-propagating component being dominant in some datasets. The magnitudes of tropospheric wind-field biases in July 2007 were in the range between 1 m s−1 and 2 m s−1, illustrating the importance of diagnosing analysis systems in the Tropics, where magnitudes of the large-scale variability are of the same order. The systematic increments integrated over the whole models' atmosphere appear predominantly balanced; this balance is associated with biases in the zonally averaged balanced state and, in the case of ECMWF and NCEP, with the biases at the model top levels. Spectra of analysis increments averaged over one month show that, on average, 20% to 45% of total energy in increment fields belongs to the inertio-gravity motions. © 2010 Royal Meteorological Society.
Spada M. and Jorba O. (Barcelona Supercomputing Center), Perez Garcia-Pando C. (NASA; Columbia University), and 3 more authors.
Atmospheric Chemistry and Physics | Year: 2013
One of the major sources of uncertainty in model estimates of the global sea-salt aerosol distribution is the emission parameterization. We evaluate a new sea-salt aerosol life cycle module coupled to the online multiscale chemical transport model NMMB/BSC-CTM. We compare 5 yr global simulations using five state-of-the-art sea-salt open-ocean emission schemes with monthly averaged coarse aerosol optical depth (AOD) from selected AERONET sun photometers, surface concentration measurements from the University of Miami's Ocean Aerosol Network, and measurements from two NOAA/PMEL cruises (AEROINDOEX and ACE1). Model results are highly sensitive to the introduction of sea-surface-temperature (SST)-dependent emissions and to the accounting of spume particle production. Emission ranges from 3888 Tg yr−1 to 8114 Tg yr−1, lifetime varies between 7.3 h and 11.3 h, and the average column mass load is between 5.0 Tg and 7.2 Tg. Coarse AOD is reproduced with an overall correlation of around 0.5 and with normalized biases ranging from +8.8% to +38.8%. Surface concentration is simulated with normalized biases ranging from −9.5% to +28% and the overall correlation is around 0.5. Our results indicate that SST-dependent emission schemes improve the overall model performance in reproducing surface concentrations. On the other hand, they lead to an overestimation of the coarse AOD at tropical latitudes, although it may be affected by uncertainties in the comparison due to the use of all-sky model AOD, the treatment of water uptake, deposition and optical properties in the model and/or an inaccurate size distribution at emission. © 2013 Author(s).
News Article | April 27, 2016
Measurements on board the C-130 aircraft during the NOMADSS field campaign included HONO, pNO3−, NO2, O3, BrO, IO, OH radicals, HO2 radicals, RO2 radicals, aerosol surface area densities (for particle diameter <1 μm), VOCs, photolysis frequencies, and other meteorology parameters. Extended Data Table 1 summarizes the instrumentation, time resolution, detection limit, accuracy and references31, 32, 33, 34, 35, 36, 37, 38, 39, 40 for our measurements. HONO was measured by two long-path absorption photometric (LPAP) systems based on the Griess–Saltzman reaction31. Briefly, ambient HONO was scrubbed by deionized water in a 10-turn glass coil sampler. The scrubbed nitrite was then derivatized with 5 mM sulfanilamide (SA) and 0.5 mM N-(1-naphthyl)-ethylene-diamine (NED) in 40 mM HCl, to form an azo dye within 5 min. The azo dye was detected by light absorbance at 540 nm using a fibre optic spectrometer (LEDSPEC-4, World Precision Instruments) with a 1-m liquid waveguide capillary flow cell (World Precision Instruments). ‘Zero-HONO’ air was generated by pulling ambient air through a Na2CO3-coated denuder to remove HONO and was sampled by the systems periodically to establish measurement baselines. Interference from NO2, peroxyacetyl nitrate (PAN) and particulate nitrite, if any, was corrected by subtracting the baseline from the ambient air signal. Owing to the low collection efficiency of these interfering species in the sampling coil and their low concentrations, the combined interference signal was estimated to be less than 10% of the total signal in the clean MBL. Potential interference from peroxynitric acid (HO2NO2) was suppressed by heating the Teflon perfluoroalkoxy alkanes (PFA) sampling line to 50 °C with a residence time of 0.8 s. The HO2NO2 steady-state concentration in the MBL was estimated to be less than 1 p.p.t.v. at a temperature of 23–26 °C in both flights, and thus interference from HO2NO2 was negligible41. 
Overall, the instrument baseline in the clean MBL was stable and low, and clear and strong ambient air signals (approximately ten times the detection limit) were observed. The accuracy of HONO measurements was confirmed by comparison with limb-scanning differential optical absorption spectroscopy (DOAS)36. The agreement between these two instruments was very good in wide power plant plumes, where HONO mixing ratios exceeded the lower detection limits of both instruments (Extended Data Fig. 3). HNO3 and pNO3− were quantitatively collected with a coil sampler and a fritted glass disk sampler, respectively. The collected nitrate in the two channels was reduced to nitrite by two cadmium (Cd) columns, and determined using two LPAP systems31, 32. Zero air was generated to establish measurement baselines: for HNO3 by passing the ambient air through a NaCl-coated denuder to remove HNO3, and for pNO3− through a Teflon filter and a NaCl-coated denuder to remove aerosol particles and HNO3. Potential interference from HONO, NO2 and PAN was corrected by subtracting the baselines from the ambient air signals. Ozone measurements were unavailable on 8 July 2013 and OH radical measurements were unavailable on 5 July 2013 owing to instrument malfunction. Steady-state OH radical concentrations were calculated and used in the budget analysis when OH radical measurements were not available42. Most of the parameters were observed at similar values during both flights, indicating that both flights captured the primary features of the local chemical environment. The lack of OH and O3 measurements on the different flights had a negligible impact on our analysis. Several spikes in NO2 and aerosol surface area density detected from ship exhaust were excluded from the analysis. Seventy-two-hour back-trajectories were calculated for both flights (Extended Data Fig. 1) with the Lagrangian particle dispersion model FLEXPART43, version 9.02 (http://flexpart.eu), using six-hourly meteorological analysis data of the Global Forecasting System of the National Centers for Environmental Prediction (http://nomads.ncep.noaa.gov/txt_descriptions/GFS_half_degree_doc.shtml), interlaced with 3-h forecasts (00:00 UTC, 03:00 UTC, 06:00 UTC, 09:00 UTC, 12:00 UTC, 15:00 UTC, 18:00 UTC and 21:00 UTC), at a horizontal resolution of 0.5°. Every 5 min during the research flight, 10,000 particles were released at the then-current position of the NSF/NCAR C-130 and followed back in time for 72 h. A ‘particle’ here refers to an infinitesimally small parcel of air, which is only affected by three-dimensional transport, turbulence and convection, and does not have any removal processes (no deposition, washout, sedimentation or chemical losses). Centroids shown in the figures are based on an algorithm44 that reduces the residence probability distribution resulting from the locations of the 10,000 particles into five probable locations at each time interval. One aerosol sample was collected using a Teflon filter (Sartorius, pore size 0.45 μm, diameter 47 mm) on board the NSF/NCAR C-130 aircraft during every research flight, from 30 min after takeoff to 30 min before landing. The total sampling volume ranged from 1.1 m3 to 1.5 m3 depending on the flight length (ranging from 6 h to 8 h). The filter sample was wrapped in aluminium foil and stored in a refrigerator until use. The pNO3− photolysis rate constant was determined using the filter sample in the laboratory. The photochemical experiments were conducted using a cylindrical flow reactor (inner diameter 10 cm, depth 1.5 cm) with a quartz window on the top, at a temperature of 21.0 ± 1.0 °C and a relative humidity of (50 ± 2)%. The filter sample was moved into the flow cell directly from the freezer for the photochemical experiment. 
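The centroid reduction of the back-trajectory particle cloud can be approximated with a plain k-means pass over the particle positions at one time step. This is a stand-in sketch for the cited algorithm44, whose details differ; the function names, and the use of simple k-means itself, are my assumptions:

```python
import numpy as np

def trajectory_centroids(positions, k=5, iters=50):
    """Reduce a cloud of back-trajectory particle positions at one time
    step (rows of [lon, lat]) to k representative centroids with plain
    k-means (a simple stand-in for the reduction algorithm of ref. 44)."""
    pts = np.asarray(positions, dtype=float)
    # Deterministic initialization: k evenly spaced particles from the cloud.
    centroids = pts[:: max(1, len(pts) // k)][:k].copy()
    for _ in range(iters):
        # Assign each particle to its nearest centroid...
        dists = np.linalg.norm(pts[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each centroid to the mean of its assigned particles.
        for j in range(k):
            members = pts[labels == j]
            if len(members):  # leave empty clusters in place
                centroids[j] = members.mean(axis=0)
    return centroids

# 300 particles scattered around three locations, reduced to three
# representative positions.
rng = np.random.default_rng(1)
cloud = np.concatenate([rng.normal(loc, 0.1, size=(100, 2))
                        for loc in ([0.0, 0.0], [10.0, 0.0], [0.0, 10.0])])
cents = trajectory_centroids(cloud, k=3)
```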
Compressed air was purified by flowing through an activated charcoal and Purafil chemisorbent column (Purafil) to remove NO2, VOCs, H2S, SO2, HNO3 and HONO and was used as carrier gas. Gaseous products, HONO and NO2, released during the experiment were flushed out of the reactor by the carrier gas, and were sampled by two coil samplers connected in series. The first 10-turn coil sampler scrubbed HONO with purified water at a collection efficiency of 100% (ref. 31), and the second 32-turn coil sampler was used to scrub NO2 with an acetic acid modified SA/NED solution at a collection efficiency of 60% (ref. 45). The scrubbed nitrite and NO2 were converted to an azo dye with SA/NED reagents and analysed by two separate LPAP systems31, 45. The filter sample was exposed to the solar simulator radiation for 10 min; baselines were established for both HONO and NO2 before and after the light exposure. Photochemical production rates of HONO and NO2 were calculated from their time-integrated signals above the baselines over the period of light exposure. To correct for HONO and NO2 production from photolysis of HNO3 deposited on the flow reactor wall surface, a control experiment was conducted by irradiating the empty flow reactor. The control signals were subtracted from the sample exposure signals when calculating the production rates of HONO and NO2 from pNO3− photolysis. After 10 min of light exposure, nitrate in the filter sample was extracted with 15 ml of 1% NH4Cl buffer solution (pH = 8.5), reduced to nitrite by a Cd column and determined by LPAP. A 300-W ceramic xenon arc lamp (Perkin Elmer, model PE300BUV) was used as a light source. The ultraviolet light below 290 nm and the heat-generating infrared light were removed by a Pyrex glass filter and a water filter, respectively. The parabolic light beam irradiated only a circular area of 1-inch (2.54 cm) diameter in the centre of the flow reactor. 
The shape of the spectrum of the filtered light source is similar to that of the solar actinic spectrum in the MBL (Extended Data Fig. 4). The effective light intensity in the centre of the flow reactor under direct irradiation was measured to be about 3.5 times higher than that at tropical noon on the ground (solar zenith angle θ = 0°)1, 18, using a nitrate actinometer18. The production rates of HONO and NO2 were normalized by the amount of pNO3− exposed and the effective ultraviolet light intensity, to obtain the normalized photolysis rate constant of pNO3−, j(pNO3−) (ref. 5): j(pNO3−) = [(P_HONO + P_NO2)/N] × (J_std/J), where P_HONO and P_NO2 are the production rates of HONO and NO2, respectively; N is the light-exposed pNO3− amount determined in the extraction solution; J is the photolysis rate constant of nitrate in the actinometer solution exposed to the experimental light source; and the ‘standard’ photolysis rate constant of aqueous nitrate (J_std ≈ 3.0 × 10−7 s−1) is assumed for typical tropical summer conditions on the ground (solar zenith angle θ = 0°)1, 18, corresponding to a gas-phase HNO3 photolysis rate constant of ~7.0 × 10−7 s−1. HONO is the major product from pNO3− photolysis, with an average production ratio of HONO to NO2 of 2.0. It should be noted that the j(pNO3−) value is calculated from the production rates of HONO and NO2, and thus it may underestimate the photolysis rate constant of pNO3− if other products, such as NO, are produced. However, NO was only a minor secondary product from the photolysis of HONO and NO2, and accounted for ~1% of the total production of HONO and NO2 in the flow reactor in a residence time of 8 s. Therefore, the lack of NO measurement in this study would not affect the j(pNO3−) measurement. The overall uncertainty in the photolysis rate constant measurement is about 50%, taking into account the measurement uncertainties in production rates of HONO and NO2, nitrate loading and effective ultraviolet light intensity. 
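The normalization described above can be sketched as a function of the measured quantities. The algebraic form is inferred from the stated normalization by pNO3− amount and lamp intensity; names and the example magnitudes are illustrative only:

```python
def normalized_j_pno3(p_hono, p_no2, n_pno3, j_actinometer,
                      j_standard=3.0e-7):
    """Estimate the normalized pNO3- photolysis rate constant (s-1) from
    laboratory HONO and NO2 production rates, the light-exposed pNO3-
    amount (same amount units as the rates times seconds), and the
    actinometer-derived lamp-intensity correction (form assumed)."""
    # Normalize by nitrate loading, then scale the lamp's UV intensity to
    # 'standard' tropical-noon conditions via the actinometer ratio.
    return (p_hono + p_no2) / n_pno3 * (j_standard / j_actinometer)

# Illustrative magnitudes: a lamp ~3.5x tropical-noon intensity, so the
# actinometer returns j_actinometer = 3.5 * j_standard = 1.05e-6 s-1,
# and a 2:1 HONO:NO2 production ratio as reported in the text.
j_est = normalized_j_pno3(p_hono=2.0e3, p_no2=1.0e3, n_pno3=1.0e9,
                          j_actinometer=1.05e-6)
```

Doubling the lamp intensity (and hence the actinometer rate) halves the inferred constant, which is the intended intensity correction.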
It should be pointed out that the laboratory-determined j(pNO3−) represents merely an average photolysis rate constant of the pNO3− collected during the entire flight covering a wide geographic area, and thus does not reflect the possible temporal and spatial variability in the actual j(pNO3−) during the flight. In addition, the photochemical reactivity of the bulk aerosol sample collected on the filter may be altered during sampling and handling, and thus the laboratory-determined j(pNO3−) using the bulk aerosol sample might be different from that under the ambient conditions. Nevertheless, the laboratory-determined j(pNO3−) is still the first and best estimate of the pNO3− photolysis rate constant in the ambient atmosphere so far, and is in reasonable agreement with the values inferred from field observations (Fig. 2). To evaluate the global recycling NOx source from particulate nitrate photolysis, more aerosol samples need to be collected from different atmospheric environments all over the world, and better-designed experiments need to be conducted using these samples to more accurately determine the photolysis rate constants of particulate nitrate under conditions similar to those of the ambient atmosphere. The air masses encountered consisted mostly of aged air masses circulating within the boundary layer of the North Atlantic Ocean under a Bermuda high-pressure system for several days before reaching the measurement locations (Extended Data Fig. 1). Within this relatively isolated air mass in the MBL, the cycling of pNO3−–HONO–NOx was occurring much more rapidly, of the order of hours during the day, than dry deposition, which was of the order of days. Therefore, the diurnal chemistry of pNO3−–HONO–NOx in the MBL can be simulated by a zero-dimensional box model. Model simulations were conducted using the MCM v3.2 updated with the Jet Propulsion Laboratory’s latest recommended kinetics10, 11, 46, and constrained by the field-measured long-lived species, such as O3, VOCs and total NOy. 
The model was initiated at 00:00 local time and was allowed to run for three diurnal cycles (72 h). The diurnal concentration profiles of short-lived species, including OH, HO2, HONO and NOx, were affected by their initial concentrations only during the first diurnal cycle of the simulation. The results presented here (Fig. 3, Extended Data Figs 2 and 5) are from the second diurnal cycle of the simulation run. To simulate the time-varying cycling of pNO3−–HONO–NOx, the diurnal profiles of photolysis frequencies were also calculated by the MCM v3.2 and scaled by the measured values in the early afternoon. The time-dependent photolysis rate constant of particulate nitrate, j(pNO3−, t), at a given time of day was calculated as j(pNO3−, t) = j(pNO3−) × j(HNO3, t)/j(HNO3), where j(HNO3) is the photolysis rate constant of gas-phase HNO3 measured in the MBL. The ratio of pNO3− to HNO3 was set to 1, as observed in the MBL during the early afternoon. The photolysis of pNO3− is considered a source of both HONO and NO2. To include all the possible HONO sources in the model, the following reactions are also considered: the gas-phase reactions of OH with NO, of excited NO2 with H2O (refs 26 and 27), and of HO2·H2O with NO2 (refs 16 and 28), and the heterogeneous reactions of NO2 on sea-salt aerosol particles. Upper-limit HONO formation rates from the reactions of excited NO2 with H2O and of HO2·H2O with NO2 are calculated using the latest recommendations27, 28. An upper-limit uptake coefficient of 10−4 was assumed for NO2 uptake on sea-salt aerosol47. HONO sinks in the model include its photolysis and the gas-phase reaction with OH radicals. The photolysis of pNO3− is a NOx source mainly through HONO as an intermediate. Other NOx sources, such as the photolysis of gaseous HNO3 and the reaction of gaseous HNO3 with OH, are negligible, and are not included here. NOx sinks include the formation of bromine nitrate (BrONO2), iodine nitrate (IONO2) and organic nitrates (RONO2), as well as HNO3 through the reactions of NO2 with OH and of NO with HO2. 
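The diurnal scaling of the particulate-nitrate photolysis frequency can be sketched as follows; this assumes the reading that the laboratory value is modulated by the diurnal shape of the gas-phase HNO3 photolysis frequency (names and values illustrative):

```python
def j_pno3_t(j_pno3_lab, j_hno3_t, j_hno3_meas):
    """Time-dependent j(pNO3-): the laboratory-determined constant
    scaled by the ratio of the model-computed HNO3 photolysis
    frequency at time t to the measured early-afternoon value."""
    return j_pno3_lab * j_hno3_t / j_hno3_meas
```

At the time of the measurement the ratio is 1 and the laboratory value is recovered; at night j(HNO3, t) is zero and so is j(pNO3−, t).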
The uptake of halogen nitrates and organic nitrates on the sea-salt aerosols and their subsequent hydrolysis provide effective pathways for the conversion of NOx to particulate nitrate, in addition to HNO3 uptake. The photolysis lifetime of BrONO2 in the gas phase was estimated to be ~12 min in the MBL, somewhat longer than its uptake and hydrolysis lifetime of ~6 min using an uptake coefficient of 0.8 on sea-salt aerosols46. Therefore, two-thirds of BrONO2 is converted to pNO3− through hydrolysis on the aerosol particles at midday. The photolysis lifetime of IONO2 in the gas phase was estimated to be ~5 min (ref. 48), comparable to its uptake and hydrolysis lifetime46, 49. Therefore, half of IONO2 is converted to pNO3− via hydrolysis on the aerosol particles at midday. BrO was measured at 1.5 p.p.t.v. by the DOAS instrument. The IO level was below the detection limit and was assumed to be at the typical MBL concentration49, 50 of 0.5 p.p.t.v. To evaluate the impact of VOCs on reactive nitrogen chemistry, explicit methane chemistry is considered in the model. Higher VOCs are lumped into a single species, RH. The concentration of RH is adjusted to the sum of the abundances of the individually measured VOCs (RiH), scaled by the ratio of their rate coefficients for OH + RiH to that for OH + CH4. The products of the OH + RH reactions lead to the formation of RO2 radicals, carbonyls, peroxides and organic nitrates. The yield of organic nitrate from the reaction of RO2 radicals with NO (reaction 3) is calculated on the basis of the rate constant of the CH3O2 + NO reaction (kCH3O2+NO) and the effective branching ratio β: β = Σi ki βi [RiO2] / {kCH3O2+NO ([CH3O2] + Σi [RiO2])}, where [RiO2] and [CH3O2] are the concentrations of the various RiO2 radicals formed from the oxidation of the various VOCs and of methane, respectively, and ki and βi are the recommended rate constants and branching ratios for the various RiO2 radicals, respectively30, 51. The effective branching ratio β is estimated to be ~7% for our model. 
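The effective branching ratio can be sketched as a rate-weighted average over the individual peroxy radicals. The exact weighting used in the study may differ; this is one plausible form, with all names and numbers illustrative:

```python
def effective_beta(ro2_conc, k_ro2_no, beta_i):
    """Effective organic-nitrate branching ratio: the individual
    branching ratios beta_i averaged with weights k_i * [RiO2]
    (a plausible form of the weighting described in the text).
    ro2_conc: RiO2 concentrations; k_ro2_no: RiO2 + NO rate
    constants; beta_i: individual organic-nitrate yields."""
    num = sum(k * b * c for k, b, c in zip(k_ro2_no, beta_i, ro2_conc))
    den = sum(k * c for k, c in zip(k_ro2_no, ro2_conc))
    return num / den
```

For two radicals present at equal concentrations with equal rate constants and yields of 5% and 9%, the effective β is 7%, matching the order of the value quoted above.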
The fate of RONO2 in the MBL is not well known; hydrolysis has been assumed to be an effective sink of organic nitrates in forested regions30. In our model, an uptake coefficient of 10−2 and a 100% nitrate yield from RONO2 uptake are assumed for organic nitrates on sea-salt aerosol. The model is initialized with typically measured parameters and is run at 15-min time steps for three diurnal cycles. Multiple runs are used to constrain the model uncertainty by propagating the errors of all considered parameters; Gaussian error propagation is applied for the uncertainty calculation. Overall, a model uncertainty of 28% (±1 s.d.) is calculated. The reactive nitrogen species are found to be reproducible over multiple days without considering deposition and transport. Sensitivity studies with the model indicate that the calculated HONO concentrations are relatively insensitive (≤2%) to the following assumed parameters: the RONO2 branching ratio (varied from 1% to 7% to 14%), the uptake coefficients of RONO2 and XONO2 (changed up and down by a factor of two), the nitrate yield from RONO2 hydrolysis (varied from 5% to 100%), the initial concentrations of non-methane hydrocarbons (changed up and down by a factor of two), and the RO2 + NO reaction rate constant (changed up and down by a factor of two). Changing the initial concentrations of BrO or IO up or down by a factor of two leads to a change of up to 4% in the calculated HONO. Finally, the sensitivity of the model output to the ratio of HNO3 to pNO3− was assessed by varying it between 0.5 and 2.0; the modelled HONO varied between 1.17 and 0.74 times the value obtained for a ratio of unity. Despite the simplicity of this model approach, the good agreement between model calculations and field observations for key species (Fig. 3 and Extended Data Fig. 5) shows its feasibility and usefulness for assessing the chemistry of radicals and reactive nitrogen in the MBL.
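The Gaussian (quadrature) error propagation invoked above, applied to independent relative errors, can be sketched as:

```python
import math

def propagate_relative_errors(rel_errors):
    """Combine independent relative (1-sigma) errors in quadrature,
    as in Gaussian error propagation for a product/quotient model."""
    return math.sqrt(sum(e * e for e in rel_errors))
```

For example, combining independent relative errors of 3% and 4% gives an overall relative error of 5%.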
Janjic Z.,National Centers for Environmental Prediction |
Janjic T.,Alfred Wegener Institute for Polar and Marine Research |
Vasic R.,University Corporation for Atmospheric Research
Monthly Weather Review | Year: 2011
Starting from three Eulerian second-order nonlinear advection schemes for semi-staggered Arakawa grids B/E, advection schemes of fourth order of formal accuracy were developed. All three second-order advection schemes control the nonlinear energy cascade in case of nondivergent flow by conserving quadratic quantities. Linearization of all three schemes leads to the same second-order linear advection scheme. The second-order term of the truncation error of the linear advection scheme has a special form so that it can be eliminated by modifying the advected quantity while still preserving consistency. Tests with linear advection of a cone confirm the advantage of the fourth-order scheme. However, if a localized, large amplitude and high wave-number pattern is present in initial conditions, the clear advantage of the fourth-order scheme disappears. The new nonlinear fourth-order schemes are quadratic conservative and reduce to the Arakawa Jacobian for advected quantities in case of nondivergent flow. In case of general flow the conservation properties of the new momentum advection schemes impose stricter constraint on the nonlinear cascade than the original second-order schemes. However, for nondivergent flow, the conservation properties of the fourth-order schemes cannot be proven in the same way as those of the original second-order schemes. Therefore, demanding long-term and low-resolution nonlinear tests were carried out in order to investigate how well the fourth-order schemes control the nonlinear energy cascade. All schemes were able to maintain meaningful solutions throughout the test. Finally, the impact of the fourth-order momentum advection on global medium-range forecasts was examined. The 500-hPa anomaly correlation coefficient obtained using the best performing fourth-order scheme did not show an improvement compared to the tests using its second-order counterpart. © 2011 American Meteorological Society.
News Article | January 6, 2017
Active Atlantic hurricane periods may not produce intense hurricanes that threaten the U.S. East Coast. The phenomenon owes to a protective barrier formed by cooler ocean temperatures and stronger vertical wind shear along the East Coast of the United States, which weakens incoming hurricanes. This was revealed in new research published in Nature. The research was conducted at the University of Wisconsin-Madison by James Kossin, a federal atmospheric research scientist at the UW. He studied the relative intensity of hurricanes in quiet and active Atlantic hurricane periods by factoring in previously unstudied elements. Kossin pursued a novel approach, departing from the traditional focus on the tropical Atlantic as the chief hurricane development region by pulling in hurricane-producing conditions outside that region. Sea surface temperatures and vertical wind shear together govern how hurricanes intensify, which is why both are tracked to gauge hurricane strength. Kossin's research examined the incongruity of hurricanes formed in the Atlantic basin weakening as they approach landfall. To explain this anomaly, he explored other patterns and dug into past data. He studied data spanning 1947 to 2015, which gave information on location, central pressure and maximum winds. Data from the National Centers for Environmental Prediction and the National Center for Atmospheric Research provided a benchmark of sea temperatures and wind shear over the sample period. Kossin inferred that even as the tropics produce hurricanes, those heading for the U.S. coast lose energy after being stonewalled by higher wind shear and cooler ocean temperatures. "They have to track through a gauntlet of high shear to reach the coast and many of them stop intensifying," said Kossin. 
In fact, he calls it a natural mechanism that kills hurricanes threatening the U.S. coast. This also implies that during quieter periods in the Atlantic basin, weaker wind shear will take away that protective barrier at the coast. That is why Kossin cautions that even if fewer hurricanes lash the coast during a quiet period, they can be stronger, falling in the Category 3 to Category 5 range. The study helps explain why no major hurricane has struck the United States in more than 11 years. Kossin also noted that Hurricane Matthew demonstrated the existence of a protective barrier formed by the mix of strong crosswinds and cool coastal waters: Matthew had weakened to 75 miles per hour when it lashed South Carolina, after packing devastating 145-mile-per-hour winds when it passed by Haiti. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.
News Article | October 25, 2016
A stunning, blue-and-green visualization of the globe allows viewers to see the world's wind and weather patterns as forecast by supercomputers around the world. The striking animation, called "earth," was designed by computer programmer Cameron Beccario, an engineering manager at the computer coding company Indeed Tokyo in Japan. The interactive graphic allows users to toggle between views that show the flow of wind; the movement of ocean currents and the height of waves; the air temperature; and the circulation of air pollutants around the globe. The visualization of Earth draws from the huge assortment of data from myriad instruments floating in the planet's atmosphere, drifting along ocean currents and sitting in the soil. For instance, the model uses the National Centers for Environmental Prediction's Global Forecast System, which uses supercomputers to run a model that predicts weather four times a day. That model uses dozens of different types of measurements, from soil moisture to precipitation to ozone concentration. Other data, like sea-surface temperature, are gathered and updated every day from an array of instruments, including infrared satellites, moored buoys and ships just passing through a region. Carbon-dioxide measurements track the concentration of the greenhouse gas at the Earth's surface. The visualization recently provided a view of Hurricane Matthew's destructive path, while another clip from "earth" revealed the location of monster waves over the course of a year. Yet another slice of the data reveals the path of the air pollutant sulfate as it traveled over the Pacific. Particulates of sulfate can block sunlight, thereby cooling the planet. But Beccario hopes people explore the site on their own. "The site is designed to be fairly sparse because I want visitors to discover their own insights. I think this is much more rewarding than being fed a particular narrative," Beccario said. 
The graphic was initially inspired by the wind map, a visualization of wind flowing across the United States created by artists Fernanda Viégas and Martin Wattenberg. "I thought that would be a great project to emulate and expand upon," Beccario told Live Science. "I've always been interested in science and weather, so this project was a great platform to explore those interests as well. I remembered thinking: What do the winds around Antarctica even look like?! I really wanted to see those patterns."
Gentine P.,Columbia University |
Holtslag A.A.K.,Wageningen University |
D'Andrea F.,Ecole Normale Superieure de Paris |
Ek M.,National Centers for Environmental Prediction
Journal of Hydrometeorology | Year: 2013
The onset of moist convection over land is investigated using a conceptual approach with a slab boundary layer model. The authors determine the essential factors for the onset of boundary layer clouds over land and study their relative importance. They are 1) the ratio of the temperature to the moisture lapse rates of the free troposphere, that is, the inversion Bowen ratio; 2) the mean daily surface temperature; 3) the relative humidity of the free troposphere; and 4) the surface evaporative fraction. A clear transition is observed between two regimes of moistening of the boundary layer as assessed by the relative humidity at the boundary layer top. In the first so-called wet soil advantage regime, the moistening results from the increase of the mixed-layer specific humidity, which linearly depends on the surface evaporative fraction and inversion Bowen ratio through a dynamic boundary layer factor. In the second so-called dry soil advantage regime, the relative humidity tendency at the boundary layer top is controlled by the thermodynamics and changes in the moist adiabat induced by the decreased temperature at the boundary layer top and the consequent reduction in saturation water vapor pressure. This regime pertains to very deep boundary layers under a weakly stratified free troposphere over hot surface conditions. In the context of the conceptual model, a rise in free-tropospheric temperature (global warming) increases the occurrence of deep convection and reduces the cloud cover over moist surfaces. This study provides new intuition and predictive capacity on the mechanism controlling the occurrence of moist convection over land. © 2013 American Meteorological Society.
Cucurull L.,National Centers for Environmental Prediction |
Cucurull L.,University Corporation for Atmospheric Research |
Derber J.C.,National Centers for Environmental Prediction |
Journal of Geophysical Research: Atmospheres | Year: 2013
Applications for space-based GPS technology have extended to the atmospheric field during the last two decades. More recently, numerical weather prediction (NWP) centers started incorporating global positioning system (GPS) radio occultation (RO) soundings into their operational assimilation algorithms, resulting in a significant improvement in weather forecasting skill. The main reasons for such benefits are the unbiased nature of the GPS RO measurements, high accuracy and precision, all-weather capability, and equal accuracy over either land or ocean. Profiles of refractivity or bending angle are typically used, owing to the relative simplicity of their forward operators and the computational economy this implies. Although the NOAA National Centers for Environmental Prediction (NCEP) are using refractivities in their operational configuration, a bending angle forward operator has been developed, tested, and implemented and was scheduled to replace the assimilation of refractivities operationally in May 2012. Results from the NCEP's bending angle method (NBAM) show improvement over the assimilation of refractivities in all atmospheric fields being evaluated. A detailed description and evaluation of NBAM is presented in this article, as well as the advantages this code offers over the assimilation of refractivities and other existing algorithms that assimilate GPS RO bending angles. Key points: implementation of a new forward operator for GPS RO assimilation; benefits of the new forward operator; detailed analysis of the performance of the new forward operator. © 2012 American Geophysical Union. All Rights Reserved.
Ou M.H.,National Centers for Environmental Prediction |
Charles M.,National Centers for Environmental Prediction |
Collins D.C.,National Centers for Environmental Prediction
Weather and Forecasting | Year: 2016
CPC requires the reforecast-calibrated Global Ensemble Forecast System (GEFS) to support the production of its official 6-10- and 8-14-day temperature and precipitation forecasts. While a large sample size of forecast-observation pairs is desirable to generate the necessary model climatology, variances, and covariances with observations, reforecast sampling should be designed to use available computing resources most efficiently. A series of experiments was done to assess the impact on calibrated forecast skill of using a smaller sample size than the currently available reforecast dataset. This study focuses on the skill of week-2 probabilistic forecasts of the 7-day-mean 2-m temperature and accumulated precipitation. The tercile forecasts are expressed as probabilities of below-, near-, and above-normal temperature/median precipitation over the continental United States (CONUS). Calibration statistics were calculated using an ensemble regression technique from 25 yr of daily, 11-member GEFS reforecasts for 1986-2010, which were then used to postprocess the GEFS model forecasts for 2011-13. In assessing the skill of calibrated model output using a reforecast dataset with fewer years and ensemble members, and an ensemble run less frequently than daily, it was determined that reductions in the number of ensemble members to six or fewer and reductions in the frequency of reforecast runs from daily to once a week were achievable with minimal loss of skill. However, reducing the number of years of reforecasts to less than 25 resulted in greater skill degradation. The loss of skill was statistically significant when using only 18 yr of reforecasts, from 1993 to 2010, to generate model statistics. © 2016 American Meteorological Society.
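The tercile-forecast format mentioned above can be illustrated generically. This is not CPC's ensemble-regression calibration, just the raw member-counting form of a tercile probability forecast:

```python
def tercile_probabilities(members, lower_tercile, upper_tercile):
    """Fractions of ensemble members in the below-, near-, and
    above-normal categories, given climatological tercile boundaries.
    Returns (p_below, p_near, p_above), which sum to 1."""
    n = float(len(members))
    below = sum(m < lower_tercile for m in members) / n
    above = sum(m > upper_tercile for m in members) / n
    return below, 1.0 - below - above, above
```

For an 11-member ensemble, each member shifts a category probability by about 9%, which is one reason a long reforecast record matters more for calibration than a few extra members.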