Zagar N.,University of Ljubljana |
Tribbia J.,U.S. National Center for Atmospheric Research |
Anderson J.L.,U.S. National Center for Atmospheric Research |
Raeder K.,U.S. National Center for Atmospheric Research |
Kleist D.T.,National Centers for Environmental Prediction
Quarterly Journal of the Royal Meteorological Society | Year: 2010
This paper applies the normal-mode functions for the three-dimensional diagnosis of systematic analysis increments in the operational systems of ECMWF and NCEP, in the NCEP/NCAR reanalyses and in the ensemble data assimilation system DART/CAM which is developed at NCAR. Non-zero systematic increments are interpreted as the analysis system bias. The main region of tropospheric biases in all systems is the Tropics; most of the large-scale tropical bias resides in the unbalanced (inertio-gravity) motion, with the eastward-propagating component being dominant in some datasets. The magnitudes of tropospheric wind-field biases in July 2007 were in the range between 1 m s−1 and 2 m s−1, illustrating the importance of diagnosing analysis systems in the Tropics where magnitudes of the large-scale variability are of the same order. The systematic increments integrated over the whole models' atmosphere appear predominantly balanced; this balance is associated with biases in the zonally averaged balanced state and, in the case of ECMWF and NCEP, with the biases at the model top levels. Spectra of analysis increments averaged over one month show that, on average, 20% to 45% of total energy in increment fields belongs to the inertio-gravity motions. © 2010 Royal Meteorological Society.
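The energy partition quoted above (20% to 45% of increment energy in inertio-gravity modes) amounts to summing squared moduli of the normal-mode expansion coefficients by mode type. A minimal sketch; the function names and coefficient lists are illustrative, not the paper's data:

```python
def modal_energy(coeffs):
    # energy is proportional to the sum of squared moduli of the
    # (generally complex) normal-mode expansion coefficients
    return sum(abs(c) ** 2 for c in coeffs)

def ig_energy_fraction(rossby, eig, wig):
    """Fraction of total increment energy in the inertio-gravity
    (eastward EIG + westward WIG) modes."""
    e_r, e_e, e_w = (modal_energy(m) for m in (rossby, eig, wig))
    return (e_e + e_w) / (e_r + e_e + e_w)
```

With equal energy in the balanced and each inertio-gravity set, the inertio-gravity fraction is two-thirds; the 20%–45% range reported above corresponds to increments dominated by the balanced (Rossby) modes.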
Spada M.,Barcelona Supercomputing Center |
Jorba O.,Barcelona Supercomputing Center |
Perez Garcia-Pando C.,NASA |
Perez Garcia-Pando C.,Columbia University |
And 3 more authors.
Atmospheric Chemistry and Physics | Year: 2013
One of the major sources of uncertainty in model estimates of the global sea-salt aerosol distribution is the emission parameterization. We evaluate a new sea-salt aerosol life cycle module coupled to the online multiscale chemical transport model NMMB/BSC-CTM. We compare 5 yr global simulations using five state-of-the-art sea-salt open-ocean emission schemes with monthly averaged coarse aerosol optical depth (AOD) from selected AERONET sun photometers, surface concentration measurements from the University of Miami's Ocean Aerosol Network, and measurements from two NOAA/PMEL cruises (AEROINDOEX and ACE1). Model results are highly sensitive to the introduction of sea-surface-temperature (SST)-dependent emissions and to the accounting of spume particle production. Emission ranges from 3888 Tg yr−1 to 8114 Tg yr−1, lifetime varies between 7.3 h and 11.3 h, and the average column mass load is between 5.0 Tg and 7.2 Tg. Coarse AOD is reproduced with an overall correlation of around 0.5 and with normalized biases ranging from +8.8% to +38.8%. Surface concentration is simulated with normalized biases ranging from -9.5% to +28% and the overall correlation is around 0.5. Our results indicate that SST-dependent emission schemes improve the overall model performance in reproducing surface concentrations. On the other hand, they lead to an overestimation of the coarse AOD at tropical latitudes, although this may be affected by uncertainties in the comparison due to the use of all-sky model AOD, the treatment of water uptake, deposition and optical properties in the model and/or an inaccurate size distribution at emission. © 2013 Author(s).
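The skill metrics used above (normalized bias in percent and overall correlation against AERONET and surface observations) can be sketched as below; the paper's exact definitions may differ in detail:

```python
import math

def normalized_bias(model, obs):
    # NB = (mean(model) - mean(obs)) / mean(obs), in percent
    # (an assumed, commonly used definition)
    mm = sum(model) / len(model)
    mo = sum(obs) / len(obs)
    return 100.0 * (mm - mo) / mo

def pearson_r(x, y):
    # standard Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A positive normalized bias, as in the coarse-AOD range of +8.8% to +38.8% quoted above, means the simulated monthly means exceed the observed ones on average.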
News Article | April 27, 2016
Measurements on board the C-130 aircraft during the NOMADSS field campaign included HONO, pNO3-, NO2, O3, BrO, IO, OH radicals, HO2 radicals, RO2 radicals, aerosol surface area densities (for particle diameter <1 μm), VOCs, photolysis frequencies, and other meteorological parameters. Extended Data Table 1 summarizes the instrumentation, time resolution, detection limit, accuracy and references (refs 31-40) for our measurements. HONO was measured by two long-path absorption photometric (LPAP) systems based on the Griess–Saltzman reaction31. Briefly, ambient HONO was scrubbed by deionized water in a 10-turn glass coil sampler. The scrubbed nitrite was then derivatized with 5 mM sulfanilamide (SA) and 0.5 mM N-(1-naphthyl)-ethylene-diamine (NED) in 40 mM HCl, to form an azo dye within 5 min. The azo dye was detected by light absorbance at 540 nm using a fibre optic spectrometer (LEDSPEC-4, World Precision Instruments) with a 1-m liquid waveguide capillary flow cell (World Precision Instruments). ‘Zero-HONO’ air was generated by pulling ambient air through a Na2CO3-coated denuder to remove HONO and was sampled by the systems periodically to establish measurement baselines. Interference from NO2, peroxyacetyl nitrate (PAN) and particulate nitrite, if any, was corrected by subtracting the baseline from the ambient air signal. Owing to the low collection efficiency of these interfering species in the sampling coil and their low concentrations, the combined interference signal was estimated to be less than 10% of the total signal in the clean MBL. Potential interference from peroxynitric acid (HO2NO2) was suppressed by heating the Teflon perfluoroalkoxy alkanes (PFA) sampling line to 50 °C with a residence time of 0.8 s. The HO2NO2 steady-state concentration in the MBL was estimated to be less than 1 p.p.t.v. at a temperature of 23–26 °C in both flights, and thus interference from HO2NO2 was negligible41. 
Overall, the instrument baseline in the clean MBL was stable and low, and clear and strong ambient air signals (approximately ten times the detection limit) were observed. The accuracy of HONO measurements was confirmed by comparison with limb-scanning differential optical absorption spectroscopy (DOAS)36. The agreement between these two instruments was very good in wide power plant plumes, where HONO mixing ratios exceeded the lower detection limits of both instruments (Extended Data Fig. 3). HNO3 and pNO3- were quantitatively collected with a coil sampler and a fritted glass disk sampler, respectively. The collected nitrate in the two channels was reduced to nitrite by two cadmium (Cd) columns, and determined using two LPAP systems31, 32. Zero air was generated to establish measurement baselines: for HNO3 by passing the ambient air through a NaCl-coated denuder to remove HNO3, and for pNO3- through a Teflon filter and a NaCl-coated denuder to remove aerosol particles and HNO3. Potential interference from HONO, NO2 and PAN was corrected by subtracting the baselines from the ambient air signals. Ozone measurements were unavailable on 8 July 2013 and OH radical measurements were unavailable on 5 July 2013 owing to instrument malfunction. Steady-state OH radical concentrations were calculated and used in the budget analysis when OH radical measurements were not available42. Most of the parameters were observed at similar values during both flights, indicating that both flights captured the primary features of the local chemical environment. The lack of OH and O3 measurements on the different flights had a negligible impact on our analysis. Several spikes in NO2 and aerosol surface area density detected from ship exhaust were excluded from the analysis. Seventy-two-hour back-trajectories were calculated for both flights (Extended Data Fig. 
1) with the Lagrangian particle dispersion model FLEXPART43, version 9.02 (http://flexpart.eu), using six-hourly meteorological analysis data of the Global Forecasting System of the National Centers for Environmental Prediction (http://nomads.ncep.noaa.gov/txt_descriptions/GFS_half_degree_doc.shtml), interlaced with 3-h forecasts (00:00 UTC, 03:00 UTC, 06:00 UTC, 09:00 UTC, 12:00 UTC, 15:00 UTC, 18:00 UTC and 21:00 UTC), at a horizontal resolution of 0.5°. Every 5 min during the research flight, 10,000 particles were released at the then-current position of the NSF/NCAR C-130 and followed back in time for 72 h. A ‘particle’ here refers to an infinitesimally small parcel of air, which is only affected by three-dimensional transport, turbulence and convection, and does not have any removal processes (no deposition, washout, sedimentation, chemical losses). Centroids shown in the figures are based on an algorithm44 that reduces the residence probability distribution resulting from the locations of the 10,000 particles into five probable locations at each time interval. One aerosol sample was collected using a Teflon filter (Sartorius, pore size 0.45 μm, diameter 47 mm) on board the NSF/NCAR C-130 aircraft during every research flight from 30 min after takeoff to 30 min before landing. The total sampling volume ranged from 1.1 m3 to 1.5 m3 depending on the flight length (ranging from 6 h to 8 h). The filter sample was wrapped in aluminium foil and stored in a refrigerator until use. The pNO3- photolysis rate constant was determined using the filter sample in the laboratory. The photochemical experiments were conducted using a cylindrical flow reactor (inner diameter 10 cm, depth 1.5 cm) with a quartz window on the top, at a temperature of 21.0 ± 1.0 °C and a relative humidity of (50 ± 2)%. The filter sample was moved into the flow cell directly from the freezer for the photochemical experiment. 
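The centroid reduction described above (10,000 particle positions condensed to five probable locations per time interval) can be illustrated with a plain k-means clustering sketch; the actual algorithm of ref. 44 may differ:

```python
import math
import random

def kmeans(points, k=5, iters=50, seed=0):
    """Reduce a cloud of (lon, lat)-like 2D points to k representative
    centroids. A generic k-means sketch, not the algorithm of ref. 44."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: math.dist(p, centroids[j]))
            clusters[i].append(p)
        # move each centroid to the mean of its cluster (keep it if empty)
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids
```

Applied to the 10,000 particle positions at one backward time step, the five centroids stand in for the full residence-probability distribution when plotting trajectories.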
Compressed air was purified by flowing through an activated charcoal and Purafil chemisorbant column (Purafil) to remove NO2, VOCs, H2S, SO2, HNO3 and HONO and was used as carrier gas. Gaseous products, HONO and NO2, released during the experiment were flushed out of the reactor by the carrier gas, and were sampled by two coil samplers connected in series. The first 10-turn coil sampler scrubbed HONO with purified water at a collection efficiency of 100% (ref. 31), and the second 32-turn coil sampler was to scrub NO2 with an acetic acid modified SA/NED solution at a collection efficiency of 60% (ref. 45). The scrubbed nitrite and NO2 were converted to an azo dye with SA/NED reagents and analysed by two separate LPAP systems31, 45. The filter sample was exposed to the solar simulator radiation for 10 min; baselines were established for both HONO and NO2 before and after the light exposure. Photochemical production rates of HONO and NO2 were calculated from their time-integrated signals above the baselines over the period of light exposure. To correct for HONO and NO2 production from photolysis of HNO3 deposited on the flow reactor wall surface, a control experiment was conducted by irradiating the empty flow reactor. The control signals were subtracted from the sample exposure signals when calculating the production rates of HONO and NO2 from pNO3- photolysis. After 10 min of light exposure, nitrate in the filter sample was extracted with 15 ml of 1% NH4Cl buffer solution (pH = 8.5), reduced to nitrite by a Cd column and determined by LPAP. A 300-W ceramic xenon arc lamp (Perkin Elmer, model PE300BUV) was used as a light source. The ultraviolet light below 290 nm and the heat-generating infrared light were removed by a Pyrex glass filter and a water filter, respectively. The parabolic light beam irradiated only a circular area of 1-inch (2.54 cm) diameter in the centre of the flow reactor. 
The shape of the spectrum of the filtered light source is similar to that of the solar actinic spectrum in the MBL (Extended Data Fig. 4). The effective light intensity in the centre of the flow reactor under direct irradiation was measured to be about 3.5 times higher than that at tropical noon on the ground (solar elevation angle θ = 0°)1, 18, using a nitrate actinometer18. The production rates of HONO and NO2 were normalized by the amount of pNO3- exposed and the effective ultraviolet light intensity, to obtain the normalized photolysis rate constants of pNO3- (ref. 5): j(pNO3-→HONO) = (P_HONO/N) × (J_std/J) and j(pNO3-→NO2) = (P_NO2/N) × (J_std/J), where P_HONO and P_NO2 are the production rates of HONO and NO2, respectively; N is the light-exposed pNO3- amount determined in the extraction solution; J is the photolysis rate constant of nitrate in the actinometer solution exposed to the experimental light source; and the ‘standard’ photolysis rate constant of aqueous nitrate (J_std ≈ 3.0 × 10−7 s−1) is assumed for typical tropical summer conditions on the ground (solar elevation angle θ = 0°)1, 18, corresponding to a gas-phase HNO3 photolysis rate constant (j(HNO3)) of ~7.0 × 10−7 s−1. HONO is the major product from pNO3- photolysis, with an average production ratio of HONO to NO2 of 2.0. It should be noted that the j(pNO3-) value is calculated from the production rates of HONO and NO2, and thus it may underestimate the photolysis rate constant of pNO3- if other products, such as NO, are produced. However, NO was only a minor secondary product from the photolysis of HONO and NO2, and accounted for ~1% of the total production of HONO and NO2 in the flow reactor in a residence time of 8 s. Therefore, the lack of NO measurement in this study would not affect the j(pNO3-) measurement. The overall uncertainty in the photolysis rate constant measurement is about 50%, taking into account the measurement uncertainties in production rates of HONO and NO2, nitrate loading and effective ultraviolet light intensity. 
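The normalization described above can be sketched as follows, assuming the production rates are divided by the exposed nitrate amount and rescaled from the lamp's intensity to standard tropical-noon intensity via the actinometer ratio (variable names are illustrative, not the paper's notation):

```python
def normalized_j_pno3(p_hono, p_no2, n_nitrate, j_actinometer,
                      j_standard=3.0e-7):
    """Return (j_HONO, j_NO2), the per-channel normalized photolysis rate
    constants of pNO3- in s^-1. p_hono/p_no2: production rates during
    exposure; n_nitrate: light-exposed nitrate amount (same units as the
    integrated production); j_actinometer: aqueous-nitrate photolysis
    rate under the lamp; j_standard: the assumed tropical-noon value."""
    scale = j_standard / j_actinometer
    return p_hono / n_nitrate * scale, p_no2 / n_nitrate * scale
```

With the lamp measured at 3.5 times tropical-noon intensity, the scale factor is 1/3.5, and the HONO/NO2 ratio of the two returned constants reproduces the observed production ratio.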
It should be pointed out that the laboratory-determined j(pNO3-) represents merely an average photolysis rate constant of pNO3- collected during the entire flight covering a wide geographic area, and thus does not reflect the possible temporal and spatial variability in the actual j(pNO3-) during the flight. In addition, the photochemical reactivity of the bulk aerosol sample collected on the filter may be altered during sampling and handling, and thus the laboratory-determined j(pNO3-) using the bulk aerosol sample might be different from that under ambient conditions. Nevertheless, the laboratory-determined j(pNO3-) is still the first and best estimate of the pNO3- photolysis rate constant in the ambient atmosphere so far, and is in reasonable agreement with the values inferred from field observations (Fig. 2). To evaluate the global recycling NOx source from particulate nitrate photolysis, more aerosol samples need to be collected from different atmospheric environments all over the world, and better-designed experiments need to be conducted using these samples to more accurately determine the photolysis rate constants of particulate nitrate under conditions similar to those of the ambient atmosphere. The air masses encountered consisted mostly of aged air masses circulating within the boundary layer of the North Atlantic Ocean under a Bermuda high-pressure system for several days before reaching the measurement locations (Extended Data Fig. 1). Within this relatively isolated air mass in the MBL, the cycling of pNO3-–HONO–NOx was occurring much more rapidly, of the order of hours during the day, than dry deposition, which was of the order of days. Therefore, the diurnal chemistry of pNO3-–HONO–NOx in the MBL can be simulated by a zero-dimensional box model. Model simulations were conducted using the MCM v3.2 updated with the Jet Propulsion Laboratory's latest recommended kinetics10, 11, 46, and constrained by the field-measured long-lived species, such as O3, VOCs and total NOx. 
The model was initiated from 00:00 local time and was allowed to run for three diurnal cycles (72 h). The diurnal concentration profiles of short-lived species, including OH, HO2, HONO and NOx, were affected by their initial concentrations only during the first diurnal cycle of simulation. The results presented here (Fig. 3, Extended Data Figs 2 and 5) were from the second diurnal cycle of the simulation run. To simulate the time-varying cycling of pNO3-–HONO–NOx, the diurnal profiles of photolysis frequencies were also calculated by the MCM v3.2 and scaled by the measured values in the early afternoon. The time-dependent photolysis rate constant of particulate nitrate, j(pNO3-)(t), at a given time of the day was calculated as j(pNO3-)(t) = j(pNO3-) × j(HNO3)(t)/j(HNO3), where j(HNO3) is the photolysis rate constant of gas-phase HNO3 measured in the MBL and j(HNO3)(t) is its calculated diurnal profile. The ratio of pNO3- to HNO3 was set to 1, as observed in the MBL during the early afternoon. The photolysis of pNO3- is considered a source of both HONO and NO2. To include all the possible HONO sources in the model, the following reactions are also considered: the gas-phase reactions of OH with NO, of excited NO2 with H2O (refs 26 and 27), and of HO2·H2O with NO2 (refs 16 and 28), and the heterogeneous reactions of NO2 on sea-salt aerosol particles. Upper-limit HONO formation rates from reactions of excited NO2 with H2O and of HO2·H2O with NO2 are calculated using the latest recommendations27, 28. An upper-limit uptake coefficient of 10−4 was assumed for NO2 uptake on sea-salt aerosol47. HONO sinks in the model include its photolysis and the gas-phase reaction with OH radicals. The photolysis of pNO3- is a NOx source mainly through HONO as an intermediate. Other NOx sources, such as the photolysis of gaseous HNO3 and the reaction of gaseous HNO3 with OH, are negligible, and are not included here. NOx sinks include the formation of bromine nitrate (BrONO2), iodine nitrate (IONO2) and organic nitrate (RONO2) as well as HNO3 through the reactions of NO2 with OH and of NO with HO2. 
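The diurnal scaling described above can be sketched directly, assuming the laboratory value is modulated by the ratio of the instantaneous to the measured gas-phase HNO3 photolysis frequency:

```python
def j_pno3_at_time(j_pno3_lab, j_hno3_t, j_hno3_ref):
    """Time-dependent pNO3- photolysis rate constant (s^-1): the
    laboratory-normalized value scaled by the diurnal HNO3 photolysis
    frequency relative to its measured early-afternoon reference.
    A sketch of the scaling described in the text."""
    return j_pno3_lab * (j_hno3_t / j_hno3_ref)
```

At night j(HNO3)(t) goes to zero and so does the particulate-nitrate photolysis source, as the box model requires.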
The uptake of halogen nitrates and organic nitrates on the sea-salt aerosols and the following hydrolysis provide effective pathways for conversion of NOx to particulate nitrate in addition to HNO3 uptake. The photolysis lifetime of BrONO2 in the gas phase was estimated to be ~12 min in the MBL, somewhat longer than the uptake and hydrolysis lifetime of ~6 min using an uptake coefficient of 0.8 on sea-salt aerosols46. Therefore, two-thirds of BrONO2 is converted to pNO3- through hydrolysis on the aerosol particles at midday. The photolysis lifetime of IONO2 in the gas phase was estimated to be ~5 min (ref. 48), comparable to its uptake and hydrolysis lifetime46, 49. Therefore, half of IONO2 is converted to pNO3- via hydrolysis on the aerosol particles at midday. BrO was measured at 1.5 p.p.t.v. by the DOAS instrument. The IO level was below the detection limit and was assumed to be at the typical MBL concentration49, 50 of 0.5 p.p.t.v. To evaluate the impact of VOCs on reactive nitrogen chemistry, explicit methane chemistry is considered in the model. Higher VOCs are lumped into a single species, RH. The concentration of RH is adjusted by the sum of the abundance of individually measured VOCs (RiH) scaled by the ratio of their rate coefficients for OH+RiH to that for OH+CH4. The products of OH+RH reactions lead to the formation of RO2 radicals, carbonyls, peroxides, and organic nitrates. The yield of organic nitrate from the reaction of RO2 radicals with NO (reaction 3) is calculated based on the reaction rate of the CH3O2 radical with NO (k_CH3O2+NO) and the effective branching ratio β, where [RiO2] and [CH3O2] are the concentrations of the various RiO2 radicals formed from oxidation of various VOCs and of methane, respectively, and ki and βi are the recommended rate constants and branching ratios for the various RO2 radicals, respectively30, 51. The effective branching ratio β is estimated to be ~7% for our model. 
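The effective branching ratio described above can be sketched as a concentration- and rate-weighted average over the RO2 pool; this weighting is an assumption, since the paper's exact expression is not reproduced in this excerpt:

```python
def effective_beta(ro2_conc, ks, betas, k_ch3o2_no):
    """Effective organic-nitrate branching ratio for a lumped RO2 pool:
    per-radical yields ki*betai weighted by concentration, normalized by
    the assumed lumped rate constant k_CH3O2+NO. An illustrative form,
    not the authors' equation."""
    num = sum(c * k * b for c, k, b in zip(ro2_conc, ks, betas))
    return num / (k_ch3o2_no * sum(ro2_conc))
```

For a pool whose radicals all react at the CH3O2+NO rate, this reduces to the plain concentration-weighted mean of the individual branching ratios.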
The fate of RONO2 in the MBL is not well known; hydrolysis has been assumed to be an effective sink of organic nitrates in forested regions30. In our model, an uptake coefficient of 10−2 and a 100% nitrate yield from RONO2 uptake are assumed for organic nitrates on sea-salt aerosol. The model is initialized with typically measured parameters and is run at 15-min time steps for three diurnal cycles. Multiple runs are used to constrain the model uncertainty by propagating the errors of all considered parameters. Gaussian error propagation is applied for the uncertainty calculation. Overall, a model uncertainty of 28% (±1 s.d.) is calculated. The reactive nitrogen species are found to be reproducible for multiple days without considering deposition and transport. Sensitivity studies with the model indicate that the calculated HONO concentrations are relatively insensitive (≤2%) to the following assumed parameters: the RONO2 branching ratio (varying from 1% to 7% to 14%), the uptake coefficients of RONO2 and XONO2 (changing up and down by a factor of two), the nitrate yield from RONO2 hydrolysis (varying from 5% to 100%), the initial concentration of non-methane hydrocarbons (changing up and down by a factor of two), and the RO2+NO reaction rate constant (changing up and down by a factor of two). Changing the initial concentrations of BrO or IO up or down by a factor of two leads to a change of up to 4% in the calculated HONO. Finally, the sensitivity of the model output to the ratio of HNO3 to pNO3- was assessed by varying it between 0.5 and 2.0; the modelled HONO varied between 1.17 times and 0.74 times the value for a ratio of unity. Despite the simplicity of this model approach, the good agreement between model calculations and field observations for key species (Fig. 3 and Extended Data Fig. 5) shows its feasibility and usefulness for assessing the chemistry of radicals and reactive nitrogen in the MBL.
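The Gaussian error propagation mentioned above (combining the errors of all considered parameters in quadrature) can be sketched generically; `propagate_uncertainty` and its finite-difference derivatives are illustrative, not the authors' code:

```python
import math

def propagate_uncertainty(f, params, sigmas, rel_step=1e-6):
    """Gaussian propagation for independent parameters:
    sigma_f^2 = sum_i (df/dp_i * sigma_i)^2, with each partial
    derivative estimated by a central finite difference."""
    var = 0.0
    for i, (p, s) in enumerate(zip(params, sigmas)):
        h = max(abs(p), 1.0) * rel_step
        up = list(params); up[i] = p + h
        dn = list(params); dn[i] = p - h
        dfdp = (f(up) - f(dn)) / (2 * h)
        var += (dfdp * s) ** 2
    return math.sqrt(var)
```

Running the box model once per perturbed parameter in this way yields a combined 1-s.d. output uncertainty like the 28% quoted above.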
Janjic Z.,National Centers for Environmental Prediction |
Janjic T.,Alfred Wegener Institute for Polar and Marine Research |
Vasic R.,University Corporation for Atmospheric Research
Monthly Weather Review | Year: 2011
Starting from three Eulerian second-order nonlinear advection schemes for semi-staggered Arakawa grids B/E, advection schemes of fourth order of formal accuracy were developed. All three second-order advection schemes control the nonlinear energy cascade in the case of nondivergent flow by conserving quadratic quantities. Linearization of all three schemes leads to the same second-order linear advection scheme. The second-order term of the truncation error of the linear advection scheme has a special form so that it can be eliminated by modifying the advected quantity while still preserving consistency. Tests with linear advection of a cone confirm the advantage of the fourth-order scheme. However, if a localized, large-amplitude and high-wavenumber pattern is present in the initial conditions, the clear advantage of the fourth-order scheme disappears. The new nonlinear fourth-order schemes are quadratic conservative and reduce to the Arakawa Jacobian for advected quantities in the case of nondivergent flow. In the case of general flow the conservation properties of the new momentum advection schemes impose a stricter constraint on the nonlinear cascade than the original second-order schemes. However, for nondivergent flow, the conservation properties of the fourth-order schemes cannot be proven in the same way as those of the original second-order schemes. Therefore, demanding long-term and low-resolution nonlinear tests were carried out in order to investigate how well the fourth-order schemes control the nonlinear energy cascade. All schemes were able to maintain meaningful solutions throughout the test. Finally, the impact of the fourth-order momentum advection on global medium-range forecasts was examined. The 500-hPa anomaly correlation coefficient obtained using the best performing fourth-order scheme did not show an improvement compared to the tests using its second-order counterpart. © 2011 American Meteorological Society.
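The modified-advected-quantity idea described above can be illustrated in one dimension: subtracting one sixth of the discrete second difference from the advected field turns the second-order centered difference into the standard fourth-order one. A minimal sketch, not the B/E-grid implementation of the paper:

```python
def d1_second(q, i, dx):
    # second-order centered first derivative; truncation error (dx^2/6) q'''
    return (q[i + 1] - q[i - 1]) / (2 * dx)

def d1_fourth(q, i, dx):
    # apply the same centered difference to the modified quantity
    # q* = q - (1/6) * (second difference of q); algebraically this equals
    # the classic stencil (8(q[i+1]-q[i-1]) - (q[i+2]-q[i-2])) / (12 dx)
    qs = lambda j: q[j] - (q[j + 1] - 2 * q[j] + q[j - 1]) / 6.0
    return (qs(i + 1) - qs(i - 1)) / (2 * dx)
```

On q = x^3 the second-order formula carries an error of exactly dx^2, while the modified-quantity (fourth-order) formula is exact, confirming that the leading truncation term has been eliminated.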
News Article | January 6, 2017
Active Atlantic hurricane periods may not produce intense hurricanes that threaten the U.S. East Coast. The reason is a protective barrier formed by cooler ocean temperatures and stronger vertical wind shear along the East Coast of the United States, which weakens incoming hurricanes. This was revealed in new research published in Nature. The research was conducted at the University of Wisconsin-Madison by James Kossin, a federal atmospheric research scientist at the UW. He studied the relative intensity of hurricanes in quiet and active Atlantic hurricane periods by factoring in previously unstudied elements. In the research, Kossin pursued a novel approach, departing from the traditional focus on the tropical Atlantic as the chief hurricane development region by pulling in hurricane-producing conditions outside that region. Warm sea surface temperatures and weak wind shear favor hurricane development, which is why ocean temperatures and wind shear are tracked to gauge hurricane intensity. Kossin examined the apparent paradox of hurricanes formed in an active Atlantic basin weakening before landfall. To explain this anomaly, he explored other patterns and dug into past data. He studied data spanning 1947 to 2015, which gave information on location, central pressure, and maximum winds. Data from the National Centers for Environmental Prediction and the National Center for Atmospheric Research provided a benchmark of sea temperatures and wind shear for the sample period. Kossin found that even as the tropics produce hurricanes during active periods, those storms lose energy as they approach the U.S. coast, stonewalled by higher wind shear and cooler ocean temperatures. "They have to track through a gauntlet of high shear to reach the coast and many of them stop intensifying," said Kossin. 
In fact, he calls it a natural mechanism that kills hurricanes threatening the U.S. coast. This also implies that during quieter periods in the Atlantic basin, lower wind shear will strip away that protective barrier at the coast. That is why Kossin cautions that even if fewer hurricanes lash the coast during a quiet period, those that do can be stronger, falling in the Category 3 to Category 5 range. The study helps explain why there has been no major hurricane landfall in the United States for more than 11 years. Kossin also noted that Hurricane Matthew demonstrated the protective barrier formed by the mix of strong crosswinds and cool coastal waters: Matthew had weakened to 75 miles per hour when it lashed South Carolina, after packing a devastating 145 miles per hour as it passed by Haiti. © 2017 Tech Times, All rights reserved. Do not reproduce without permission.
Barlage M.,U.S. National Center for Atmospheric Research |
Chen F.,U.S. National Center for Atmospheric Research |
Tewari M.,U.S. National Center for Atmospheric Research |
Ikeda K.,U.S. National Center for Atmospheric Research |
And 6 more authors.
Journal of Geophysical Research: Atmospheres | Year: 2010
Simulated snowpack by the Noah land surface model (LSM) shows an early depletion due to excessive sublimation and too early onset of snowmelt. To mitigate these deficiencies, five model modifications are tested to improve snowpack performance: (1) time-varying snow albedo, (2) solar radiation adjustment for terrain slope and orientation, (3) reducing the surface exchange coefficient for stable boundary layers, (4) increase of fresh snow albedo, and (5) adjusting surface roughness length when snow is present. The Noah LSM is executed from 1 November 2007 to 1 August 2008 for the headwater region in the Colorado Rocky Mountains with complex terrain, and its results are evaluated against 1 km Snow Data Assimilation System (SNODAS) output and individual Natural Resources Conservation Service Snowpack Telemetry (SNOTEL) sites. The most effective way to improve magnitude and timing of seasonal maximum snow water equivalent (SWE) is the introduction of the time-varying albedo formulation and the increase in fresh snow albedo. Minor improvement is obtained by reducing nighttime sublimation through adjusting the stable boundary layer surface exchange coefficient. Modifying the surface roughness length over snow surfaces and adding a terrain slope and orientation adjustment for radiation has little effect on average SWE conditions for the entire modeling domain, though it can have a significant effect in certain regions. The net effect of all changes is to improve the magnitude and timing of seasonal maximum SWE, but the snow period end is now somewhat too long. Adding the terrain slope and orientation effects does have an effect on local surface energy flux components depending on the cell slope and orientation. Copyright 2010 by the American Geophysical Union.
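Modification (1) above, a time-varying snow albedo, is typically an aging decay from a fresh-snow value toward an aged-snow floor, reset when fresh snow falls. A sketch with illustrative constants and functional form, not the coefficients adopted in the Noah modifications:

```python
import math

def snow_albedo(age_days, alpha_fresh=0.85, alpha_old=0.55, tau_days=10.0):
    """Snow albedo as an exponential decay with snow age: starts at the
    fresh-snow value and relaxes toward an aged-snow floor. All constants
    here are hypothetical placeholders for the tuned model values."""
    return alpha_old + (alpha_fresh - alpha_old) * math.exp(-age_days / tau_days)
```

Raising `alpha_fresh` and slowing the decay both delay snowmelt onset, which is the direction of the corrections the study found most effective for the seasonal maximum SWE.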
News Article | October 25, 2016
A stunning, blue-and-green visualization of the globe allows viewers to see the world's wind and weather patterns as forecast by supercomputers around the world. The striking animation, called "earth," was designed by computer programmer Cameron Beccario, an engineering manager at the computer coding company Indeed Tokyo in Japan. The interactive graphic allows users to toggle between views that show the flow of wind; the movement of ocean currents and the height of waves; the air temperature; and the circulation of air pollutants around the globe. The visualization of Earth draws from the huge assortment of data from myriad instruments floating in the planet's atmosphere, drifting along ocean currents and sitting in the soil. For instance, the model uses the National Centers for Environmental Prediction's Global Forecast System, which uses supercomputers to run a model that predicts weather four times a day. That model uses dozens of different types of measurements, from soil moisture to precipitation to ozone concentration. Other data, like sea-surface temperature, are gathered and updated every day from an array of instruments, including infrared satellites, moored buoys and ships just passing through a region. Carbon-dioxide measurements track the concentration of the greenhouse gas at the Earth's surface. The visualization recently provided a view of Hurricane Matthew's destructive path, while another clip from "earth" revealed the location of monster waves over the course of a year. Yet another slice of the data reveals the path of the air pollutant sulfate as it traveled over the Pacific. Particulates of sulfate can block sunlight, thereby cooling the planet. But Beccario hopes people explore the site on their own. "The site is designed to be fairly sparse because I want visitors to discover their own insights. I think this is much more rewarding than being fed a particular narrative," Beccario said. 
The graphic was initially inspired by the wind map, a visualization of wind flowing across the United States created by artists Fernanda Viégas and Martin Wattenberg. "I thought that would be a great project to emulate and expand upon," Beccario told Live Science. "I've always been interested in science and weather, so this project was a great platform to explore those interests as well. I remembered thinking: What do the winds around Antarctica even look like?! I really wanted to see those patterns."
Gentine P.,Columbia University |
Holtslag A.A.K.,Wageningen University |
D'Andrea F.,Ecole Normale Superieure de Paris |
Ek M.,National Centers for Environmental Prediction
Journal of Hydrometeorology | Year: 2013
The onset of moist convection over land is investigated using a conceptual approach with a slab boundary layer model. The authors determine the essential factors for the onset of boundary layer clouds over land and study their relative importance. They are 1) the ratio of the temperature to the moisture lapse rates of the free troposphere, that is, the inversion Bowen ratio; 2) the mean daily surface temperature; 3) the relative humidity of the free troposphere; and 4) the surface evaporative fraction. A clear transition is observed between two regimes of moistening of the boundary layer as assessed by the relative humidity at the boundary layer top. In the first so-called wet soil advantage regime, the moistening results from the increase of the mixed-layer specific humidity, which linearly depends on the surface evaporative fraction and inversion Bowen ratio through a dynamic boundary layer factor. In the second so-called dry soil advantage regime, the relative humidity tendency at the boundary layer top is controlled by the thermodynamics and changes in the moist adiabat induced by the decreased temperature at the boundary layer top and the consequent reduction in saturation water vapor pressure. This regime pertains to very deep boundary layers under a weakly stratified free troposphere over hot surface conditions. In the context of the conceptual model, a rise in free-tropospheric temperature (global warming) increases the occurrence of deep convection and reduces the cloud cover over moist surfaces. This study provides new intuition and predictive capacity on the mechanism controlling the occurrence of moist convection over land. © 2013 American Meteorological Society.
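Factors 1) and 4) above can be written down directly; the definitions below are plausible forms expressed in energy units (the paper's exact scaling is not given in this abstract):

```python
CP = 1004.0   # specific heat of dry air, J kg^-1 K^-1
LV = 2.5e6    # latent heat of vaporization, J kg^-1

def inversion_bowen_ratio(dtheta_dz, dq_dz):
    """Ratio of the free-tropospheric potential-temperature lapse rate to
    the moisture lapse rate, converted to common energy units (assumed
    definition; K m^-1 and kg kg^-1 m^-1 inputs)."""
    return (CP * dtheta_dz) / (LV * dq_dz)

def evaporative_fraction(latent_flux, sensible_flux):
    """Surface evaporative fraction EF = LE / (LE + H), both in W m^-2."""
    return latent_flux / (latent_flux + sensible_flux)
```

A wet surface (EF near 1) and a small inversion Bowen ratio both push the boundary layer toward the wet soil advantage regime of cloud onset.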
Cucurull L.,National Centers for Environmental Prediction |
Cucurull L.,University Corporation for Atmospheric Research |
Derber J.C.,National Centers for Environmental Prediction |
Journal of Geophysical Research: Atmospheres | Year: 2013
Applications for space-based GPS technology have extended to the atmospheric field during the last two decades. More recently, numerical weather prediction (NWP) centers started incorporating global positioning system (GPS) radio occultation (RO) soundings into their operational assimilation algorithms, resulting in a significant improvement in weather forecasting skill. The main reasons for these benefits are the unbiased nature of GPS RO measurements, their high accuracy and precision, all-weather capability, and equal accuracy over land and ocean. Profiles of refractivity or bending angle are typically used, owing to the relative simplicity of their forward operators and the computational economy this implies. Although the NOAA National Centers for Environmental Prediction (NCEP) are using refractivities in their operational configuration, a bending angle forward operator has been developed, tested, and implemented, and was scheduled to replace the assimilation of refractivities operationally in May 2012. Results from NCEP's bending angle method (NBAM) show improvement over the assimilation of refractivities in all atmospheric fields evaluated. This article presents a detailed description and evaluation of NBAM, as well as the advantages this code offers over the assimilation of refractivities and over other existing algorithms that assimilate GPS RO bending angles. Key points: implementation of a new forward operator for GPS RO assimilation; benefits of the new forward operator; detailed analysis of the performance of the new forward operator. © 2012 American Geophysical Union. All Rights Reserved.
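A refractivity forward operator maps model pressure, temperature, and water vapor to the observed quantity. The sketch below uses the classic two-term refractivity formula (N = 77.6 P/T + 3.73×10⁵ e/T²), which is widely quoted in the RO assimilation literature; it is not NCEP's operational code, and the input values are hypothetical. Operational bending-angle operators such as NBAM add an Abel-transform mapping on top of refractivity, which is omitted here.

```python
def refractivity(P_hPa, T_K, e_hPa):
    """Classic two-term atmospheric refractivity:
    a dry term (pressure/temperature) plus a wet term (water vapor).
    Inputs: pressure and vapor pressure in hPa, temperature in K.
    """
    dry_term = 77.6 * P_hPa / T_K
    wet_term = 3.73e5 * e_hPa / T_K**2
    return dry_term + wet_term

# Typical near-surface midlatitude values (hypothetical).
N_surface = refractivity(1013.0, 288.0, 10.0)
print(N_surface)
```

Near-surface refractivity values of roughly 300-330 N-units are typical; the wet term vanishes in the dry upper troposphere and stratosphere, which is one reason RO profiles are most directly informative about temperature there.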
Ou M.H.,National Centers for Environmental Prediction |
Charles M.,National Centers for Environmental Prediction |
Collins D.C.,National Centers for Environmental Prediction
Weather and Forecasting | Year: 2016
CPC requires the reforecast-calibrated Global Ensemble Forecast System (GEFS) to support the production of its official 6-10- and 8-14-day temperature and precipitation forecasts. While a large sample size of forecast-observation pairs is desirable for generating the necessary model climatology, variances, and covariances with observations, the reforecast sampling should make the most efficient use of available computing resources. A series of experiments was performed to assess the impact on calibrated forecast skill of using a smaller sample size than the currently available reforecast dataset. This study focuses on the skill of week-2 probabilistic forecasts of the 7-day-mean 2-m temperature and accumulated precipitation. The tercile forecasts are expressed as below-, near-, and above-normal temperature/median precipitation over the continental United States (CONUS). Calibration statistics were calculated using an ensemble regression technique from 25 yr of daily, 11-member GEFS reforecasts for 1986-2010, which were then used to postprocess the GEFS model forecasts for 2011-13. In assessing the skill of calibrated model output using a reforecast dataset with fewer years and ensemble members, and an ensemble run less frequently than daily, it was determined that reducing the number of ensemble members to six or fewer and reducing the frequency of reforecast runs from daily to once a week were achievable with minimal loss of skill. However, reducing the number of years of reforecasts to fewer than 25 resulted in greater skill degradation. The loss of skill was statistically significant when using only 18 yr of reforecasts, from 1993 to 2010, to generate model statistics. © 2016 American Meteorological Society.
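The tercile forecast format described above can be illustrated with a minimal, uncalibrated baseline: count the fraction of ensemble members falling below, within, and above the climatological tercile boundaries. The paper's actual method applies an ensemble regression calibration on top of such raw counts; this sketch, with hypothetical anomaly values and tercile boundaries, only shows how the three category probabilities are formed.

```python
def tercile_probabilities(members, lower_tercile, upper_tercile):
    """Raw (uncalibrated) probabilities of the below-, near-, and
    above-normal categories from ensemble member counts."""
    n = len(members)
    below = sum(m < lower_tercile for m in members) / n
    above = sum(m > upper_tercile for m in members) / n
    near = 1.0 - below - above
    return below, near, above

# Hypothetical 11-member ensemble of week-2 mean 2-m temperature
# anomalies (deg C) at one grid point, with hypothetical tercile bounds.
members = [-1.2, -0.4, 0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.4, 1.8, 2.2]
probs = tercile_probabilities(members, lower_tercile=-0.5, upper_tercile=0.5)
print(probs)
```

With fewer reforecast years, the estimated tercile boundaries themselves become noisier, which is one intuition for why the study finds that cutting years degrades skill more than cutting members or run frequency.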