Washington, DC, United States

The National Aeronautics and Space Administration (NASA) is the United States government agency responsible for the civilian space program as well as aeronautics and aerospace research. President Dwight D. Eisenhower established NASA in 1958 with a distinctly civilian orientation, encouraging peaceful applications in space science. The National Aeronautics and Space Act was passed on July 29, 1958, disestablishing NASA's predecessor, the National Advisory Committee for Aeronautics (NACA). The new agency became operational on October 1, 1958.

Since that time, most U.S. space exploration efforts have been led by NASA, including the Apollo Moon-landing missions, the Skylab space station, and later the Space Shuttle. Currently, NASA is supporting the International Space Station and is overseeing the development of the Orion Multi-Purpose Crew Vehicle, the Space Launch System, and Commercial Crew vehicles. The agency is also responsible for the Launch Services Program, which provides oversight of launch operations and countdown management for unmanned NASA launches.

NASA science is focused on better understanding Earth through the Earth Observing System; advancing heliophysics through the efforts of the Science Mission Directorate's Heliophysics Research Program; exploring bodies throughout the Solar System with advanced robotic missions such as New Horizons; and researching astrophysics topics, such as the Big Bang, through the Great Observatories and associated programs. NASA also shares data with various national and international organizations, such as observations from the Greenhouse Gases Observing Satellite.

Patent
University Corporation For Atmospheric Research, Montana State University and NASA | Date: 2016-04-27

A shared optics and telescope, a filter, and a micropulse differential absorption LIDAR are provided, with methods to use the same. The shared optics and telescope includes a pair of axicon lenses, a secondary mirror, a primary mirror including an inner mirror portion and an outer mirror portion, the inner mirror portion operable to expand the deflected annular transmission beam, and the outer mirror portion operable to collect the return signal. The filter includes an etalon and a first filter. The micropulse differential absorption LIDAR includes first and second laser signals, a laser transmission beam selection switch, a first laser return signal switch, and a toggle timer.


A system deposits a film on a substrate while determining mechanical stress experienced by the film. A substrate is provided in a deposition chamber. A support disposed in the chamber supports a circular portion of the substrate with a first surface of the substrate facing a deposition source and a second surface being reflective. An optical displacement sensor is positioned in the deposition chamber in a spaced-apart relationship with respect to a portion of the substrate's second surface located at approximately the center of the circular portion of the substrate. When the deposition source deposits a film on the first surface, a displacement of the substrate is measured using the optical displacement sensor. A processor is programmed to use the substrate displacement to determine a radius of curvature of the substrate, and to use the radius of curvature to determine mechanical stress experienced by the film during deposition.
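The processing chain the abstract describes (center displacement, then radius of curvature, then film stress) is commonly closed with Stoney's equation. A minimal sketch under assumed geometry and material values; the spherical-cap radius approximation and the silicon numbers below are illustrative assumptions, not taken from the patent:

```python
def radius_of_curvature(deflection_m: float, support_radius_m: float) -> float:
    """Spherical-cap approximation for small deflections: R ~ a^2 / (2*delta)."""
    return support_radius_m**2 / (2.0 * deflection_m)

def stoney_film_stress(E_s, nu_s, t_s, t_f, R):
    """Stoney's equation: biaxial film stress from substrate curvature.
    E_s, nu_s: substrate Young's modulus (Pa) and Poisson ratio
    t_s, t_f: substrate and film thickness (m); R: radius of curvature (m)."""
    return E_s * t_s**2 / (6.0 * (1.0 - nu_s) * R * t_f)

# Illustrative numbers: a 525 um silicon wafer, a 100 nm film, and a
# 1 um center deflection over a 25 mm supported radius.
R = radius_of_curvature(1e-6, 25e-3)                       # 312.5 m
sigma = stoney_film_stress(169e9, 0.27, 525e-6, 100e-9, R)  # ~340 MPa
```

The key design point is that Stoney's equation needs only the substrate's elastic properties, not the film's, which is why a single displacement sensor at the center of a circular support suffices.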


Durante M., Helmholtz Center for Heavy Ion Research | Cucinotta F.A., NASA
Reviews of Modern Physics | Year: 2011

The health risks of space radiation are arguably the most serious challenge to space exploration, possibly preventing these missions due to safety concerns or increasing their costs to amounts beyond what would be acceptable. Radiation in space is substantially different from that on Earth: high-energy (E) and charge (Z) particles (HZE) provide the main contribution to the equivalent dose in deep space, whereas γ rays and low-energy α particles are major contributors on Earth. This difference causes high uncertainty in the estimated radiation health risk (including cancer and noncancer effects), and makes protection extremely difficult. In fact, shielding is very difficult in space: the very high energy of the cosmic rays and the severe mass constraints in spaceflight represent a serious hindrance to effective shielding. Here the physical basis of space radiation protection is described, including the most recent achievements in space radiation transport codes and shielding approaches. Although deterministic and Monte Carlo transport codes can now describe well the interaction of cosmic rays with matter, more accurate double-differential nuclear cross sections are needed to improve the codes. Energy deposition in biological molecules and related effects should also be developed to achieve accurate risk models for long-term exploratory missions. Passive shielding can be effective for solar particle events; however, it is limited for galactic cosmic rays (GCR). Active shielding would have to overcome challenging technical hurdles to protect against GCR. Thus, improved risk assessment and genetic and biomedical approaches are a more likely solution to GCR radiation protection issues. © 2011 American Physical Society.
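The equivalent dose the abstract refers to weights absorbed dose by a radiation weighting factor, which is why HZE particles dominate in deep space despite modest absorbed doses. A minimal sketch using ICRP-103-style weighting factors; the dose values are invented for illustration:

```python
# ICRP-103-style radiation weighting factors (photons 1, protons 2,
# alpha particles and heavy ions 20); treating all HZE ions as 20 is
# a simplification for this sketch.
WEIGHTS = {"gamma": 1.0, "proton": 2.0, "alpha": 20.0, "HZE": 20.0}

def equivalent_dose_sv(absorbed_doses_gy: dict) -> float:
    """Equivalent dose H = sum over radiation types R of w_R * D_R (Sv),
    given absorbed doses D_R in Gy."""
    return sum(WEIGHTS[r] * d for r, d in absorbed_doses_gy.items())

# Deep-space example: HZE contributes 0.4 of the 0.51 Sv total
# even though its absorbed dose is the smallest component.
H = equivalent_dose_sv({"gamma": 0.01, "proton": 0.05, "HZE": 0.02})
```

This weighting is what drives the abstract's point: shielding that only modestly reduces absorbed dose from HZE particles can still leave a large equivalent dose.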


Ostrikov K., CSIRO | Neyts E.C., University of Antwerp | Meyyappan M., NASA
Advances in Physics | Year: 2013

The unique plasma-specific features and physical phenomena in the organization of nanoscale solid-state systems in a broad range of elemental composition, structure, and dimensionality are critically reviewed. These effects make it possible to localize and control energy and matter at nanoscales and to produce self-organized nano-solids with highly unusual and superior properties. A unifying conceptual framework based on the control of production, transport, and self-organization of precursor species is introduced and a variety of plasma-specific non-equilibrium and kinetics-driven phenomena across the many temporal and spatial scales is explained. When the plasma is localized to micrometer and nanometer dimensions, new emergent phenomena arise. The examples range from semiconducting quantum dots and nanowires, chirality control of single-walled carbon nanotubes, ultra-fine manipulation of graphenes, nano-diamond, and organic matter to nano-plasma effects and nano-plasmas of different states of matter. © 2013 Taylor and Francis Group, LLC.


Morton D.C., NASA
Philosophical transactions of the Royal Society of London. Series B, Biological sciences | Year: 2013

Recent drought events underscore the vulnerability of Amazon forests to understorey fires. The long-term impact of fires on biodiversity and forest carbon stocks depends on the frequency of fire damage and deforestation rates of burned forests. Here, we characterized the spatial and temporal dynamics of understorey fires (1999-2010) and deforestation (2001-2010) in southern Amazonia using new satellite-based estimates of annual fire activity (greater than 50 ha) and deforestation (greater than 10 ha). Understorey forest fires burned more than 85,500 km² between 1999 and 2010 (2.8% of all forests). Forests that burned more than once accounted for 16 per cent of all understorey fires. Repeated fire activity was concentrated in Mato Grosso and eastern Pará, whereas single fires were widespread across the arc of deforestation. Routine fire activity in Mato Grosso coincided with annual periods of low night-time relative humidity, suggesting a strong climate control on both single and repeated fires. Understorey fires occurred in regions with active deforestation, yet the interannual variability of fire and deforestation was uncorrelated, and only 2.6 per cent of forests that burned between 1999 and 2008 were deforested for agricultural use by 2010. Evidence from the past decade suggests that future projections of frontier landscapes in Amazonia should separately consider economic drivers to project future deforestation and climate to project fire risk.


Mumma M.J., NASA | Charnley S.B., NASA
Annual Review of Astronomy and Astrophysics | Year: 2011

Cometary nuclei contain the least modified material from the formative epoch of our planetary system, and their compositions reflect a range of processes experienced by material prior to its incorporation in the cometary nucleus. Dynamical models suggest that icy bodies in the main cometary reservoirs (Kuiper Belt, Oort Cloud) formed in a range of environments in the protoplanetary disk, and (for the Oort Cloud) even in disks surrounding neighboring stars of the Sun's birth cluster. Photometric and spectroscopic surveys of more than 100 comets have enabled taxonomic groupings based on free radical species and on crystallinity of rocky grains. Since 1985, new surveys have provided emerging taxonomies based on the abundance ratios of primary volatiles. More than 20 primary chemical species are now detected in bright comets. Measurements of nuclear spin ratios (in water, ammonia, and methane) and of isotopic ratios (D/H in water and HCN; 14N/15N in CN and HCN) have provided critical insights into factors affecting formation of the primary species. The identification of an abundant product species (HNC) has provided clear evidence of chemical production in the inner coma. Parallel advances have occurred in astrochemistry of hot corinos, circumstellar disks, and dense cloud cores. In this review, we address the current state of cometary taxonomy and compare it with current astrochemical insights. © 2011 by Annual Reviews. All rights reserved.


Stecker F.W., NASA
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013

I show that the high energy neutrino flux predicted to arise from active galactic nuclei cores can explain the PeV neutrinos detected by IceCube without conflicting with the constraints from the observed extragalactic cosmic-ray and γ-ray backgrounds. © 2013 Published by the American Physical Society.


Experimental studies of the partitioning of siderophile elements between metallic and silicate liquids have provided fundamental constraints on the early history and differentiation conditions of the Earth. With many new studies even in the last 20 yr, several models have emerged from the results, including low pressure equilibrium, high pressure equilibrium, and combined high and low pressure multi-stage models. The reasons for the multitude of resulting models - silicate melt composition, pressure effects on silicate melt structure, different methods for calculating metal activity coefficients, and the role of deep mantle phases - have not been specifically addressed before, yet are critical in evaluating the more likely and realistic models. The four reasons leading to the divergence of results will be discussed and evaluated. The behavior of the moderately siderophile elements Ni and Co will be compared using several approaches, each of which results in the same conclusion for Ni and Co. This consistency will eliminate the supposition that one or the other approaches gives a more accurate answer for element partitioning. Newly updated expressions for 11 elements are then derived and presented and applied to the early Earth to evaluate the idea of a late stage equilibration between a core forming metal and silicate melt (or magma ocean). It is possible to explain all 11 elements at conditions of 27-33 GPa, 3300-3600 K, ΔIW = -1, for peridotite and a metallic liquid containing 10% of a light element. The main difference between the current result and several other recent modeling efforts is that Mn, V, and Cr are hosted in deep mantle phases as well as the core. The other elements - Ni, Co, Mo, W, P, Cu, Ga, and Pd - are hosted in core, and detailed modeling here shows the importance of accounting for oxygen fugacity, silicate and metallic liquid compositions, as well as temperature and pressure.
The idea of late stage metal-silicate equilibrium at a restricted pressure and temperature range leaving a chemical fingerprint on the upper mantle remains viable. © 2011.
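Expressions of the kind the abstract mentions typically fit the metal/silicate partition coefficient as a function of temperature, pressure, and oxygen fugacity. A sketch of a common functional form; the coefficients a, b, c below are invented for illustration, not fitted values from the study:

```python
def log_D_metal_silicate(a, b, c, T_K, P_GPa, delta_IW, valence):
    """Common parameterization of the metal/silicate partition coefficient
    for an element of valence n in the silicate melt:
        log D = a + b/T + c*P/T - (n/4) * deltaIW
    where deltaIW is oxygen fugacity in log units relative to the
    iron-wustite buffer. More reducing conditions (more negative deltaIW)
    raise D, i.e. make the element more siderophile.
    a, b, c are element-specific fit parameters (assumed here)."""
    return a + b / T_K + c * P_GPa / T_K - (valence / 4.0) * delta_IW

# Illustrative (made-up) coefficients for a 2+ element at the
# magma-ocean conditions quoted in the abstract (~30 GPa, ~3500 K, IW-1).
logD = log_D_metal_silicate(a=1.0, b=3000.0, c=-50.0,
                            T_K=3500.0, P_GPa=30.0,
                            delta_IW=-1.0, valence=2)
```

The oxygen-fugacity term is why the abstract stresses accounting for fO2: at fixed P and T, a one-log-unit change in ΔIW shifts log D by n/4, which is large for high-valence elements such as W or Mo.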


Shindell D.T., NASA
Nature Climate Change | Year: 2014

Understanding climate sensitivity is critical to projecting climate change in response to a given forcing scenario. Recent analyses have suggested that transient climate sensitivity is at the low end of the present model range, based on the reduced warming rates of the past 10-15 years, during which forcing has increased markedly. In contrast, comparisons of modelled feedback processes with observations indicate that the most realistic models have higher sensitivities. Here I analyse results from recent climate modelling intercomparison projects to demonstrate that transient climate sensitivity to historical aerosols and ozone is substantially greater than the transient climate sensitivity to CO2. This enhanced sensitivity is primarily caused by more of the forcing being located at Northern Hemisphere middle to high latitudes, where it triggers more rapid land responses and stronger feedbacks. I find that accounting for this enhancement largely reconciles the two sets of results, and I conclude that the lowest end of the range of transient climate response to CO2 in present models and assessments (<1.3 °C) is very unlikely. © 2014 Macmillan Publishers Limited. All rights reserved.
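The reconciliation argument can be illustrated with a simple energy-balance scaling in which aerosol and ozone forcing carries an efficacy greater than one. The forcing and warming numbers below are illustrative assumptions, not the paper's values:

```python
def inferred_tcr(dT_obs, F_ghg, F_other, efficacy, F_2x=3.7):
    """Invert a simple transient energy-balance scaling:
        dT_obs = (TCR / F_2x) * (F_ghg + efficacy * F_other)
    dT_obs: observed warming (K); F_ghg: greenhouse-gas forcing (W/m^2);
    F_other: aerosol + ozone forcing (W/m^2, negative for net cooling);
    efficacy: warming per unit forcing relative to CO2;
    F_2x: forcing for doubled CO2 (W/m^2). Returns TCR in K."""
    return dT_obs * F_2x / (F_ghg + efficacy * F_other)

# Same observed warming, two assumptions about aerosol/ozone efficacy.
tcr_e10 = inferred_tcr(0.8, 2.8, -1.0, efficacy=1.0)  # ~1.64 K
tcr_e15 = inferred_tcr(0.8, 2.8, -1.0, efficacy=1.5)  # ~2.28 K
```

Because the aerosol/ozone forcing is negative, giving it a higher efficacy shrinks the effective net forcing and raises the TCR inferred from the same observed warming, which is the direction of the reconciliation the abstract describes.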


Comiso J.C., NASA
Journal of Climate | Year: 2012

The perennial ice area was drastically reduced to 38% of its climatological average in 2007 but recovered slightly in 2008, 2009, and 2010 with the areas being 10%, 24%, and 11% higher than in 2007, respectively. However, trends in extent and area remained strongly negative at -12.2% and -13.5% decade⁻¹, respectively. The thick component of the perennial ice, called multiyear ice, as detected by satellite data during the winters of 1979-2011 was studied, and results reveal that the multiyear ice extent and area are declining at an even more rapid rate of -15.1% and -17.2% decade⁻¹, respectively, with a record low value in 2008 followed by higher values in 2009, 2010, and 2011. Such a high rate in the decline of the thick component of the Arctic ice cover means a reduction in the average ice thickness and an even more vulnerable perennial ice cover. The decline of the multiyear ice area from 2007 to 2008 was not as strong as that of the perennial ice area from 2006 to 2007, suggesting a strong role of second-year ice melt in the latter. The sea ice cover is shown to be strongly correlated with surface temperature, which is increasing at about 3 times the global average in the Arctic but appears weakly correlated with the Arctic Oscillation (AO), which controls the atmospheric circulation in the region. An 8-9-yr cycle is apparent in the multiyear ice record, which could explain, in part, the slight recovery in the last 3 yr.
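Trends quoted in % per decade, like the -12.2% figure above, are typically ordinary least-squares slopes normalized to the series mean. A minimal sketch on synthetic data; the numbers are invented, not the satellite record:

```python
def trend_percent_per_decade(years, values):
    """Ordinary least-squares slope of values vs. years, expressed as a
    percentage of the series mean per decade."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    slope = (sum((y - my) * (v - mv) for y, v in zip(years, values))
             / sum((y - my) ** 2 for y in years))
    return 10.0 * slope / mv * 100.0  # per-year slope -> % of mean per decade

# Synthetic series: ice extent falling 0.1 units/yr around a mean of 4.4
years = list(range(1979, 2012))
extent = [6.0 - 0.1 * (y - 1979) for y in years]
trend = trend_percent_per_decade(years, extent)  # about -22.7% per decade
```

Normalizing by the series mean is what lets extent and area trends, which have different absolute magnitudes, be compared directly as percentages.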
