The National Aeronautics and Space Administration (NASA) is the United States government agency responsible for the civilian space program as well as aeronautics and aerospace research. President Dwight D. Eisenhower established NASA in 1958 with a distinctly civilian orientation, encouraging peaceful applications in space science. The National Aeronautics and Space Act was passed on July 29, 1958, disestablishing NASA's predecessor, the National Advisory Committee for Aeronautics. The new agency became operational on October 1, 1958. Since that time, most U.S. space exploration efforts have been led by NASA, including the Apollo moon-landing missions, the Skylab space station, and later the Space Shuttle. Currently, NASA supports the International Space Station and oversees the development of the Orion Multi-Purpose Crew Vehicle, the Space Launch System, and Commercial Crew vehicles. The agency is also responsible for the Launch Services Program, which provides oversight of launch operations and countdown management for unmanned NASA launches. NASA science is focused on better understanding Earth through the Earth Observing System; advancing heliophysics through the efforts of the Science Mission Directorate's Heliophysics Research Program; exploring bodies throughout the Solar System with advanced robotic missions such as New Horizons; and researching astrophysics topics, such as the Big Bang, through the Great Observatories and associated programs. NASA also shares data with various national and international organizations, such as data from the Greenhouse Gases Observing Satellite. (Source: Wikipedia)
Durante M., Helmholtz Center for Heavy Ion Research | Reviews of Modern Physics | Year: 2011
The health risks of space radiation are arguably the most serious challenge to space exploration, possibly preventing these missions due to safety concerns or increasing their costs to amounts beyond what would be acceptable. Radiation in space is substantially different from that on Earth: high-energy (E) and high-charge (Z) particles (HZE) provide the main contribution to the equivalent dose in deep space, whereas γ rays and low-energy α particles are the major contributors on Earth. This difference causes high uncertainty in the estimated radiation health risk (including cancer and noncancer effects) and makes protection extremely difficult. In fact, shielding is very difficult in space: the very high energy of the cosmic rays and the severe mass constraints in spaceflight are serious hindrances to effective shielding. Here the physical basis of space radiation protection is described, including the most recent achievements in space radiation transport codes and shielding approaches. Although deterministic and Monte Carlo transport codes can now describe the interaction of cosmic rays with matter well, more accurate double-differential nuclear cross sections are needed to improve the codes. Models of energy deposition in biological molecules and the related effects should also be developed to achieve accurate risk models for long-term exploratory missions. Passive shielding can be effective for solar particle events; however, it is of limited use against galactic cosmic rays (GCR). Active shielding would have to overcome challenging technical hurdles to protect against GCR. Thus, improved risk assessment and genetic and biomedical approaches are a more likely solution to GCR radiation protection issues. © 2011 American Physical Society.
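The contrast drawn above between γ rays and HZE particles can be made concrete through the LET-dependent quality factor Q(L) that converts absorbed dose to equivalent dose. A minimal sketch using the standard ICRP-60 piecewise form of Q(L); the absorbed dose and the two LET values are illustrative, not taken from the review:

```python
import math

def quality_factor(let_kev_um):
    """ICRP-60 quality factor Q as a function of unrestricted LET (keV/µm)."""
    if let_kev_um < 10.0:
        return 1.0
    elif let_kev_um <= 100.0:
        return 0.32 * let_kev_um - 2.2
    else:
        return 300.0 / math.sqrt(let_kev_um)

# Same absorbed dose, very different equivalent dose H = Q * D:
absorbed_dose_gy = 0.01  # illustrative absorbed dose in Gy
for label, let in [("low-LET (gamma-like)", 0.3), ("high-LET (HZE-like)", 150.0)]:
    q = quality_factor(let)
    print(f"{label}: Q = {q:.1f}, H = {q * absorbed_dose_gy * 1000:.1f} mSv")
```

This is why the same physical dose carries far more biological weight in deep space than on the ground, and why transport codes must track the full charge and energy spectrum rather than a single dose number.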
Agency: GTR | Branch: EPSRC | Program: | Phase: Fellowship | Award Amount: 946.06K | Year: 2015
Policy makers and regulatory bodies are demanding that the aerospace industry reduce CO2 emissions by 50% and NOx emissions by 80% by 2020. To meet these drastic demands and ensure affordable air travel in the future, it is essential to make lighter aircraft that use minimum fuel. The aerospace research community recognises the need for a dramatic performance improvement and is considering several new aircraft concepts that move away from the conventional two-wing-one-fuselage configuration. This brings new challenges to aircraft design. A wing is a highly complex structure to design, as the design must consider the complex interaction between aerodynamics and structural behaviour. Current design practice is therefore very much based on data from previous successful designs. The challenge of departing from the conventional aircraft is that there are few successful historical design data applicable to new concept aircraft. Once we have a wing design, however, there are sophisticated computational methods that analyse how the wing behaves under external flight conditions. In fact, there has been a significant level of development in computational analysis methods taking advantage of growing computational power. A prime example of this is the recent development of computational modelling of materials. Using this technology, new advanced materials can be created in half the time that traditional material development takes, and the return on investment in computational materials research has been estimated at between 300% and 900%. This fellowship is at the heart of developing sophisticated computational methods to design aircraft configurations that have not been considered before. The majority of current methods analyse how a given material or structure responds to the external environment, such as flight at Mach 0.8 and 38,000 ft.
What is different about the methods in this research is that they are the inverse of the analysis methods: they will determine the best combination of advanced material and structural configuration based on the external environment, and hence design the optimum wing for the given flight conditions. My research approach is to represent the design problem as a set of mathematical functions and to develop computational methods to find the optimum solution. The methods will therefore find the optimum design for both the materials and the structural configuration at the same time. The outcome of this fellowship will provide engineers with a sophisticated tool to design complex aircraft structures. The tools will be developed and disseminated in a way that allows them to be used on a range of other complex engineering problems. The UK holds 17% of the global aerospace market with revenue of £24 billion and accounts for 3.6% of national employment. With the international civil aerospace market forecast to grow to $4 trillion by 2030, the UK market has the opportunity to grow to $352 billion by 2030. It is critical that the UK develops this unique capability, to ensure we maintain the market share of these high value products and processes and that the economy has the opportunity for growth. Furthermore, the weight savings made from optimum use of materials will help meet the emission targets, thus ensuring a sustainable environment for future generations.
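The idea of simultaneously selecting material and structural sizing from the flight requirement, rather than analysing a fixed design, can be sketched as a tiny constrained optimization. This toy chooses the material and the thinnest skin of a bending panel that still meets a required flexural rigidity, at minimum mass; the candidate material properties, panel area, and stiffness target are all invented for illustration and have nothing to do with the fellowship's actual tools:

```python
# Toy inverse design: pick material AND thickness together so a panel meets a
# required bending rigidity D = E*t^3/12 at minimum mass m = rho*t*area.
# All numbers below are illustrative placeholders.

CANDIDATES = {
    # name: (Young's modulus E [Pa], density rho [kg/m^3]) -- textbook-order values
    "aluminium": (70.0e9, 2700.0),
    "titanium": (110.0e9, 4500.0),
    "CFRP": (135.0e9, 1600.0),
}
AREA = 2.0     # panel area [m^2], illustrative
D_REQ = 1.0e4  # required flexural rigidity [Pa*m^3], illustrative

def min_mass_design(candidates, d_req, area):
    """Return (name, E, t, mass) of the lightest feasible material/thickness pair."""
    best = None
    for name, (e_pa, rho) in candidates.items():
        t = (12.0 * d_req / e_pa) ** (1.0 / 3.0)  # thinnest skin meeting the constraint
        mass = rho * t * area
        if best is None or mass < best[3]:
            best = (name, e_pa, t, mass)
    return best

name, e_pa, t, mass = min_mass_design(CANDIDATES, D_REQ, AREA)
print(f"best: {name}, t = {t*1000:.1f} mm, mass = {mass:.1f} kg")
```

The real problem couples aerodynamics and structural response and needs numerical optimization rather than a closed-form thickness, but the structure of the search, with design variables spanning both material and configuration, is the same.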
Ostrikov K., CSIRO | Neyts E.C., University of Antwerp | Advances in Physics | Year: 2013
The unique plasma-specific features and physical phenomena in the organization of nanoscale solid-state systems across a broad range of elemental compositions, structures, and dimensionalities are critically reviewed. These effects make it possible to localize and control energy and matter at nanoscales and to produce self-organized nano-solids with highly unusual and superior properties. A unifying conceptual framework based on the control of production, transport, and self-organization of precursor species is introduced, and a variety of plasma-specific non-equilibrium and kinetics-driven phenomena across many temporal and spatial scales are explained. When the plasma is localized to micrometer and nanometer dimensions, new emergent phenomena arise. The examples range from semiconducting quantum dots and nanowires, chirality control of single-walled carbon nanotubes, ultra-fine manipulation of graphenes, nano-diamond, and organic matter to nano-plasma effects and nano-plasmas of different states of matter. © 2013 Taylor and Francis Group, LLC.
Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences | Year: 2013
Recent drought events underscore the vulnerability of Amazon forests to understorey fires. The long-term impact of fires on biodiversity and forest carbon stocks depends on the frequency of fire damages and the deforestation rates of burned forests. Here, we characterized the spatial and temporal dynamics of understorey fires (1999-2010) and deforestation (2001-2010) in southern Amazonia using new satellite-based estimates of annual fire activity (greater than 50 ha) and deforestation (greater than 10 ha). Understorey forest fires burned more than 85,500 km² between 1999 and 2010 (2.8% of all forests). Forests that burned more than once accounted for 16% of all understorey fires. Repeated fire activity was concentrated in Mato Grosso and eastern Pará, whereas single fires were widespread across the arc of deforestation. Routine fire activity in Mato Grosso coincided with annual periods of low night-time relative humidity, suggesting a strong climate control on both single and repeated fires. Understorey fires occurred in regions with active deforestation, yet the interannual variability of fire and deforestation were uncorrelated, and only 2.6% of forests that burned between 1999 and 2008 were deforested for agricultural use by 2010. Evidence from the past decade suggests that future projections of frontier landscapes in Amazonia should separately consider economic drivers to project future deforestation and climate to project fire risk.
Mumma M.J., NASA | Annual Review of Astronomy and Astrophysics | Year: 2011
Cometary nuclei contain the least modified material from the formative epoch of our planetary system, and their compositions reflect a range of processes experienced by material prior to its incorporation in the cometary nucleus. Dynamical models suggest that icy bodies in the main cometary reservoirs (Kuiper Belt, Oort Cloud) formed in a range of environments in the protoplanetary disk, and (for the Oort Cloud) even in disks surrounding neighboring stars of the Sun's birth cluster. Photometric and spectroscopic surveys of more than 100 comets have enabled taxonomic groupings based on free radical species and on the crystallinity of rocky grains. Since 1985, new surveys have provided emerging taxonomies based on the abundance ratios of primary volatiles. More than 20 primary chemical species are now detected in bright comets. Measurements of nuclear spin ratios (in water, ammonia, and methane) and of isotopic ratios (D/H in water and HCN; 14N/15N in CN and HCN) have provided critical insights into the factors affecting the formation of the primary species. The identification of an abundant product species (HNC) has provided clear evidence of chemical production in the inner coma. Parallel advances have occurred in the astrochemistry of hot corinos, circumstellar disks, and dense cloud cores. In this review, we address the current state of cometary taxonomy and compare it with current astrochemical insights. © 2011 by Annual Reviews. All rights reserved.
Astrophysical Journal | Year: 2012
Recent analysis revealed that supergranules (convection cells seen at the Sun's surface) are advected by the zonal flows at depths equal to the widths of the cells themselves. Here we probe the structure of the meridional circulation by cross-correlating maps of the Doppler velocity signal using a series of successively longer time lags between maps. We find that the poleward meridional flow decreases in amplitude with time lag and reverses direction to become an equatorward return flow at time lags >24 hr. These cross-correlation results are dominated by larger and deeper cells at longer time lags. (The smaller cells have shorter lifetimes and do not contribute to the correlated signal at longer time lags.) We determine the characteristic cell size associated with each time lag by comparing the equatorial zonal flows measured at different time lags with the zonal flows associated with different cell sizes from a Fourier analysis. This association gives a characteristic cell size of 50 Mm at a 24 hr time lag. This indicates that the poleward meridional flow returns equatorward at depths >50 Mm, just below the base of the surface shear layer. A substantial and highly significant equatorward flow (4.6 ± 0.4 m s⁻¹) is found at a time lag of 28 hr, corresponding to a depth of 70 Mm. This represents one of the first positive detections of the Sun's meridional return flow and illustrates the power of using supergranules to probe the Sun's internal dynamics. © 2012. The American Astronomical Society. All rights reserved.
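The core technique above, finding the time lag at which two Doppler maps correlate best, can be illustrated in one dimension: build a trace that is a pure time-shifted copy of another and recover the shift from the peak of the lag correlation. The synthetic series and the 24-sample shift are illustrative stand-ins for the study's 2-D Doppler maps and hourly lags:

```python
import numpy as np

# Synthetic stand-in for two Doppler maps: map_b shows the same pattern as
# map_a, advected true_lag samples later (pure advection, no cell evolution).
rng = np.random.default_rng(0)
n, true_lag = 500, 24
base = rng.normal(size=n + true_lag)
map_a = base[:n]
map_b = base[true_lag:true_lag + n]

def lag_correlation(a, b, lag):
    """Correlation of a advanced by `lag` samples against b."""
    if lag == 0:
        return float(np.corrcoef(a, b)[0, 1])
    return float(np.corrcoef(a[lag:], b[:-lag])[0, 1])

# The lag maximizing the correlation recovers the advection time.
best_lag = max(range(60), key=lambda k: lag_correlation(map_a, map_b, k))
print(best_lag)  # recovers 24
```

In the actual analysis, the correlated signal at each lag is dominated by cells of a particular size and therefore depth, which is what lets the lag axis be translated into a depth axis for the meridional flow.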
Physical Review D - Particles, Fields, Gravitation and Cosmology | Year: 2013
I show that the high energy neutrino flux predicted to arise from active galactic nuclei cores can explain the PeV neutrinos detected by IceCube without conflicting with the constraints from the observed extragalactic cosmic-ray and γ-ray backgrounds. © 2013 Published by the American Physical Society.
Earth and Planetary Science Letters | Year: 2011
Experimental studies of the partitioning of siderophile elements between metallic and silicate liquids have provided fundamental constraints on the early history and differentiation conditions of the Earth. With many new studies even in the last 20 yr, several models have emerged from the results, including low pressure equilibrium, high pressure equilibrium, and combined high and low pressure multi-stage models. The reasons for the multitude of resulting models - silicate melt composition, pressure effects on silicate melt structure, different methods for calculating metal activity coefficients, and the role of deep mantle phases - have not been specifically addressed before, yet they are critical in evaluating which models are more likely and realistic. These four reasons for the divergence of results will be discussed and evaluated. The behavior of the moderately siderophile elements Ni and Co will be compared using several approaches, each of which results in the same conclusion for Ni and Co. This consistency eliminates the supposition that one approach or another gives a more accurate answer for element partitioning. Newly updated expressions for 11 elements are then derived, presented, and applied to the early Earth to evaluate the idea of a late stage equilibration between a core-forming metal and silicate melt (or magma ocean). It is possible to explain all 11 elements at conditions of 27-33 GPa, 3300-3600 K, ΔIW = -1, for peridotite and a metallic liquid containing 10% of a light element. The main difference between the current result and several other recent modeling efforts is that Mn, V, and Cr are hosted in deep mantle phases as well as the core. The other elements - Ni, Co, Mo, W, P, Cu, Ga, and Pd - are hosted in the core, and detailed modeling here shows the importance of accounting for oxygen fugacity, silicate and metallic liquid compositions, and temperature and pressure.
The idea of late stage metal-silicate equilibrium over a restricted pressure and temperature range leaving a chemical fingerprint on the upper mantle remains viable. © 2011.
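Partitioning expressions of the kind updated in this work are commonly parameterized as a linear function of inverse temperature, pressure over temperature, and oxygen fugacity, and then evaluated at proposed equilibration conditions. A minimal sketch of that functional form; the coefficient values are invented for illustration and are not the paper's fitted expressions:

```python
# Generic metal-silicate partitioning parameterization of the form commonly
# fit in this literature: log10 D = a + b/T + c*P/T + d*DeltaIW
# (T in K, P in GPa). Coefficients below are hypothetical, NOT the paper's fits.

def log_d(a, b, c, d, T, P, delta_iw):
    """Evaluate log10 of the metal/silicate partition coefficient D."""
    return a + b / T + c * P / T + d * delta_iw

# Hypothetical Ni-like coefficients, evaluated at the proposed single-stage
# equilibration conditions (27-33 GPa, 3300-3600 K, DeltaIW = -1).
coef = dict(a=0.5, b=3000.0, c=-60.0, d=-0.5)
for T, P in [(3300.0, 27.0), (3600.0, 33.0)]:
    value = log_d(**coef, T=T, P=P, delta_iw=-1.0)
    print(f"T={T:.0f} K, P={P:.0f} GPa -> log10 D = {value:.2f}")
```

Comparing such predicted D values against the siderophile element abundances observed in the upper mantle is what lets a narrow pressure-temperature window like 27-33 GPa be singled out.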
Nature Climate Change | Year: 2014
Understanding climate sensitivity is critical to projecting climate change in response to a given forcing scenario. Recent analyses have suggested that transient climate sensitivity is at the low end of the present model range, taking into account the reduced warming rates during the past 10-15 years, during which forcing has increased markedly. In contrast, comparisons of modelled feedback processes with observations indicate that the most realistic models have higher sensitivities. Here I analyse results from recent climate modelling intercomparison projects to demonstrate that the transient climate sensitivity to historical aerosols and ozone is substantially greater than the transient climate sensitivity to CO2. This enhanced sensitivity is primarily caused by more of the forcing being located at Northern Hemisphere middle to high latitudes, where it triggers more rapid land responses and stronger feedbacks. I find that accounting for this enhancement largely reconciles the two sets of results, and I conclude that the lowest end of the range of transient climate response to CO2 in present models and assessments (<1.3 °C) is very unlikely. © 2014 Macmillan Publishers Limited. All rights reserved.
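The reconciliation argument can be illustrated with a zero-dimensional efficacy calculation: transient warming is modeled as a sensitivity times the sum of forcings, each weighted by an efficacy, with the NH-concentrated aerosol/ozone forcing given efficacy greater than one. All numbers below are illustrative placeholders, not the paper's fitted values:

```python
# Toy efficacy calculation: warming = sensitivity * sum(efficacy_i * forcing_i).
# Sensitivity, efficacies, and forcings are illustrative, not the paper's values.
SENS = 0.5  # K per (W/m^2), transient sensitivity to CO2-like forcing
EFFICACY = {"co2": 1.0, "aerosol_ozone": 1.5}  # NH-weighted forcing acts more strongly
FORCING = {"co2": 1.8, "aerosol_ozone": -0.9}  # W/m^2, illustrative historical values

def transient_warming(forcing, efficacy, sens):
    """Zero-dimensional transient warming with per-agent forcing efficacies."""
    return sens * sum(efficacy[k] * f for k, f in forcing.items())

naive = SENS * sum(FORCING.values())                    # treats all forcing equally
with_eff = transient_warming(FORCING, EFFICACY, SENS)   # boosts the negative aerosol term
print(naive, with_eff)
```

With efficacy above one applied to the negative aerosol/ozone forcing, the same underlying sensitivity produces less historical warming, so matching the observed record requires a higher sensitivity to CO2, which is the direction of the paper's reconciliation.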
Journal of Climate | Year: 2012
The perennial ice area was drastically reduced to 38% of its climatological average in 2007 but recovered slightly in 2008, 2009, and 2010, with the areas being 10%, 24%, and 11% higher than in 2007, respectively. However, trends in extent and area remained strongly negative at -12.2% and -13.5% decade⁻¹, respectively. The thick component of the perennial ice, called multiyear ice, as detected by satellite data during the winters of 1979-2011 was studied, and the results reveal that the multiyear ice extent and area are declining at an even more rapid rate of -15.1% and -17.2% decade⁻¹, respectively, with a record low value in 2008 followed by higher values in 2009, 2010, and 2011. Such a high rate of decline of the thick component of the Arctic ice cover means a reduction in the average ice thickness and an even more vulnerable perennial ice cover. The decline of the multiyear ice area from 2007 to 2008 was not as strong as that of the perennial ice area from 2006 to 2007, suggesting a strong role of second-year ice melt in the latter. The sea ice cover is shown to be strongly correlated with surface temperature, which is increasing at about 3 times the global average in the Arctic, but appears weakly correlated with the Arctic Oscillation (AO), which controls the atmospheric circulation in the region. An 8-9-yr cycle is apparent in the multiyear ice record, which could explain, in part, the slight recovery in the last 3 yr.
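Trends quoted in % decade⁻¹, like the multiyear ice decline above, are typically the least-squares slope of the annual series expressed as a percentage of the long-term mean per decade. A minimal sketch on a synthetic series; the synthetic decline and noise level are illustrative, not the satellite record:

```python
import numpy as np

def trend_percent_per_decade(years, area):
    """OLS slope of area vs. year, expressed as % of the series mean per decade."""
    slope, _ = np.polyfit(years, area, 1)  # slope in area units per year
    return 100.0 * slope * 10.0 / np.mean(area)

# Synthetic multiyear-ice-like series: linear decline plus noise (illustrative).
years = np.arange(1979, 2012)
rng = np.random.default_rng(1)
area = 5.0 - 0.06 * (years - 1979) + rng.normal(0.0, 0.1, years.size)  # 10^6 km^2

print(f"{trend_percent_per_decade(years, area):.1f} % per decade")
```

Because the denominator is the record mean, the same absolute loss rate reads as a steeper percentage trend for the thinner multiyear ice class than for the larger perennial ice area, consistent with the ordering of the two quoted trends.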