
Arlington, VA, United States

The National Science Foundation (NSF) is a United States government agency that supports fundamental research and education in all the non-medical fields of science and engineering. Its medical counterpart is the National Institutes of Health. With an annual budget of about US$7.0 billion, the NSF funds approximately 20% of all federally supported basic research conducted by the United States' colleges and universities. In some fields, such as mathematics, computer science, economics, and the social sciences, the NSF is the major source of federal backing. The NSF's director, deputy director, and the 24 members of the National Science Board (NSB) are appointed by the President of the United States and confirmed by the United States Senate. The director and deputy director are responsible for administration, planning, budgeting, and day-to-day operations of the foundation, while the NSB meets six times a year to establish its overall policies. The current NSF director, confirmed in March 2014, is France A. Córdova, former president of Purdue University. (Wikipedia)

Scheiner S.M., National Science Foundation
Oikos | Year: 2012

A metric of biodiversity is proposed that combines three of its key components: abundance, phylogeny, and ecological function. This metric is an expansion of the current abundance-based metric that uses Hill numbers, the effective number of types in a sample if all types had the same mean proportional abundance. I define analogous proportional measures of phylogenetic divergence and functional distinctiveness. Phylogenetic divergence is measured as the sum of the proportional share of each species of a given branch of a phylogeny. Functional distinctiveness can be measured in two ways, as the proportional share of each species of a specified ecological function, or as the relative distance of each species based on functional trait values. Because all three aspects of biodiversity are measured in the same fashion (relative proportions) in similar units (effective numbers of species), an integrated metric can be defined. The combined metric provides understanding of covariation among the components and how management for one component may trade off against others. The metric can be partitioned into components of richness and evenness, and into subsets and variation among subsets, all of which can be related through a simple multiplicative framework. This metric is a complement to, rather than a replacement of, current metrics of phylogenetic and functional diversity. More work is needed to link this new metric to ecological theory, determine its error structure, and devise methods for its effective assessment. © 2012 The Authors. Oikos © 2012 Nordic Society Oikos.
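As a concrete illustration of the abundance-based Hill-number framework that this metric extends, the sketch below computes the effective number of species of order q from a vector of abundances. It is a minimal sketch of the standard Hill-number formula only; the function name and example counts are hypothetical and not taken from the paper.

```python
# Minimal sketch of Hill numbers (effective number of species).
# Standard formula: D_q = (sum_i p_i^q)^(1/(1-q)), with the q -> 1
# limit equal to exp(Shannon entropy). Example data are hypothetical.
import numpy as np

def hill_number(abundances, q):
    """Effective number of species of order q for a vector of abundances."""
    p = np.asarray(abundances, dtype=float)
    p = p[p > 0] / p.sum()          # convert counts to proportions
    if np.isclose(q, 1.0):
        # q = 1 is the limiting case: exponential of Shannon entropy
        return np.exp(-np.sum(p * np.log(p)))
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

counts = [50, 30, 15, 4, 1]         # hypothetical species counts
print(hill_number(counts, 0))       # richness: 5.0
print(hill_number(counts, 1))       # exp(Shannon): weights common species
print(hill_number(counts, 2))       # inverse Simpson: weights dominants
```

Because each order q yields diversity in the same units (effective species), analogous proportional measures of phylogenetic and functional diversity can, as the abstract argues, be combined multiplicatively with this one.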

Haq B.U., National Science Foundation | Haq B.U., University Pierre and Marie Curie
Global and Planetary Change | Year: 2014

Eustatic sea-level changes of the Cretaceous are reevaluated based on a synthesis of global stratigraphic data. A new terminology for local/regional or relative sea-level changes (eurybatic shifts) is proposed to distinguish them from global (eustatic) sea-level changes, with the observation that all measures of sea-level change in any given location are eurybatic, even when they include a strong global signal. Solid-earth factors that influence inherited regional topography, and thus locally modify physical measures of the amplitude of sea-level rises and falls, are reviewed. One of these factors, dynamic topography (the surface expression of mass flow in the upper mantle on land- and seascapes), is considered most pertinent in altering local measures of the amplitude of sea-level events on third-order time scales (0.5-3.0 Myr). Insights gained from these models have led to the reconciliation of variance between amplitude estimates of eurybatic shifts in any given region and global measures of eustatic changes. Global estimates of third-order events can only be guesstimated at best by averaging the eurybatic data from widely distributed time-synchronous events. Revised curves for both long-term and short-term sea-level variations are presented for the Cretaceous Period. The curve representing the long-term envelope shows that average sea levels throughout the Cretaceous remained higher than the present-day mean sea level (75-250 m above PDMSL). Sea level reached a trough in the mid Valanginian (~75 m above PDMSL), followed by two high points, the first in the early Barremian (~160-170 m above PDMSL) and the second, the highest peak of the Cretaceous, in the earliest Turonian (~240-250 m above PDMSL). The curve also displays two ~20 Myr-long periods of relatively high and stable sea levels (Aptian through early Albian and Coniacian through Campanian). The short-term curve identifies 58 third-order eustatic events in the Cretaceous, most of which have been documented in several basins, while a smaller number are included provisionally as eustatic, awaiting confirmation. The amplitude of sea-level falls varies from a minimum of ~20 m to a maximum of just over 100 m, and their duration varies between 0.5 and 3 Myr. The causes of these relatively rapid, and at times large-amplitude, sea-level falls in the Cretaceous remain unresolved, although, based mainly on oxygen-isotopic data, transient ice cover on Antarctica remains in vogue as the driver. This idea has, however, suffered a recent setback following the discovery of pristine foraminiferal tests in the Turonian of Tanzania whose oxygen-isotopic values show little variation, implying an absence of glacioeustasy at least in the Turonian. The prevalence of fourth-order (~400 Kyr) cyclicity through most of the Cretaceous (and elsewhere in the Paleozoic, Jurassic and Cenozoic) implies that periodicity on this time scale, presumably driven by long-term orbital eccentricity, may be a fundamental feature of depositional sequences throughout the Phanerozoic. © 2013.
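To make the averaging step concrete, the toy sketch below pools eurybatic amplitude estimates of one time-synchronous third-order event from several widely separated basins into a rough eustatic estimate with a spread. The basin names and values are invented for illustration; they are not data from the paper.

```python
# Toy illustration (assumed workflow, not from the paper): average
# eurybatic amplitudes of one time-synchronous sea-level fall across
# basins to approximate the eustatic signal. Values are hypothetical.
import statistics

eurybatic_amplitudes_m = {
    "Basin A": 55.0,   # local (eurybatic) sea-level fall, in metres
    "Basin B": 72.0,
    "Basin C": 48.0,
    "Basin D": 80.0,
}

values = list(eurybatic_amplitudes_m.values())
mean_m = statistics.mean(values)
spread_m = statistics.stdev(values)
print(f"eustatic estimate ~{mean_m:.0f} m (+/- {spread_m:.0f} m across basins)")
```

The spread across basins gives a crude sense of how strongly regional (e.g. dynamic-topography) effects distort the shared global signal.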

Werbos P.J., National Science Foundation
IEEE Computational Intelligence Magazine | Year: 2011

This paper reviews the evolution of four generations of concepts of the "smart grid," the role of computational intelligence in meeting their needs, and key examples of relevant research and tools. The first generation focused on traditional concepts like building more wires, automated meters, workforce development, and reducing blackouts, but it already had many uses for computational intelligence. The second generation, promulgated by Massoud Amin at EPRI, entailed greater use of global control systems and stability concepts, and coincided with new issues of market design and time-of-day pricing. New third- and fourth-generation concepts aim for a truly intelligent power grid, addressing new requirements for a sustainable global energy system and making full use of new methods for optimization across time, pluggable electric vehicles, renewable energy, storage, distributed intelligence, and new neural networks for handling complexity and stochastic challenges. Important opportunities for society and new fundamental research challenges exist throughout. © 2011 IEEE.

Roco M.C., National Science Foundation
Journal of Nanoparticle Research | Year: 2011

A global scientific and societal endeavor was set in motion by the nanotechnology vision formulated in 1999 that inspired the National Nanotechnology Initiative (NNI) and other national and international R&D programs. Establishing foundational knowledge at the nanoscale was the main focus of the nanotechnology research community in the first decade. As of 2009, this new knowledge underpinned a worldwide market of about a quarter of a trillion dollars, of which about $91 billion was in US products that incorporate nanoscale components. Nanotechnology is already evolving toward becoming a general-purpose technology by 2020, encompassing four generations of products with increasing structural and dynamic complexity: (1) passive nanostructures, (2) active nanostructures, (3) nanosystems, and (4) molecular nanosystems. By 2020, the increasing integration of nanoscale science and engineering knowledge and of nanosystems promises mass applications of nanotechnology in industry, medicine, and computing, and in better comprehension and conservation of nature. Nanotechnology's rapid development worldwide is a testimony to the transformative power of identifying a concept or trend and laying out a vision at the synergistic confluence of diverse scientific research areas. This paper provides a brief perspective on the development of the NNI since 2000 in the international context, the main outcomes of the R&D programs after 10 years, the governance aspects specific to this emerging field, lessons learned, and, most importantly, how the nanotechnology community should prepare for the future. © Springer Science+Business Media B.V. 2011.

Ardell A.J., National Science Foundation
Acta Materialia | Year: 2010

Data on the coarsening of γ′ (Ni3Al) precipitates in binary Ni-Al alloys are re-examined quantitatively in light of the theory of transinterface diffusion-controlled (TIDC) coarsening, which predicts time-dependent behavior of the type ⟨r⟩^n ≈ kt for the growth of precipitates of average radius ⟨r⟩, and X̄ ≈ κt^(-1/n) for the depletion of the solute concentration in the matrix, X̄. The exponent n is intimately related to the width of the precipitate-matrix interface, δ, which depends on r as δ ∝ r^m (m = n - 2). The scaled distribution of particle sizes (PSD) also depends on n, while the rate constants k and κ depend on the thermophysical constants of the alloy system. In Ni-Al alloys n = 2.4, determined from analyzing three different sets of PSDs. Quantitative analysis yields interfacial free energies and chemical diffusion coefficients that agree exceptionally well with extant data. The TIDC theory is the only theory that is consistent, both qualitatively and quantitatively, with the entirety of the data.
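As a rough illustration of how a temporal exponent of this kind can be extracted from coarsening data, the sketch below fits n from the relation ⟨r⟩^n ≈ kt via a log-log least-squares fit, since log⟨r⟩ = (1/n)·log t + const. This is an assumed generic workflow on synthetic data, not Ardell's analysis or his PSD-based determination of n.

```python
# Sketch (assumed workflow, not from the paper): recover the coarsening
# exponent n from <r>^n ~ k*t by linear regression in log-log space.
# Synthetic data are generated with n = 2.4 for illustration.
import numpy as np

t = np.logspace(1, 5, 20)                 # ageing times (arbitrary units)
n_true, k = 2.4, 1e-3
r = (k * t) ** (1.0 / n_true)             # mean radii obeying <r>^n = k*t

# log r = (1/n) * log t + (1/n) * log k, so the slope is 1/n
slope, _ = np.polyfit(np.log(t), np.log(r), 1)
print(f"fitted n = {1.0 / slope:.2f}")    # recovers ~2.4
```

On real measurements the same fit would carry scatter, which is why the abstract emphasizes determining n from scaled particle-size distributions as well as from growth kinetics.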
