Greer J.B., National Geospatial-Intelligence Agency (NGA)
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2010

The linear mixture model for hyperspectral images assumes that all the image spectra lie on a high-dimensional simplex with corners called endmembers. Given the set of endmembers, one typically calculates fractional abundances for each pixel using constrained least squares. This method tends to reconstruct the spectra as combinations of most, if not all, of the endmembers. We instead assume that pixels are combinations of only a few of the endmembers, yielding sparse abundance vectors. We introduce a new method, similar to Matching Pursuit (MP) from the signal processing literature, to calculate these sparse abundances. We combine this sparse demixing algorithm with dictionary learning methods to automatically calculate endmembers for a provided set of spectra. We apply our method to an AVIRIS image of Cuprite, NV, for which we compare our endmembers with spectral signatures from the USGS spectral library. © 2010 SPIE.
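
As a rough illustration of the greedy selection the abstract describes, the sketch below runs a plain Matching Pursuit loop over a matrix E whose columns hold endmember spectra. The function name, the unit-norm assumption on the columns, and the fixed iteration count are illustrative assumptions, not the paper's algorithm; the published method also handles physical abundance constraints that are omitted here.

    import numpy as np

    def mp_demix(pixel, E, n_iter=5):
        """Matching-Pursuit-style sketch: explain a pixel with only a few endmembers.

        pixel : (bands,) observed spectrum
        E     : (bands, m) endmember matrix; columns are assumed to have unit norm
        """
        residual = pixel.astype(float).copy()
        abundances = np.zeros(E.shape[1])
        for _ in range(n_iter):
            correlations = E.T @ residual             # correlation with each endmember
            j = int(np.argmax(np.abs(correlations)))  # best-matching endmember
            abundances[j] += correlations[j]          # MP coefficient update
            residual -= correlations[j] * E[:, j]     # remove its contribution
        return abundances

The result is a sparse abundance vector: only the endmembers actually selected by the loop receive nonzero weight, in contrast to constrained least squares over the full endmember set.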

Minter C.F., National Geospatial-Intelligence Agency (NGA)
Advances in the Astronautical Sciences | Year: 2010

Many satellite tracking systems today, like GNSS, require orbit estimates for large (>20 satellites) constellations in near-real time. However, as the number of satellites in a constellation increases, the state size and number of computations required in the Kalman filter process also increase. The number of computations can make the simultaneous ephemerides solution for the entire constellation impractical. Estimating each satellite's ephemeris individually may not be ideal if satellites are correlated through common parameters, like gravity and other perturbation terms. This paper examines a variation of the extended Kalman filter, the ensemble Kalman filter, which is particularly adept at solving nonlinear systems with large state sizes. The ensemble Kalman filter requires fewer operations since it avoids the transition matrix calculation and the n × n matrix multiplications found in the traditional extended Kalman filter. In general, the ensemble Kalman filter formulation has improved stability and accuracy since the nonlinear model characteristics are preserved when time-propagating the state error covariance matrix. Results show that the ensemble Kalman filter is computationally more efficient and more accurate as compared with the traditional extended Kalman filter for large constellation sizes.
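
A minimal sketch of a single ensemble Kalman filter cycle may help make the computational claim concrete: every ensemble member is pushed through the full nonlinear dynamics, and the covariances needed for the gain are formed directly from ensemble statistics, so no transition matrix or n × n covariance propagation is required. The stochastic (perturbed-observation) form, the function names propagate and h, and the explicit matrix inverse are illustrative assumptions, not the formulation used in the paper.

    import numpy as np

    def enkf_step(ensemble, z, propagate, h, R, rng):
        """One ensemble Kalman filter cycle (perturbed-observation form).

        ensemble  : (N, n) array of N state samples of dimension n
        z         : (m,) measurement vector
        propagate : maps a state vector to its time-propagated state
        h         : maps a state vector to an (m,) predicted measurement
        R         : (m, m) measurement noise covariance
        rng       : numpy random Generator used to perturb the measurement
        """
        # Forecast: propagate every member through the nonlinear dynamics.
        forecast = np.array([propagate(x) for x in ensemble])
        Hx = np.array([h(x) for x in forecast])        # predicted measurements

        # Sample covariances from ensemble anomalies (no transition matrix needed).
        N = ensemble.shape[0]
        Xa = forecast - forecast.mean(axis=0)
        Ya = Hx - Hx.mean(axis=0)
        Pxy = Xa.T @ Ya / (N - 1)                      # state/measurement covariance
        Pyy = Ya.T @ Ya / (N - 1) + R                  # innovation covariance
        K = Pxy @ np.linalg.inv(Pyy)                   # Kalman gain

        # Update each member against a perturbed copy of the measurement.
        perturbed = z + rng.multivariate_normal(np.zeros(len(z)), R, size=N)
        return forecast + (perturbed - Hx) @ K.T

As the abstract notes, the intended savings for a large constellation come from avoiding the transition matrix calculation and the n × n covariance products of the traditional extended Kalman filter.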

Crispell D., National Geospatial-Intelligence Agency (NGA) | Mundy J., Brown University | Taubin G., Brown University
IEEE Transactions on Geoscience and Remote Sensing | Year: 2012

Given a set of high-resolution images of a scene, it is often desirable to predict the scene's appearance from viewpoints not present in the original data for purposes of change detection. When significant 3-D relief is present, a model of the scene geometry is necessary for accurate prediction to determine surface visibility relationships. In the absence of an a priori high-resolution model (such as those provided by LIDAR), scene geometry can be estimated from the imagery itself. These estimates, however, cannot, in general, be exact due to uncertainties and ambiguities present in image data. For this reason, probabilistic scene models and reconstruction algorithms are ideal due to their inherent ability to predict scene appearance while taking into account such uncertainties and ambiguities. Unfortunately, existing data structures used for probabilistic reconstruction do not scale well to large and complex scenes, primarily due to their dependence on large 3-D voxel arrays. The work presented in this paper generalizes previous probabilistic 3-D models in such a way that multiple orders of magnitude savings in storage are possible, making high-resolution change detection of large-scale scenes from high-resolution aerial and satellite imagery possible. Specifically, the inherent dependence on a discrete array of uniformly sized voxels is removed through the derivation of a probabilistic model which represents uncertain geometry as a density field, allowing implementations to efficiently sample the volume in a nonuniform fashion. © 2006 IEEE.
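
One way to read the density-field idea is by analogy with volume rendering: if surface occlusion along a viewing ray is described by a density alpha(s), the probability that the ray remains unoccluded up to depth s is the exponential of the negative accumulated optical depth, and that integral can be accumulated over ray segments of arbitrary, nonuniform length. The short sketch below illustrates only this accumulation; the interval layout and function name are assumptions for illustration and do not reproduce the paper's data structure.

    import numpy as np

    def visibility_along_ray(segment_lengths, densities):
        """Probability that a ray is still unoccluded at the end of each segment.

        segment_lengths : (k,) lengths of consecutive ray segments (may be nonuniform)
        densities       : (k,) occlusion density, assumed constant within each segment
        """
        optical_depth = np.cumsum(np.asarray(segment_lengths) * np.asarray(densities))
        return np.exp(-optical_depth)

    # Coarse segments through empty space, fine segments near a likely surface.
    lengths = np.array([5.0, 5.0, 0.25, 0.25, 0.25, 5.0])
    alphas  = np.array([0.0, 0.0, 1.5,  3.0,  1.5,  0.0])
    print(visibility_along_ray(lengths, alphas))

Because the segments need not be uniform, storage and computation can be concentrated where the geometry is uncertain or detailed, which is the source of the storage savings claimed above.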

Greer J.B., National Geospatial-Intelligence Agency (NGA)
IEEE Transactions on Image Processing | Year: 2012

In the linear mixture model (LMM) for hyperspectral images, all the image spectra lie on a high-dimensional simplex with corners called endmembers. Given a set of endmembers, the standard calculation of fractional abundances with constrained least squares typically identifies the spectra as combinations of most, if not all, endmembers. We assume instead that pixels are combinations of only a few endmembers, yielding abundance vectors that are sparse. We introduce sparse demixing (SD), a method similar to orthogonal matching pursuit, for calculating these sparse abundances. We demonstrate that SD outperforms an existing L1 demixing algorithm, which we prove depends adversely on the angles between endmembers. We combine SD with dictionary learning methods to automatically calculate endmembers for a provided set of spectra. Applying it to an airborne visible/infrared imaging spectrometer (AVIRIS) image of Cuprite, NV, yields endmembers that compare favorably with signatures from the USGS spectral library. © 2011 IEEE.
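
For contrast with the greedy approach, the sketch below shows a generic L1-regularized (lasso-style) abundance estimate solved by iterative soft thresholding; it is not the specific L1 demixing algorithm compared against in the paper, and the regularization weight, step size, and nonnegativity handling are illustrative choices only.

    import numpy as np

    def l1_demix(pixel, E, lam=0.01, n_iter=500):
        """Generic ISTA solver for an L1-regularized abundance estimate:
            minimize 0.5 * ||pixel - E a||^2 + lam * ||a||_1,  with a >= 0.

        pixel : (bands,) observed spectrum
        E     : (bands, m) endmember matrix
        """
        a = np.zeros(E.shape[1])
        step = 1.0 / np.linalg.norm(E, 2) ** 2       # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = E.T @ (E @ a - pixel)             # gradient of the quadratic data term
            a = a - step * grad
            a = np.maximum(a - step * lam, 0.0)      # soft threshold + nonnegativity
        return a

Abundances produced this way are only approximately sparse, since small coefficients are shrunk rather than selected; the abstract reports that this kind of L1 demixing depends adversely on the angles between endmembers, which motivates the greedy SD alternative.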

News Article
Site: http://phys.org/technology-news/

High-performance computing (or HPC) enables discoveries in practically every field of science—not just those typically associated with supercomputers like chemistry and physics, but also in the social sciences, life sciences and humanities. By combining superfast and secure networks, cutting-edge parallel computing and analytics software, advanced scientific instruments and critical datasets across the U.S., NSF's cyber-ecosystem lets researchers investigate questions that can't otherwise be explored. NSF has supported advanced computing since its beginning and is constantly expanding access to these resources. This access helps tens of thousands of researchers each year—from high-school students to Nobel Prize winners—expand the frontiers of science and engineering, regardless of whether their institutions are large or small, or where they are located geographically. Below are 10 examples of research enabled by NSF-supported advanced computing resources from across all of science.

Pineapples don't just taste good—they have a juicy evolutionary history. Recent analyses using computing resources that are part of the iPlant Collaborative revealed an important relationship between pineapples and crops like sorghum and rice, allowing scientists to home in on the genes and genetic pathways that allow plants to thrive in water-limited environments. Led by the University of Arizona, Texas Advanced Computing Center, Cold Spring Harbor Laboratory and University of North Carolina at Wilmington, iPlant was established in 2008 with NSF funding to develop cyberinfrastructure for life sciences research, provide powerful platforms for data storage and bioinformatics and democratize access to U.S. supercomputing capabilities. This week, iPlant announced it will host a new platform, Digital Imaging of Root Traits (DIRT), that lets scientists in the field measure up to 76 root traits merely by uploading a photograph of a plant's roots.

Software that simulates the effect of an electric charge passing through a transistor—only a few atoms wide—is helping researchers to explore alternative materials that may replace silicon in future nanodevices. The software simulations designed by Purdue researcher Gerhard Klimeck and his group, available on the nanoHUB portal, provide new information about the limits of current semiconductor technologies and are helping design future generations of nanoelectronic devices. NanoHUB, supported by NSF, is the first broadly successful, scientific end-to-end cloud computing environment. It provides a library of 3,000 learning resources to 195,000 users worldwide. Its 232 simulation tools are used in the cloud by over 10,800 researchers and students annually.

Earthquakes originate through complex interactions deep below the surface of the Earth, making them notoriously difficult to predict. The Southern California Earthquake Center (SCEC) and its lead scientist Thomas Jordan use massive computing power to simulate the dynamics of earthquakes. In doing so, SCEC helps to provide long-term earthquake forecasts and more accurate hazard assessments. In 2014, the SCEC team investigated the earthquake potential of the Los Angeles Basin, where the Pacific and North American Plates run into each other at the San Andreas Fault. Their simulations showed that the basin essentially acts like a big bowl of jelly that shakes during earthquakes, producing more high-shaking ground motions than the team expected.
Using the NSF-funded Blue Waters supercomputer at the National Center for Supercomputing Applications and the Department of Energy-funded Titan supercomputer at the Oak Ridge Leadership Computing Facility, the researchers turned their simulations into seismic hazard models. These models describe the probability of an earthquake occurring in a given geographic area, within a given window of time and with ground motion intensity exceeding a given threshold.

Nearly 33,000 people die in the U.S. each year due to motor vehicle crashes, according to the National Highway Traffic Safety Administration. Modern restraint systems save lives, but some deaths and injuries remain—and restraints themselves can cause injuries. Researchers from the Center for Injury Biomechanics at Wake Forest University used the Blacklight supercomputer at the Pittsburgh Supercomputing Center to simulate the impacts of car crashes with much greater fidelity than crash-test dummies can. By studying a variety of potential occupant positions, they're uncovering important factors that lead to more severe injuries, as well as ways to potentially mitigate these injuries, using advanced safety systems.

Since Albert Einstein, scientists have believed that when major galactic events like black hole mergers occur, they leave a trace in the form of gravitational waves—ripples in the curvature of space-time that travel outward from the source. Advanced LIGO is a project designed to capture signs of these events. Since gravitational waves are expected to travel at the speed of light, detecting them requires two gravitational wave observatories, located 1,865 miles apart and working in unison, that can triangulate the gravitational wave signals and determine the source of the wave in the sky. In addition to being an astronomical challenge, Advanced LIGO is also a "big data" problem. The observatories take in huge volumes of data that must be analyzed to determine their meaning. Researchers estimate that Advanced LIGO will generate more than 1 petabyte of data a year, the equivalent of 13.3 years' worth of high-definition video. To achieve accurate and rapid gravitational wave detection, researchers use the Extreme Science and Engineering Discovery Environment (XSEDE)—a powerful collection of advanced digital resources and services—to develop and test new methods for transmitting and analyzing these massive quantities of astronomical data. Advanced LIGO came online in September and advanced computing will play an integral part in its future discoveries.

What happens when a supercomputer reaches retirement age? In many cases, it continues to make an impact in the world. The NSF-funded Ranger supercomputer is one such example. In 2013, after five years as one of NSF's flagship computer systems, the Texas Advanced Computing Center (TACC) disassembled Ranger and shipped it from Austin, Texas to South Africa, Tanzania and Botswana to give root to a young and growing supercomputing community. With funding from NSF, TACC experts led training sessions in South Africa in December 2014. In November 2015, 19 delegates from Africa came to the U.S. to attend a two-day workshop at TACC as well as the Supercomputing 2015 International Conference for High Performance Computing. The effort is intended, in part, to help provide the technical expertise needed to successfully staff and operate the Square Kilometre Array, a new radio telescope being built in Australia and Africa which will offer the highest resolution images in all of astronomy.
In September 2015, President Obama announced plans to improve maps and elevation models of the Arctic, including Alaska. To that end, NSF and the National Geospatial-Intelligence Agency (NGA) are supporting the development of high-resolution Digital Elevation Models in order to provide consistent coverage of the globally significant region. The models will allow researchers to see in detail how warming in the region affects the landscape in remote areas, and allow them to compare changes over time. The project relies, in part, on the computing and data analysis powers of Blue Waters, which will let researchers store, access and analyze large numbers of images and models.

To solve some of society's most pressing long-term problems, the U.S. needs to educate and train the next generation of scientists and engineers to use advanced computing effectively. This pipeline of training begins as early as high school and continues throughout the careers of scientists. Last summer, TACC hosted 50 rising high school juniors and seniors to participate in an innovative new STEM program, CODE@TACC. The program introduced students to high-performance computing, life sciences and robotics. On the continuing education front, XSEDE offers hundreds of training classes each year to help researchers update their skills and learn new ones.

High-performance computing has another use in education: to assess how students learn and ultimately to provide personalized educational paths. A recent report from the Computing Research Association, "Data-Intensive Research in Education: Current Work and Next Steps," highlights insights from two workshops on data-intensive education initiatives. The LearnSphere project at Carnegie Mellon University, an NSF Data Infrastructure Building Blocks project, is putting these ideas into practice.

Experimenting with cloud computing on new platforms

In 2014, NSF invested $20 million to create two cloud computing testbeds that let the academic research community develop and experiment with cloud architectures and pursue new, architecturally-enabled applications of cloud computing. CloudLab (with sites in Utah, Wisconsin and South Carolina) came online in May 2015 and provides researchers with the ability to create custom clouds and test adjustments at all levels of the infrastructure, from the bare metal on up. Chameleon, a large-scale, reconfigurable experimental environment for cloud research, co-located at the University of Chicago and The University of Texas at Austin, went into production in July 2015. Both serve hundreds of researchers at universities across the U.S. and let computer scientists experiment with unique cloud architectures in ways that weren't available before.

The NSF-supported "Comet" system at the San Diego Supercomputer Center (SDSC) was dedicated in October and is already aiding scientists in a number of fields, including domains relatively new for supercomputer integration, such as neuroscience. SDSC recently received a major grant to expand the Neuroscience Gateway, which provides easy access to advanced cyberinfrastructure tools and resources through a web-based portal, and can significantly improve the productivity of researchers. The gateway will contribute to the national BRAIN Initiative and deepen our understanding of the human brain.
