Naval Surface Warfare Center Panama City Division

Panama City, FL, United States


Gazagnaire J.,Naval Surface Warfare Center Panama City Division
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2017

Detection and classification of targets is a process that we perform every day, often without realizing it: the ability to locate and recognize a specific face in a crowd, to identify a bird by its song, or to know what is for dinner simply by the aroma. Much effort has been spent trying to replicate these uncanny abilities of our senses through purpose-built sensors. For many of these sensing modalities, automatic target recognition algorithms have been developed to automate the process of locating and recognizing the occurrence of a specific state within the sensed data. This survey is an overview of the metrics used to assess and quantify the quality, or 'goodness', of the data, mainly for imaging sensors, from the perspective of automatic target recognition performance. Digital image data go through several transformations as they move from the recording sensor to personal electronic devices, resulting in distortions. There has been considerable research in the area of quality assessment of digital image and video media. Often, image quality assessment approaches rely on having the original image, or at least information about the original image, available for the evaluation. These approaches may also rely on knowing the source of distortion. Accuracy is typically gauged against quality judgments made by the human visual system; as a consequence, the metrics measure aspects of the image that are tied closely to the human perception of quality. Evaluating image quality from an automatic target recognition perspective poses many challenges, as the original data are not likely to be available. Additionally, there may be multiple unknown sources of image quality degradation, forcing the need for a general solution. Finally, the human perception of 'good quality' may not necessarily correlate with optimal image quality for automatic target recognition performance, in which case metrics modeled after the human visual system may not be the best choice for image quality assessment. The goal of this survey is to provide a high-level background and overview of how image quality assessment is currently performed for different sensing modalities. Additionally, the challenges being addressed within the various disciplines are discussed and commonalities are highlighted. © 2017 SPIE.
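The full-reference versus no-reference distinction drawn above can be made concrete with a small sketch (an assumed illustration, not taken from the survey): PSNR requires the pristine original image to score a distorted one, while a simple no-reference proxy such as the variance of a Laplacian response scores an image on its own, which is the situation the ATR setting forces.

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Full-reference metric: requires the pristine original image."""
    mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def laplacian_sharpness(image):
    """No-reference proxy: variance of a discrete Laplacian response.
    Usable when, as in the ATR setting, no original is available."""
    img = image.astype(float)
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = clean + rng.normal(0, 10, size=clean.shape)
print(psnr(clean, noisy))            # needs `clean` to evaluate `noisy`
print(laplacian_sharpness(noisy))    # needs only `noisy`
```

Note that the no-reference score is only a proxy: it says nothing about fidelity to the scene, which is exactly why general-purpose no-reference assessment remains an open problem.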

Cook M.,Naval Surface Warfare Center Panama City Division | Marchand B.,Naval Surface Warfare Center Panama City Division
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2017

For sonar automatic target recognition problems, the number of mine-like objects available is relatively small compared to the number of non-mine-like objects. This creates a heavy bias towards non-mine-like objects and increases the processing resources needed for classifier training. In order to reduce resource needs and the bias towards non-mine-like objects, we investigate selection methods for reducing the non-mine-like target samples while maintaining as much of the original training information as possible. Specifically, we investigate methods for reducing sample size and bias while maintaining good classifier performance. Several methods covering a wide range of techniques are considered, including clustering and evolutionary algorithms. Each method is evaluated on the classifier performance obtained when training on the chosen data samples and on the execution time required to select the new training set. Results for each method tested are presented using data collected with a sidescan sonar system. © 2017 SPIE.
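As an illustration of the clustering family of selection methods mentioned above, here is a minimal sketch (an assumption for illustration, not the authors' implementation) that undersamples the majority class by clustering it with k-means and keeping the one sample nearest each centroid:

```python
import numpy as np

def kmeans_undersample(clutter, k, iters=50, seed=0):
    """Reduce the majority (non-mine-like) class to at most k representatives:
    run plain k-means, then keep the sample nearest each centroid."""
    rng = np.random.default_rng(seed)
    centroids = clutter[rng.choice(len(clutter), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(clutter[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = clutter[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    d = np.linalg.norm(clutter[:, None, :] - centroids[None, :, :], axis=2)
    keep = np.unique(d.argmin(axis=0))   # nearest sample to each centroid
    return clutter[keep]

rng = np.random.default_rng(1)
clutter = rng.normal(size=(500, 8))      # 500 majority-class feature vectors
reduced = kmeans_undersample(clutter, k=25)
print(reduced.shape)                     # at most (25, 8)
```

Keeping real samples (rather than the centroids themselves) preserves points that actually occur in the data, which matters when the classifier is sensitive to the training distribution.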

Marchand B.,Naval Surface Warfare Center Panama City Division | G-Michael T.,Naval Surface Warfare Center Panama City Division
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2017

Some synthetic aperture imaging techniques have strict path requirements and must be processed in overlapping chunks. Path adjustments applied to the images can decorrelate and alter the appearance of overlapping regions, which can make mosaicking the image chunks for post-mission analysis challenging. In this paper, we investigate the application of automatic change detection techniques to the problem of high-accuracy, multi-band mosaicking. We describe and demonstrate the aspects of synthetic aperture sonar (SAS) change detection, in particular automated change detection (ACD) co-registration processing, that enable the production of good-quality seabed maps, known as mosaics, in which individual SAS runs from different traverses are pieced together in adjacent positions to form a complete image. © 2017 SPIE.
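Co-registration of overlapping chunks is commonly estimated with phase correlation; the following is a minimal sketch under that assumption (the paper's actual ACD co-registration pipeline is not specified here, and real SAS chunks would also need sub-pixel and non-rigid refinement):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (row, col) translation between two overlapping
    image chunks via the normalized cross-power spectrum (phase correlation)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

rng = np.random.default_rng(2)
chunk = rng.normal(size=(128, 128))
shifted = np.roll(chunk, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(shifted, chunk))  # recovers (5, -3)
```

The recovered offsets are what a mosaicking step would apply before blending adjacent chunks into a seamless seabed map.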

G-Michael T.,Naval Surface Warfare Center Panama City Division | Roberts R.G.,Florida State University
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2017

Change detection is the process of detecting changes between pairs of multi-temporal sonar images surveyed from approximately the same location. The problem of change detection and subsequent anomaly feature extraction is complicated by several factors, such as the presence of random speckle patterns in the images, variability in seafloor environmental conditions, and platform instabilities. These complications make the detection and classification of targets difficult. In this paper, we propose a change detection technique for multi-temporal synthetic aperture sonar (SAS) images based on independent component analysis (ICA). ICA is a well-established statistical signal processing technique that aims at decomposing a set of multivariate signals (in our case, SAS images) into a basis of statistically independent data vectors with minimal loss of information content. The goal of ICA is to linearly transform the data such that the transformed variables are as statistically independent from each other as possible. ICA reduces the second- and higher-order dependencies among the multi-temporal images and removes their correlation; removing these dependencies leaves the change features, which are then further analyzed for detection and classification. Test results of the proposed method on SAS image snippets of declared changes from the automated change detection (ACD) process are presented. These results illustrate the effectiveness of ICA for reducing false alarms in the ACD process. © 2017 SPIE.
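The decomposition step can be sketched with a minimal symmetric FastICA in plain numpy (tanh nonlinearity). The data below are hypothetical stand-ins, not the paper's SAS snippets: two "views" formed as mixtures of a shared background signal and an independent change signal, which ICA then separates.

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=0):
    """Minimal symmetric FastICA (tanh nonlinearity); rows of X are signals.
    A sketch of the decomposition step only, not the authors' full pipeline."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten: decorrelate the variables and scale them to unit variance.
    vals, vecs = np.linalg.eigh(np.cov(X))
    K = vecs @ np.diag(vals ** -0.5) @ vecs.T
    Z = K @ X
    n = Z.shape[0]
    W = rng.normal(size=(n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric orthogonalization keeps the unmixing rows decorrelated.
        U, _, Vt = np.linalg.svd(W_new)
        W = U @ Vt
    return W @ Z   # estimated independent components

# Hypothetical stand-in for two co-registered snippets: mixtures of a common
# background signal and an independent change signal.
rng = np.random.default_rng(3)
t = np.linspace(0, 8 * np.pi, 2000)
background, change = np.sign(np.sin(t)), rng.uniform(-1, 1, size=t.size)
X = np.array([[1.0, 0.4], [0.8, 1.0]]) @ np.vstack([background, change])
S_est = fast_ica(X)
```

Up to sign and ordering, the recovered components match the background and the change signal; in the change detection setting, the component uncorrelated with the common background is the one carrying the change features.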

Bryner D.,Naval Surface Warfare Center Panama City Division | Bryner D.,Florida State University | Srivastava A.,Florida State University | Huynh Q.,Naval Surface Warfare Center Panama City Division
Computer Vision and Image Understanding | Year: 2013

We present a variational framework for naturally incorporating prior shape knowledge in the guidance of active contours for boundary extraction in images. This framework is especially suitable for images collected outside the visible spectrum, where boundary estimation is difficult due to low contrast, low resolution, and the presence of noise and clutter. Accordingly, we illustrate this approach using the segmentation of various objects in synthetic aperture sonar (SAS) images of underwater terrains. We use elastic shape analysis of planar curves, in which shapes are considered as elements of a quotient space of an infinite-dimensional, non-linear Riemannian manifold. Using geodesic paths under the elastic Riemannian metric, one computes sample means and covariances of training shapes in each class and derives statistical models for capturing class-specific shape variability. These models are then used as shape priors in a variational setting to obtain Bayesian estimates of the desired contours as follows. In traditional active contour models, curves are driven towards the minimum of an energy composed of image and smoothing terms. We introduce an additional shape term based on shape models of the relevant shape classes. The minimization of this total energy, using iterated gradient-based updates of curves, leads to an improved segmentation of object boundaries. This is demonstrated using a number of shape classes in two large SAS image datasets.
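Schematically, the total energy described above (the notation here is assumed for illustration, not the authors') combines the traditional image and smoothing terms with the new class-specific shape term:

```latex
E(\beta) \;=\; E_{\text{image}}(\beta) \;+\; \lambda_1\, E_{\text{smooth}}(\beta) \;+\; \lambda_2\, E_{\text{prior}}(\beta),
\qquad
E_{\text{prior}}(\beta) \;=\; -\log p\!\left([\beta] \mid \text{class}\right),
```

where $\beta$ is the evolving curve, $[\beta]$ its shape (the curve modulo translation, scale, rotation, and re-parameterization), and $p$ the class-specific shape density built from the training means and covariances. Gradient descent on $E$ then pulls the curve toward image edges while the prior term pulls its shape toward the learned class.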

Johnson B.A.,Naval Surface Warfare Center Panama City Division | Matthews C.,Naval Surface Warfare Center Panama City Division
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2016

The Art Gallery Problem (AGP) is the name given to a constrained optimization problem meant to determine the maximum amount of sensor coverage while utilizing the minimum number of resources. The AGP is significant because a common issue among surveillance and interdiction systems is determining the optimal position of sensors and weapons in advance of enemy combatant maneuvers. The optimal position for a sensor to observe an event, or for a weapon to engage a target autonomously, is usually very clear only after the target has passed; for autonomous systems, however, a solution must at least be conjectured in advance for deployment purposes. This paper applies the AGP as a means of determining where best to place underwater sensor nodes such that the amount of information acquired about a covered area is maximized while the number of resources used to gain that information is minimized. By phrasing the ISR/interdiction problem this way, the issue is addressed as an instance of the AGP. The AGP is a member of a set of computational problems designated as nondeterministic polynomial-time (NP)-hard. As a member of this set, the AGP shares its members' defining feature: no polynomial-time algorithm for it is known, and none exists unless P = NP. At best, an algorithm meant to solve the AGP can asymptotically approach perfect coverage with minimal resource usage; providing perfect coverage would either break the minimal-resource-usage constraint or require an amount of computation that grows exponentially with problem size. No perfectly optimal solution to the AGP yet exists; however, approximately optimal solutions can approach complete area or barrier coverage while simultaneously minimizing the number of sensors and weapons utilized.
Deploying a minimal number of underwater sensor nodes can greatly increase the Mean Time Between Operational Failure (MTBOF) and reduce the logistical footprint. The resulting coverage optimizes the likelihood of encounter given an arbitrary sensor profile and threat, using a free-field statistical model approach. The free-field statistical model is particularly applicable to worst-case-scenario modeling in open-ocean operational profiles, where targets do not follow a particular pattern in any of the modeled dimensions. We present an algorithmic testbed that shows how to achieve approximately optimal solutions to the AGP for a network of underwater sensor nodes, with or without effector systems for engagement, while operating under changing environmental circumstances. The means by which we accomplish this goal are three-fold: 1) develop a 3D model for the sonar signal propagating through the underwater environment; 2) add rigorous physics-based modeling of environmental events that can affect sensor information acquisition; and 3) provide innovative solutions to the AGP that account for the environmental circumstances affecting sensor performance. © 2016 SPIE.
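One standard route to an approximately optimal AGP-style placement is the greedy set-cover heuristic, which comes within a ln(n) factor of the optimal number of nodes. A minimal sketch under assumed 2-D geometry and a fixed detection radius (the paper's 3D sonar-propagation and environmental modeling are not reproduced here):

```python
import itertools
import numpy as np

def greedy_sensor_placement(candidates, targets, radius):
    """Greedy approximation for an AGP-style placement: repeatedly pick the
    candidate node covering the most still-uncovered target points."""
    covers = [set(np.flatnonzero(
                  np.linalg.norm(targets - c, axis=1) <= radius))
              for c in candidates]
    uncovered = set(range(len(targets)))
    chosen = []
    while uncovered:
        best = max(range(len(candidates)),
                   key=lambda i: len(covers[i] & uncovered))
        if not covers[best] & uncovered:
            break   # remaining points are unreachable at this radius
        chosen.append(best)
        uncovered -= covers[best]
    return chosen

# Hypothetical 2-D stand-in for an operating area: a grid of points to watch,
# sensors placeable on a coarser grid, and a fixed detection radius.
targets = np.array(list(itertools.product(range(10), range(10))), dtype=float)
candidates = np.array(list(itertools.product(range(0, 10, 2), range(0, 10, 2))),
                      dtype=float)
nodes = greedy_sensor_placement(candidates, targets, radius=3.0)
print(len(nodes))   # sensors selected for full coverage of the grid
```

Replacing the fixed radius with a physics-based detection footprint (step 1 and 2 of the approach above) changes only the `covers` computation; the greedy selection itself is unchanged.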

Cobb J.T.,Naval Surface Warfare Center Panama City Division | Zare A.,University of Missouri
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2013

In this paper we describe an unsupervised approach to seabed co-segmentation over the multiple sonar images collected in sonar surveys. We adapt a traditional single-image, texton-based segmentation approach to the sonar survey task by modifying the texture-extraction filter bank to better model possible sonar image textures. Two different algorithms for building a universal texton library are presented that produce common pixel labels across multiple images. Following pixel labeling with the universal texton library, images are quantized into superpixels and co-segmented using a DP clustering algorithm. The segmentation results for the two texton library selection criteria are compared and contrasted on a labeled set of SAS images with various discernible textures. © 2013 SPIE.
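The universal-texton idea — pooling filter responses from all survey images before clustering, so that pixel labels are comparable across images — can be sketched as follows (with an assumed toy filter bank and plain k-means, not the paper's specific bank or library-selection criteria):

```python
import numpy as np

def conv2_same(img, kernel):
    """FFT-based same-size circular convolution; sufficient for a sketch."""
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)).real

def texton_labels(images, filters, n_textons, iters=30, seed=0):
    """Universal texton library: pool filter responses from ALL survey images,
    cluster them jointly, then label every pixel against the shared centers."""
    stacks = [np.stack([conv2_same(im, f) for f in filters], axis=-1)
              for im in images]
    pooled = np.concatenate([s.reshape(-1, len(filters)) for s in stacks])
    rng = np.random.default_rng(seed)
    centers = pooled[rng.choice(len(pooled), n_textons, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(pooled[:, None] - centers[None], axis=2)
        lab = d.argmin(axis=1)
        for j in range(n_textons):
            if np.any(lab == j):
                centers[j] = pooled[lab == j].mean(axis=0)
    return [np.linalg.norm(s.reshape(-1, len(filters))[:, None] - centers[None],
                           axis=2).argmin(axis=1).reshape(s.shape[:2])
            for s in stacks]

# Hypothetical filter bank: mean, horizontal-gradient, vertical-gradient kernels.
filters = [np.ones((3, 3)) / 9.0,
           np.array([[-1, 0, 1]] * 3, dtype=float),
           np.array([[-1, 0, 1]] * 3, dtype=float).T]
rng = np.random.default_rng(4)
images = [rng.normal(size=(32, 32)) for _ in range(3)]
labels = texton_labels(images, filters, n_textons=4)
```

Because all images are labeled against the same centers, a texton label means the same texture in every image, which is the property the subsequent superpixel co-segmentation relies on.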

Suiter H.R.,Naval Surface Warfare Center Panama City Division
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2012

When a flat window is used between an air medium and one with a higher index of refraction, the surface becomes optically active and a number of aberrations are induced. One aberration affecting the optical control of a remotely-piloted underwater vehicle is the apparent pincushion distortion resulting from Snell's law at the interface. Small wide-angle lenses typically have the opposite problem, a barrel distortion caused by limitations in the number of lens surfaces and the constraints of cost. An experimental calibration is described in which the barrel distortion of the lens compensated for most of the inherent pincushion of the change in medium. ZEMAX™ models are used to elucidate this phenomenon with a published lens design.* With careful selection of the lens and an additional corrector, the resultant image can be made almost rectilinear, thus easing steering control and automatic target recognition. © 2012 SPIE.
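The pincushion at a flat port follows directly from refraction at the interface. A small sketch (standard Snell's-law optics, with an assumed seawater index of 1.333) shows the radial magnification growing off-axis, which is exactly the pincushion signature the barrel-distorted lens must counteract:

```python
import math

def apparent_angle_deg(theta_water_deg, n_water=1.333, n_air=1.0):
    """Snell's law at a flat port: an in-water ray at theta_water emerges
    into air at a larger angle, stretching the edges of the field."""
    s = n_water / n_air * math.sin(math.radians(theta_water_deg))
    if s >= 1.0:
        return None   # beyond the critical angle: total internal reflection
    return math.degrees(math.asin(s))

# The radial magnification tan(theta_air)/tan(theta_water) increases off-axis.
for theta in (5.0, 20.0, 35.0):
    mag = (math.tan(math.radians(apparent_angle_deg(theta)))
           / math.tan(math.radians(theta)))
    print(f"{theta:5.1f} deg: radial magnification {mag:.3f}")
```

Near the axis the magnification is close to the index ratio (about 1.33); at wider field angles it climbs steeply, so an uncorrected camera behind a flat port sees straight lines bowed outward.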

Lance R.M.,Naval Surface Warfare Center Panama City Division | Lance R.M.,Duke University | Bass C.R.,Duke University
Diving and Hyperbaric Medicine | Year: 2015

The first cases of underwater blast injury appeared in the scientific literature in 1917, and thousands of service members and civilians were injured or killed by underwater blast during WWII. The prevalence of underwater blast injuries and occupational blasting needs led to the development of many safety standards intended to prevent injury or death. Most of these standards were not supported by experimental data or testing. In this review, we describe existing standards, discuss their origins, and comprehensively compare their prescriptions. Surprisingly, we found that most safety standards had little or no scientific basis, and prescriptions often varied across standards by at least an order of magnitude. Many published standards trace back to a US Navy 500 psi guideline, which was intended to provide a peak pressure at which injuries were likely to occur. This standard itself seems to have been based upon a completely unfounded assertion that has propagated throughout the literature in subsequent years. Based on the limitations of the standards discussed, we outline future directions for underwater blast injury research, such as the compilation of epidemiological data to examine the actual injury risk to human beings subjected to underwater blasts. © 2015, South Pacific Underwater Medicine Society. All rights reserved.

News Article | June 4, 2016

U.S. Navy researchers have developed a high-tech, in-helmet display that will assist underwater personnel using augmented reality. The device not only shows design inspiration from the world of sci-fi but is said to increase the safety and efficiency of divers in the line of duty. Anyone who has taken up diving as a hobby already knows that the experience is fun, but it comes with a degree of clumsiness and isolation. Standard masks narrow the field of view, while neoprene gloves hinder precision. Professional divers could use any help they can get to make their work easier and more streamlined. The Navy acknowledges this need: it has built an underwater head-up display (HUD) prototype that allows divers to check their location and tap into sonar data by looking straight ahead, eliminating the need for a smartwatch display. The leader of the research team, Dennis Gallagher, says that "a capability similar to something from an Iron Man movie" is in store for those who use the new helmet. In short, all the relevant information can be viewed "within the helmet." Surface sources, such as a ship, can send information to the Divers Augmented Vision Display (DAVD). Future improvements to the device could bring sonar sensors mounted on the helmet, making it even easier to collect and display information. Underwater work usually involves low-light conditions or salty water; any additional visual cue can assist the process. The DAVD displays the image in front of both eyes to provide visual depth, which is essential to creating accurate augmented reality. The Navy's device is not the only HUD in military use; the technology is also available to fighter-jet pilots. However, creating a wireless, submerged version is a daunting task for researchers. The team is building the prototype at the Naval Surface Warfare Center Panama City Division.
Although still in its early stages, the product could grow to see widespread use both by rescue and military authorities and in the consumer market. In-water simulation testing is set for October, the Navy says. Other defense organizations, such as DARPA, are working on their own HUDs for ground forces, and you can read all about it in our coverage. © 2016 Tech Times, All rights reserved. Do not reproduce without permission.
