Malibu, CA, United States

Wood B.M., Louisiana State University | Ham K., Louisiana State University | Hussey D.S., U.S. National Institute of Standards and Technology | Jacobson D.L., U.S. National Institute of Standards and Technology | And 6 more authors.
Nuclear Instruments and Methods in Physics Research, Section B: Beam Interactions with Materials and Atoms | Year: 2014

The uptake of hydrogen by lanthanum pentanickel (LaNi5) to form lanthanum nickel hydride (LaNi5H6) is followed with three-dimensional imaging by neutron tomography. The hydrogen absorption process is slower than the time needed to acquire a single radiograph (about 10 s) but fast relative to the time needed to acquire a fully sampled tomographic data set (about 6000 s). A novel data-acquisition scheme is used in which the projection angles are based on the golden ratio, allowing considerable flexibility in post-acquisition tomographic reconstruction. Even with tomographic undersampling, the granular structure of the conversion of LaNi5 particles to LaNi5H6 particles is observed and visually tracked in 3D. Over the course of five sequential hydrogen uptake runs with various initial hydrogen pressures, some grains are repeatedly observed. © 2014 Elsevier B.V. All rights reserved.
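As an illustration of the golden-ratio acquisition scheme described above, the following minimal Python sketch (not the authors' code; the angular range and projection count are assumptions) generates projection angles spaced by the golden angle, so that any contiguous block of projections samples the angular range nearly uniformly and can be reconstructed on its own:

import numpy as np

def golden_ratio_angles(n_projections, angular_range=180.0):
    """Projection angles (degrees) spaced by the golden ratio."""
    phi = (1.0 + np.sqrt(5.0)) / 2.0       # golden ratio, ~1.618
    increment = angular_range / phi         # golden-angle step, ~111.25 deg for a 180 deg range
    return (np.arange(n_projections) * increment) % angular_range

# Any window of consecutive projections (say, 200 of them) covers [0, 180)
# roughly uniformly, which is what permits flexible post-acquisition
# reconstruction of a sample that evolves during the scan.
angles = golden_ratio_angles(600)
print(angles[:5])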


Mundhenk T.N., HRL Labs LLC | Sundareswara R., HRL Labs LLC | Gerwe D.R., Boeing Company | Chen Y., HRL Labs LLC
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2011

Super-resolution image reconstruction enhances images in a video sequence beyond the original pixel resolution of the imager. Difficulty arises when foreground objects move differently than the background; a common example is a car in motion in a video. Given how often such situations occur, super-resolution reconstruction becomes non-trivial. One method for dealing with this is to segment out foreground objects and quantify their pixel motion separately. First, we estimate local pixel motion using a standard block-motion algorithm common to MPEG encoding. This motion estimate is then combined with the image itself in a five-dimensional, mean-shift kernel density estimation based segmentation that mixes motion and color image features. The result is a tight segmentation of objects in terms of both motion and visible image features. The next step is to combine segments into a single master object: statistically common motion and proximity are used to merge segments into master objects. To account for inconsistencies that can arise when tracking objects, we compute statistics over each object and fit them with a generalized linear model. Using the Kullback-Leibler divergence, we obtain a metric for the goodness of an object's track between frames. © 2011 SPIE-IS&T.
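The Kullback-Leibler track-goodness metric mentioned above can be illustrated with a short Python sketch (an illustrative assumption, not the paper's implementation): fit a Gaussian to an object's feature statistics in consecutive frames and score the track by the divergence between the two fits, a small value indicating a consistent track:

import numpy as np

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """KL(p || q) between two multivariate Gaussian fits."""
    k = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(cov_q_inv @ cov_p)
                  + diff @ cov_q_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

# Hypothetical per-pixel object features (e.g. color plus motion components):
feats_t0 = np.random.randn(500, 5)
feats_t1 = feats_t0 + 0.05 * np.random.randn(500, 5)
kl = gaussian_kl(feats_t0.mean(axis=0), np.cov(feats_t0.T),
                 feats_t1.mean(axis=0), np.cov(feats_t1.T))
print(f"track-goodness divergence: {kl:.4f}")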


Wang S., HRL Labs LLC | Verbrugge M., General Motors | Vu L., HRL Labs LLC | Baker D., General Motors | Wang J.S., HRL Labs LLC
Journal of the Electrochemical Society | Year: 2013

We have developed a battery state estimator based on a finite impulse response filter. Simulation results indicate that the estimator gives accurate prediction and numerically-stable performance in the regression of filter coefficients and the open-circuit potential, which yields the battery state of charge. The estimator is also able to predict battery power capabilities. Comparison of the measured and predicted state of charge (SOC) and the charge and discharge power capabilities (state of power, SOP) of a Li-ion battery are provided. Predictions for the SOC and SOP agree well with experimental measurements, demonstrating the estimator's application in battery management systems. In particular, this new approach appears to be more flexible than previous models; we show that it can capture the behavior of batteries governed by various physicochemical phenomena. © 2013 The Electrochemical Society.
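A minimal Python sketch of the general idea (an assumption about the approach, not the authors' exact formulation): model the terminal voltage as an open-circuit potential plus a finite-impulse-response term driven by the current history, regress the FIR coefficients and the open-circuit potential jointly by least squares, and map the recovered potential to state of charge through a hypothetical OCV-SOC table:

import numpy as np

def fir_state_estimate(voltage, current, n_taps=10):
    """Least-squares fit of V[t] ~ ocv + sum_i b[i] * I[t - i]."""
    voltage, current = np.asarray(voltage), np.asarray(current)
    rows = [np.concatenate(([1.0], current[t - n_taps + 1:t + 1][::-1]))
            for t in range(n_taps - 1, len(voltage))]
    theta, *_ = np.linalg.lstsq(np.array(rows), voltage[n_taps - 1:], rcond=None)
    ocv, fir_coeffs = theta[0], theta[1:]
    return ocv, fir_coeffs

def soc_from_ocv(ocv, ocv_table=(3.0, 4.2), soc_table=(0.0, 1.0)):
    """Hypothetical OCV-to-SOC map; a real one comes from cell characterization."""
    return float(np.interp(ocv, ocv_table, soc_table))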


Wang J.S., HRL Labs LLC | Liu P., HRL Labs LLC | Soukiazian S., HRL Labs LLC | Tataria H., General Motors | And 4 more authors.
Journal of Power Sources | Year: 2014

Start-stop systems require the battery to provide high power, endure shallow cycling, and exhibit long cycle life. The LFP/LTO (lithium iron phosphate/lithium titanate) battery is a potential candidate to meet such requirements because, at room temperature, both materials can be operated at high rate and have good stability (calendar and cycle life). In this work, we have investigated the feasibility of using LixFePO4/Li4+3yTi5O12 (0 < x < 1, 0 < y < 1) lithium-ion batteries for start-stop systems. We evaluate both the rate and temperature dependence of LFP/LTO cells subjected to galvanostatic charge/discharge cycling. Excellent rate performance was observed at and above ambient temperature. However, at low temperatures significant resistance is observed, and this must be addressed for the LFP/LTO system to be viable. In addition, we investigate the SOC dependence of equivalent-circuit parameters using a triangular current and voltage excitation method to facilitate the implementation of circuit-based control algorithms for vehicle applications. Parameter values are nearly constant over the broad voltage-plateau region associated with the substantially two-phase behavior of both the LFP and LTO materials. © 2014 Published by Elsevier B.V.
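For readers implementing circuit-based control, the following Python sketch (hypothetical parameter values, not data from the paper) simulates a first-order equivalent circuit, a series resistance R0 plus an R1-C1 pair on top of the open-circuit voltage, driven by a triangular current excitation of the kind used above to extract SOC-dependent parameters:

import numpy as np

def rc_circuit_response(current, dt, ocv, r0, r1, c1):
    """Terminal voltage of OCV + R0 + (R1 || C1) under a current profile."""
    alpha = np.exp(-dt / (r1 * c1))     # per-step relaxation of the RC branch
    v_rc = 0.0
    voltage = np.empty_like(current, dtype=float)
    for k, i_k in enumerate(current):
        v_rc = alpha * v_rc + r1 * (1.0 - alpha) * i_k
        voltage[k] = ocv + r0 * i_k + v_rc
    return voltage

# Hypothetical +/- 1 A triangular excitation with a 20 s period, 0.1 s steps:
dt = 0.1
t = np.arange(0.0, 60.0, dt)
i_tri = 2.0 * np.abs((t / 10.0) % 2 - 1.0) - 1.0
v = rc_circuit_response(i_tri, dt, ocv=1.9, r0=0.02, r1=0.015, c1=2000.0)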


Mundhenk T.N., HRL Labs LLC | Ni K.-Y., HRL Labs LLC | Chen Y., HRL Labs LLC | Kim K., HRL Labs LLC | Owechko Y., HRL Labs LLC
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2012

An aerial multiple-camera tracking paradigm must not only spot and track unknown targets, but also handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system, designed to spot unknown targets, track them, segment the useful features, and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion, allowing it to find targets in motion even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean-shift segmentation tool. Whole segments that share common motion and are contiguous with each other are grouped into a master object. Once master objects are formed, we have a tight bound within which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features, which can be placed into a myriad of different fingerprints. To keep data transmission and storage requirements low for camera handoff of targets, we try several simple techniques: a histogram, a spatiogram, and a single Gaussian model. These are tested by simulating a very large number of target losses in six videos, over an interval of 1000 frames each, from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints, that is, how long a fingerprint remains useful when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us whether a fingerprint method has better accuracy over longer periods. In videos that contain multiple vehicle occlusions and vehicles of highly similar appearance, we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model, compared with the null hypothesis of <20%. Additionally, the performance of the fingerprints stays well above the null hypothesis for as many as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to viewpoint and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumptions about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples. © 2012 SPIE-IS&T.
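The single Gaussian fingerprint can be sketched in a few lines of Python (an illustrative assumption, not the paper's EDISON-based pipeline): summarize a segmented object's color and entropy features by their mean and covariance, and compare a stored fingerprint against candidates with the Bhattacharyya distance, a standard divergence for Gaussians (the paper does not name its comparison metric):

import numpy as np

def gaussian_fingerprint(features):
    """features: (n_pixels, n_dims) per-pixel descriptors for one object."""
    return features.mean(axis=0), np.cov(features.T)

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Distance between two Gaussian fingerprints (smaller = more similar)."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term_mean = 0.125 * diff @ np.linalg.inv(cov) @ diff
    term_cov = 0.5 * np.log(np.linalg.det(cov) /
                            np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term_mean + term_cov

# Reacquisition: among candidate detections, pick the one whose fingerprint is
# closest to the stored fingerprint, subject to an acceptance threshold.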


Mundhenk T.N., HRL Labs LLC | Baron J., HRL Labs LLC | Matic R.M., HRL Labs LLC
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2011

Passive millimeter wavelength (PMMW) video holds great promise given its ability to see targets and obstacles through fog, smoke and rain. However, current imagers produce undesirable complex noise, which can come as a mixture of fast shot (snow-like) noise and a slower-forming circular fixed pattern. Shot noise can be removed by a simple gain-style filter, but this can blur objects in the scene. To alleviate this, we measure the amount of Bayesian surprise in videos. Bayesian surprise is feature change in time that is abrupt but cannot be accounted for as shot noise. Surprise is used to attenuate the shot noise filter in locations of high surprise. Since high Bayesian surprise in videos is very salient to observers, this reduces blurring, particularly in places where people visually attend. Fixed pattern noise is removed after the shot noise using a combination of non-uniformity correction (NUC) and eigen-image wavelet transformation. The combination allows online removal of time-varying fixed pattern noise even when background motion may be absent. It also allows online adaptation to differing intensities of fixed pattern noise. The fixed pattern and shot noise filters are all efficient, allowing real-time processing of PMMW video. We show several examples of PMMW video with complex noise that is much cleaner as a result of the noise removal. Processed video clearly shows cars, houses, trees and utility poles at 20 frames per second. © 2011 SPIE.
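The surprise-gated filtering idea can be illustrated with a simplified Python sketch (a Gaussian simplification, not the authors' exact surprise model): keep a running Gaussian belief for each pixel's intensity, update it with every new frame, and score surprise as the KL divergence between the prior and posterior beliefs; high-surprise pixels would then attenuate the shot-noise filter so genuine scene changes are not blurred away:

import numpy as np

def update_and_surprise(mu, var, frame, obs_var=25.0):
    """One Bayesian update per pixel; returns the new belief and a surprise map."""
    # Conjugate update of a Gaussian mean with known observation variance
    post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    post_mu = post_var * (mu / var + frame / obs_var)
    # KL(prior || posterior) for univariate Gaussians, evaluated pixel-wise
    surprise = 0.5 * (np.log(post_var / var)
                      + (var + (mu - post_mu) ** 2) / post_var
                      - 1.0)
    return post_mu, post_var, surprise

# Hypothetical usage on a stream of frames:
mu = np.zeros((240, 320))
var = np.full((240, 320), 100.0)
frame = np.random.randn(240, 320) * 5.0
mu, var, surprise = update_and_surprise(mu, var, frame)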


Mundhenk T.N., HRL Labs LLC | Baron J., HRL Labs LLC | Matic R.M., HRL Labs LLC
Optical Engineering | Year: 2012

Passive millimeter wavelength (PMMW) video holds great promise, given its ability to see targets and obstacles through fog, smoke, and rain. However, current imagers produce undesirable complex noise. This can come as a mixture of fast shot (snowlike) noise and a slower-forming circular fixed pattern. Shot noise can be removed by a simple gain style filter. However, this can produce blurring of objects in the scene. To alleviate this, we measure the amount of Bayesian surprise in videos. Bayesian surprise measures feature change in time that is abrupt but cannot be accounted for as shot noise. Surprise is used to attenuate the shot noise filter in locations of high surprise. Since high Bayesian surprise in videos is very salient to observers, this reduces blurring, particularly in places where people visually attend. Fixed pattern noise is removed after the shot noise using a combination of nonuniformity correction and mean image wavelet transformation. The combination allows for online removal of time-varying fixed pattern noise, even when background motion may be absent. It also allows for online adaptation to differing intensities of fixed pattern noise. We also discuss a method for sharpening frames using deconvolution. The fixed pattern and shot noise filters are all efficient, which allows real time video processing of PMMW video. We show several examples of PMMW video with complex noise that is much cleaner as a result of the noise removal. Processed video clearly shows cars, houses, trees, and utility poles at 20 frames per second. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE).
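The deconvolution-based sharpening step mentioned here is not specified in detail; a minimal Python sketch of one common choice, Wiener deconvolution in the frequency domain (an assumption, with a hypothetical Gaussian point-spread function standing in for the imager's blur), is:

import numpy as np

def wiener_deconvolve(frame, psf, noise_to_signal=0.01):
    """Frequency-domain Wiener deconvolution of a 2-D frame."""
    padded = np.zeros_like(frame, dtype=float)
    kh, kw = psf.shape
    padded[:kh, :kw] = psf
    # Center the kernel at the origin so the result is not spatially shifted
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(padded)
    G = np.fft.fft2(frame)
    W = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(W * G))

# Hypothetical 9x9 Gaussian PSF and a random stand-in frame:
x = np.arange(9) - 4
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2.0 * 2.0 ** 2))
psf /= psf.sum()
frame = np.random.rand(240, 320)
sharpened = wiener_deconvolve(frame, psf)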
