Sarnoff Corporation, with headquarters in West Windsor Township, New Jersey, was a research and development company specializing in vision, video, and semiconductor technology. It was named for David Sarnoff, the longtime leader of RCA and NBC. The cornerstone of the David Sarnoff Research Center near Princeton was laid just before the attack on Pearl Harbor in 1941. That facility, later the Sarnoff Corporation headquarters, was the site of several historic developments, notably color television, CMOS integrated circuit technology, electron microscopy, and many other important technologies affecting everyday life worldwide. After 47 years as RCA Laboratories, the central research laboratory of its corporate owner RCA, the David Sarnoff Research Center became Sarnoff Corporation, a wholly owned subsidiary of SRI International, in 1988. In January 2011, Sarnoff Corporation was integrated into its parent company, SRI International, which continues to engage in similar research and development activities at the Princeton, New Jersey facility.
Van Der Wal G.S., Sarnoff Corporation
Proceedings of SPIE - The International Society for Optical Engineering | Year: 2010
The Sarnoff Acadia® II is a powerful vision processing SoC (system-on-a-chip) developed specifically to support advanced vision applications where system size, weight, and/or power are severely constrained. This paper, targeted at vision system developers, presents a detailed technical overview of the Acadia® II, highlighting its architecture, processing capabilities, memory, and peripheral interfaces. All major subsystems are covered, including video preprocessing and specialized vision processing cores for multi-spectral image fusion, multi-resolution contrast normalization, noise coring, image warping, and motion estimation. Application processing via the MPCore®, an integrated set of four ARM®11 floating-point processors with associated peripheral interfaces, is presented in detail. The paper emphasizes the programmability of the Acadia® II while describing its ability to provide state-of-the-art real-time image processing in a small, power-optimized package. © 2010 Copyright SPIE - The International Society for Optical Engineering.
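The kind of multi-resolution fusion the Acadia® II accelerates in hardware can be illustrated in software. Below is a minimal NumPy sketch of Laplacian-pyramid image fusion, a classic multi-resolution approach to multi-spectral fusion; it illustrates the general technique only, not the chip's actual implementation, and all function names and parameters here are our own.

```python
import numpy as np

def _blur(img):
    """3x3 box blur with edge replication (stand-in for a Gaussian filter)."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def _down(img):
    """Blur, then drop every other row and column."""
    return _blur(img)[::2, ::2]

def _up(img, shape):
    """Nearest-neighbor upsample by 2, cropped to the target shape."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Band-pass pyramid: each level holds the detail lost by downsampling."""
    gp = [img.astype(float)]
    for _ in range(levels - 1):
        gp.append(_down(gp[-1]))
    lp = [gp[i] - _up(gp[i + 1], gp[i].shape) for i in range(levels - 1)]
    lp.append(gp[-1])  # coarsest low-pass residual
    return lp

def fuse(img_a, img_b, levels=4):
    """Per pixel and per band, keep the source with the larger detail magnitude."""
    la, lb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(la, lb)]
    out = fused[-1]
    for lvl in reversed(fused[:-1]):  # collapse from coarse to fine
        out = _up(out, lvl.shape) + lvl
    return out
```

Because selection happens per frequency band, the fused image can take, say, hot spots from a thermal channel and texture from a visible channel without visible seams.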
Bansal M., Sarnoff Corporation
Conference Proceedings: ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society | Year: 2010
Localizing blood vessels in eye images is a crucial step in the automated and objective diagnosis of eye diseases. Most previous research has focused on extracting the centerlines of vessels in large-field-of-view images. However, for diagnosing diseases of the optic disk region, such as glaucoma, small-field-of-view images have to be analyzed. One needs to identify not only the centerlines but also vessel widths, which vary widely in these images. We present an automatic technique for localizing vessels in small-field-of-view images using multi-scale matched filters. We also estimate local vessel properties, width and orientation, along the length of each vessel. Furthermore, we explicitly account for highlights on thick vessels (central reflexes), which are ignored in many previous works. Qualitative and quantitative results demonstrate the efficacy of our method; for example, vessel centers are localized with RMS and median errors of 2.11 and 1 pixels, respectively, in 700×700 images.
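To make the matched-filter idea concrete, here is a minimal NumPy sketch of multi-scale, multi-orientation matched filtering for dark vessels on a bright background, using zero-mean kernels with a Gaussian cross-section. It illustrates the general technique only; the paper's actual filters, scale selection, and central-reflex handling are not reproduced, and all names and parameters are illustrative.

```python
import numpy as np

def matched_kernel(sigma, length, angle):
    """Zero-mean matched filter: Gaussian cross-section across the vessel,
    constant along it, rotated to the given orientation (radians)."""
    half = int(np.ceil(3 * sigma + length / 2))
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(angle) + y * np.sin(angle)    # across the vessel
    v = -x * np.sin(angle) + y * np.cos(angle)   # along the vessel
    k = -np.exp(-u**2 / (2 * sigma**2)) * (np.abs(v) <= length / 2)
    support = np.abs(k) > 0
    k[support] -= k[support].mean()  # zero mean: flat regions give no response
    return k

def convolve2d(img, k):
    """Direct same-size correlation with edge replication (NumPy only)."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2,), (kw // 2,)), mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def vessel_response(img, sigmas=(1.0, 2.0), length=9, n_angles=6):
    """Max unnormalized response over scales (vessel widths) and orientations."""
    img = img.astype(float)
    best = np.full(img.shape, -np.inf)
    for sigma in sigmas:
        for angle in np.linspace(0, np.pi, n_angles, endpoint=False):
            best = np.maximum(best, convolve2d(img, matched_kernel(sigma, length, angle)))
    return best
```

Thresholding `vessel_response` gives candidate vessel pixels; the scale and angle that achieved the maximum at each pixel directly provide the local width and orientation estimates the abstract mentions.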
Seibert C.S., University of Notre Dame |
Hall D.C., University of Notre Dame |
Liang D., University of Notre Dame |
Liang D., University of California at Santa Barbara |
Shellenbarger Z.A., Sarnoff Corporation
IEEE Photonics Technology Letters | Year: 2010
We demonstrate the efficacy of oxidation smoothing of sidewall roughness in high-index-contrast AlGaAs heterostructure ridge waveguides via oxygen-enhanced nonselective wet thermal oxidation for reducing scattering loss. Single-mode waveguides of core widths between 1.5 and 2.2 μm are fabricated using both the inward growth of a ∼600-nm sidewall-smoothing native oxide outer cladding and, for comparison, encapsulation of an unoxidized etched ridge with a ∼600-nm deposited silicon oxide cladding layer. On average, measured loss coefficients are reduced by a factor of 2 with the oxidation smoothing process. © 2009 IEEE.
Pantoja M., Santa Clara University |
Ling N., Santa Clara University |
Kalva H., Florida Atlantic University |
Lee J.-B., Sarnoff Corporation
Multidimensional Systems and Signal Processing | Year: 2015
VC-1 is one of the three video coding standards for Blu-ray Disc, alongside MPEG-2 and H.264. In this paper, an efficient algorithm for transcoding VC-1 video to H.264 video is discussed: we present a transcoder that addresses I, P, B, and interlaced pictures. The main differences between the two standards are analyzed, and a solution for transforming from one to the other is presented. The paper proposes a pixel-domain transcoder that exploits the variable-size transform used in VC-1 to select the variable block size for motion compensation in H.264. We also discuss transcoding of high-profile video features, in particular adaptive-size transform and interlacing. Experimental results show that the pixel-domain approach reduces computational complexity by more than 50% compared to a cascaded one, with a negligible drop in peak signal-to-noise ratio (PSNR). © 2014, Springer Science+Business Media New York.
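The abstract's key idea, reusing VC-1's transform-size decisions to steer H.264's motion-partition choice, can be sketched as a simple mapping. The rule below is a hypothetical illustration of such a heuristic, not the paper's actual decision logic:

```python
def h264_partition_from_vc1_transforms(transform_types):
    """Choose an H.264 macroblock partition from the transform sizes VC-1
    picked for the four 8x8 blocks of a macroblock (hypothetical heuristic).

    Intuition: VC-1 selects smaller transforms where the residual has more
    detail, which is also where H.264 benefits from finer motion partitions.
    """
    if all(t == '8x8' for t in transform_types):
        return '16x16'   # smooth residual everywhere: one large partition
    if all(t in ('8x8', '8x4') for t in transform_types):
        return '16x8'    # detail concentrated along horizontal bands
    if all(t in ('8x8', '4x8') for t in transform_types):
        return '8x16'    # detail concentrated along vertical bands
    return '8x8'         # mixed or fine detail: per-block partitions
```

In a full transcoder such a decision would seed, not replace, the H.264 mode search: it narrows the partition modes that must be evaluated, which is where much of the reported complexity saving over a cascaded decode-then-encode design comes from.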
Kuthirummal S., Sarnoff Corporation |
Nagahara H., Osaka University |
Zhou C., Columbia University |
Nayar S.K., Columbia University
IEEE Transactions on Pattern Analysis and Machine Intelligence | Year: 2011
The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which reduces the DOF. Also, today's cameras have DOFs that correspond to a single slab perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured, both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate four applications of flexible DOF. First, we describe extended DOF, where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur. Deconvolving a captured image with a single blur kernel gives an image with extended DOF and high SNR. Next, we show the capture of images with discontinuous DOFs: for instance, near and far objects can be imaged sharply while objects in between are severely blurred. Third, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. Finally, we demonstrate how our camera can be used to realize nonplanar DOFs. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics. © 2006 IEEE.
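The extended-DOF step, deconvolving the captured image with a single depth-independent blur kernel, can be sketched with a standard Wiener filter in the frequency domain. This is a generic illustration under an assumed circular-convolution blur model, not the authors' pipeline; the `snr` parameter and the PSF padding convention are our own choices.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=1e3):
    """Wiener deconvolution with a single, depth-invariant blur kernel (PSF)."""
    # Embed the PSF in an image-sized array, centered at the origin for the FFT.
    psf_pad = np.zeros(blurred.shape, dtype=float)
    h, w = psf.shape
    psf_pad[:h, :w] = psf
    psf_pad = np.roll(psf_pad, (-(h // 2), -(w // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)          # blur transfer function
    G = np.fft.fft2(blurred)          # observed spectrum
    # Wiener filter: H* G / (|H|^2 + 1/SNR); the 1/SNR term suppresses
    # noise amplification at frequencies the blur nearly destroys (|H| ~ 0).
    F = np.conj(H) * G / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(F))
```

The reason a single kernel suffices here is the paper's central observation: sweeping the detector through the focus range makes the effective blur nearly the same at every scene depth, so one global deconvolution restores the whole depth range at once.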