
Santa Clara, CA, United States

Chatterjee P., Pelican Imaging | Milanfar P., University of California at Santa Cruz
IEEE Transactions on Image Processing | Year: 2012

In this paper, we propose a denoising method motivated by our previous analysis of the performance bounds for image denoising. Insights from that study are used here to derive a high-performance practical denoising algorithm. We propose a patch-based Wiener filter that exploits patch redundancy for image denoising. Our framework uses both geometrically and photometrically similar patches to estimate the different filter parameters. We describe how these parameters can be accurately estimated directly from the input noisy image. Our denoising approach, designed for near-optimal performance (in the mean-squared error sense), has a sound statistical foundation that is analyzed in detail. The performance of our approach is experimentally verified on a variety of images and noise levels. The results presented here demonstrate that our proposed method is on par with or exceeds the current state of the art, both visually and quantitatively. © 2011 IEEE.
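The core of a patch-based Wiener filter can be sketched as follows. This is a deliberately simplified illustration, not the authors' algorithm: the paper estimates the filter parameters from clusters of geometrically and photometrically similar patches, whereas this sketch shrinks each patch independently toward its own mean using a Wiener-style gain.

```python
import numpy as np

def wiener_denoise_patch(noisy_patch, noise_var):
    """Shrink a noisy patch toward its mean with a Wiener-style gain.

    Simplified sketch: each patch is treated independently; the paper
    instead pools geometrically and photometrically similar patches
    to estimate the filter parameters.
    """
    mean = noisy_patch.mean()
    # The empirical variance of the noisy patch minus the noise variance
    # approximates the clean-signal variance (clipped at zero).
    signal_var = max(noisy_patch.var() - noise_var, 0.0)
    gain = signal_var / (signal_var + noise_var)  # Wiener shrinkage factor
    return mean + gain * (noisy_patch - mean)
```

On a flat region (signal variance near zero) the gain collapses toward zero and the patch is replaced by its mean; on a textured region the gain approaches one and the patch passes through largely unchanged, which is the basic trade-off any Wiener-type denoiser makes in the mean-squared-error sense.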

Systems and methods for detecting defective camera arrays, optic arrays and/or sensors are described. One embodiment includes capturing image data using a camera array; dividing the captured images into a plurality of corresponding image regions; identifying the presence of localized defects in any of the cameras by evaluating the image regions in the captured images; and detecting a defective camera array using the image processing system when the number of localized defects in a specific set of image regions exceeds a predetermined threshold, where the specific set of image regions is formed by: a common corresponding image region from at least a subset of the captured images; and any additional image region in a given image that contains at least one pixel located within a predetermined maximum parallax shift distance along an epipolar line from a pixel within said common corresponding image region within the given image.
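The thresholding logic described above can be sketched in a few lines. The data layout here is hypothetical: `defect_maps` maps each camera index to the set of its defective region indices, and each entry of `region_sets` lists the (camera, region) pairs that form one "specific set" of corresponding regions within the maximum parallax shift along the epipolar line.

```python
def camera_array_defective(defect_maps, region_sets, threshold):
    """Flag a camera array as defective when the number of localized
    defects within any specific set of corresponding image regions
    exceeds a predetermined threshold.

    defect_maps: dict mapping camera index -> set of defective region
        indices for that camera (hypothetical representation).
    region_sets: list of region sets, each a list of (camera, region)
        pairs within the maximum parallax shift along the epipolar line.
    """
    for region_set in region_sets:
        # Count localized defects that fall inside this set of regions.
        count = sum(1 for cam, region in region_set
                    if region in defect_maps.get(cam, set()))
        if count > threshold:
            return True
    return False
```

The key idea the claim captures is that defects in *corresponding* regions across cameras are worse than the same number of defects scattered independently, since corresponding regions all contribute to the same portion of the synthesized image.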

A variety of optical arrangements, and methods of modifying or enhancing their optical characteristics and functionality, are provided. The optical arrangements are specifically designed to operate with camera arrays that incorporate an imaging device formed of a plurality of imagers, each of which includes a plurality of pixels. The plurality of imagers includes a first imager having first imaging characteristics and a second imager having second imaging characteristics. The images generated by the plurality of imagers are processed to obtain an image enhanced relative to the images captured by the individual imagers. In many optical arrangements, the MTF characteristics of the optics allow for contrast at spatial frequencies that are at least as great as the desired resolution of the high resolution images synthesized by the array camera, and significantly greater than the Nyquist frequency of the pixel pitch of the pixels on the focal plane, which in some cases may be 1.5, 2 or 3 times the Nyquist frequency.

Pelican Imaging | Date: 2014-03-06

Embodiments of systems and methods for providing an array projector are disclosed. The array projector includes an array of projection components and an image processing system. Each of the projection components projects a lower resolution image onto a common surface area, and the overlapping lower resolution images combine to form a higher resolution image. The image processing system provides lower resolution image data to each of the projection components in the array. The lower resolution image data is generated using the image processing system by applying super-resolution algorithms to lower resolution image data received by the image processing system.
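A toy model of how sub-pixel-shifted low resolution projections overlap on a common surface is sketched below. The function and its parameters are illustrative assumptions only; in the actual system the per-projector images are derived by super-resolution algorithms, which are not reproduced here.

```python
import numpy as np

def combine_subpixel_projections(lowres_images, shifts, scale):
    """Accumulate sub-pixel-shifted low resolution projections onto a
    common higher resolution surface (toy model; the real array
    projector computes the per-projector images with super-resolution
    processing).

    shifts: per-image (dy, dx) offsets in high resolution pixels,
        each assumed to lie in [0, scale).
    """
    h, w = lowres_images[0].shape
    canvas = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(canvas)
    for img, (dy, dx) in zip(lowres_images, shifts):
        for y in range(h):
            for x in range(w):
                cy, cx = y * scale + dy, x * scale + dx
                canvas[cy, cx] += img[y, x]
                weight[cy, cx] += 1.0
    # Average where projections overlap; untouched cells stay zero.
    return canvas / np.maximum(weight, 1.0)
```

With enough distinct shifts, every high resolution cell receives a contribution, which is why an array of low resolution projectors can fill in a higher resolution image on the shared surface.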

Systems and methods are described for generating restricted depth of field depth maps. In one embodiment, an image processing pipeline application configures a processor to: determine a desired focal plane distance and a range of distances corresponding to a restricted depth of field for an image rendered from a reference viewpoint; generate a restricted depth of field depth map from the reference viewpoint using the set of images captured from different viewpoints, where depth estimation precision is higher for pixels with depth estimates within the range of distances corresponding to the restricted depth of field and lower for pixels with depth estimates outside of the range of distances corresponding to the restricted depth of field; and render a restricted depth of field image from the reference viewpoint using the set of images captured from different viewpoints and the restricted depth of field depth map.
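One way to realize the varying depth estimation precision described above is to sample candidate depths more densely inside the restricted depth of field than outside it. The function and its parameterization below are hypothetical, intended only to illustrate that idea.

```python
def depth_hypotheses(dof_range, near, far, fine_step, coarse_step):
    """Sample candidate depths densely inside the restricted depth of
    field and coarsely outside it (hypothetical parameterization).

    dof_range: (lo, hi) distances spanning the restricted depth of field.
    near, far: overall depth search range.
    fine_step, coarse_step: sampling intervals inside/outside dof_range.
    """
    lo, hi = dof_range
    depths, d = [], near
    while d <= far:
        depths.append(d)
        # Fine sampling yields higher depth estimation precision
        # inside the restricted depth of field.
        d += fine_step if lo <= d <= hi else coarse_step
    return depths
```

Fewer depth hypotheses outside the in-focus range means less computation spent on pixels that will be rendered blurred anyway, which is the efficiency the restricted depth of field depth map buys.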
