
Systems and methods for detecting defective camera arrays, optic arrays and/or sensors are described. One embodiment includes capturing image data using a camera array; dividing the captured images into a plurality of corresponding image regions; identifying the presence of localized defects in any of the cameras by evaluating the image regions in the captured images; and detecting a defective camera array using the image processing system when the number of localized defects in a specific set of image regions exceeds a predetermined threshold, where the specific set of image regions is formed by: a common corresponding image region from at least a subset of the captured images; and any additional image region in a given image that contains at least one pixel located within a predetermined maximum parallax shift distance along an epipolar line from a pixel within said common corresponding image region within the given image.
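The thresholding step described above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the region indices, defect list, and threshold value are all assumed, and the set of regions within the maximum parallax shift along the epipolar line is taken as given rather than computed.

```python
# Hypothetical sketch of the defect-thresholding step. Region indices,
# the defect list, and the threshold are illustrative assumptions.

def is_array_defective(defect_regions, region_set, threshold):
    """Count localized defects that fall within the specified set of
    image regions across all captured images, and report the array
    as defective when the count exceeds a predetermined threshold."""
    count = sum(1 for (camera, region) in defect_regions
                if region in region_set)
    return count > threshold

# Defects detected per camera as (camera_index, region_index) pairs.
defects = [(0, 5), (1, 5), (2, 6), (3, 5)]
# The common corresponding region (5) plus a neighboring region (6)
# assumed to lie within the maximum parallax shift on the epipolar line.
region_set = {5, 6}
print(is_array_defective(defects, region_set, threshold=3))  # True: 4 > 3
```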


A variety of optical arrangements, and methods of modifying or enhancing their optical characteristics and functionality, are provided. The optical arrangements are specifically designed to operate with camera arrays that incorporate an imaging device formed of a plurality of imagers, each including a plurality of pixels. The plurality of imagers includes a first imager having first imaging characteristics and a second imager having second imaging characteristics. The images generated by the plurality of imagers are processed to obtain an image enhanced relative to the images captured by the individual imagers. In many optical arrangements, the MTF characteristics of the optics provide contrast at spatial frequencies that are at least as great as the desired resolution of the high resolution images synthesized by the array camera, and significantly greater than the Nyquist frequency of the pixel pitch of the pixels on the focal plane; in some cases the supported spatial frequency may be 1.5, 2 or 3 times the Nyquist frequency.
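The Nyquist frequency referred to above follows directly from the pixel pitch. A short arithmetic sketch, using an assumed 1.75 µm pitch (the abstract does not specify one):

```python
# Illustrative arithmetic only; the 1.75 micron pixel pitch is an
# assumed value, not taken from the patent.
pixel_pitch_mm = 0.00175  # 1.75 microns, expressed in mm

# Nyquist frequency of the sampling grid: 1 / (2 * pitch), in
# line pairs per millimetre.
nyquist_lp_per_mm = 1 / (2 * pixel_pitch_mm)
print(f"Nyquist: {nyquist_lp_per_mm:.1f} lp/mm")

# Spatial frequencies at which the optics would still need contrast,
# per the 1.5x / 2x / 3x figures in the abstract.
for factor in (1.5, 2, 3):
    print(f"{factor}x Nyquist: {factor * nyquist_lp_per_mm:.1f} lp/mm")
```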


Patent
Pelican Imaging | Date: 2014-03-06

Embodiments of systems and methods for providing an array projector are disclosed. The array projector includes an array of projection components and an image processing system. Each of the projection components projects a lower resolution image onto a common surface area, and the overlapping lower resolution images combine to form a higher resolution image. The image processor provides lower resolution image data to each of the projection components in the array. The lower resolution image data is generated by the image processor by applying super resolution algorithms to lower resolution image data received by the image processor.
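One way to see how overlapping lower resolution images can compose a higher resolution one is the reverse decomposition: each projection component receives every n-th sample of the target image at a distinct subpixel offset, so the overlapped projections tile the full-resolution grid. This is a minimal sketch under that assumption (a 2x2 projector array), not the patent's super resolution algorithm:

```python
import numpy as np

# Minimal sketch, assuming a 2x2 array of projection components.
# Each component gets every n-th sample at a distinct subpixel offset;
# overlapping the projected tiles recovers the high-resolution grid.

def decompose(high_res, n=2):
    """Split a high-res image into n*n subpixel-shifted low-res tiles."""
    return [high_res[dy::n, dx::n] for dy in range(n) for dx in range(n)]

hr = np.arange(16).reshape(4, 4)
tiles = decompose(hr)
print(len(tiles), tiles[0].shape)  # 4 projectors, each with a 2x2 tile

# Recombining the tiles at their offsets reproduces the original image,
# illustrating how the overlap forms the higher resolution result.
recon = np.empty_like(hr)
for tile, (dy, dx) in zip(tiles, [(0, 0), (0, 1), (1, 0), (1, 1)]):
    recon[dy::2, dx::2] = tile
print(np.array_equal(recon, hr))  # True
```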


Systems and methods are described for generating restricted depth of field depth maps. In one embodiment, an image processing pipeline application configures a processor to: determine a desired focal plane distance and a range of distances corresponding to a restricted depth of field for an image rendered from a reference viewpoint; generate a restricted depth of field depth map from the reference viewpoint using the set of images captured from different viewpoints, where depth estimation precision is higher for pixels with depth estimates within the range of distances corresponding to the restricted depth of field and lower for pixels with depth estimates outside of the range of distances corresponding to the restricted depth of field; and render a restricted depth of field image from the reference viewpoint using the set of images captured from different viewpoints and the restricted depth of field depth map.
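The variable depth estimation precision described above can be sketched as a depth-candidate schedule: hypotheses are densely spaced inside the restricted depth of field and coarsely spaced outside it. The distances and step counts below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Hypothetical sketch of variable-precision depth sampling: depth
# candidates are fine within the restricted depth of field and coarse
# outside it. All distances and counts are illustrative.

def depth_candidates(near, far, focal_near, focal_far,
                     coarse_steps=4, fine_steps=16):
    """Return depth hypotheses, dense within [focal_near, focal_far]."""
    coarse_front = np.linspace(near, focal_near, coarse_steps,
                               endpoint=False)
    fine = np.linspace(focal_near, focal_far, fine_steps, endpoint=False)
    coarse_back = np.linspace(focal_far, far, coarse_steps)
    return np.concatenate([coarse_front, fine, coarse_back])

# Restricted depth of field from 2 m to 3 m within a 0.5 m to 10 m scene.
d = depth_candidates(0.5, 10.0, 2.0, 3.0)
print(len(d))  # 24 candidates, 16 of them in the 2-3 m focal range
```

Matching cost would then be evaluated only at these candidates, so pixels inside the restricted range get finer depth estimates while the rest of the scene is searched cheaply.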


Patent
Pelican Imaging | Date: 2014-03-17

Systems and methods for stereo imaging with camera arrays in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating depth information for an object using two or more array cameras that each include a plurality of imagers includes obtaining a first set of image data captured from a first set of viewpoints, identifying an object in the first set of image data, determining a first depth measurement, determining whether the first depth measurement is above a threshold, and when the depth is above the threshold: obtaining a second set of image data of the same scene from a second set of viewpoints located known distances from one viewpoint in the first set of viewpoints, identifying the object in the second set of image data, and determining a second depth measurement using the first set of image data and the second set of image data.
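The two-stage measurement can be sketched with the standard pinhole stereo relation Z = f * B / d: a short baseline within one array gives a coarse first estimate, and if the object lies beyond a threshold, the wider known baseline between the two array cameras yields a larger disparity and a more precise second estimate. All numeric values below are made-up assumptions:

```python
# Hedged sketch of the two-stage depth measurement. Focal length,
# baselines, disparities, and the threshold are illustrative values.

def triangulate(focal_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

FAR_THRESHOLD_M = 5.0  # assumed threshold beyond which the short
                       # intra-array baseline is too imprecise

# First measurement: short baseline between imagers in one array.
depth1 = triangulate(focal_px=1000.0, baseline_m=0.01, disparity_px=1.5)

if depth1 > FAR_THRESHOLD_M:
    # Second measurement: wider known baseline between the two arrays
    # produces a larger disparity for the same object, improving
    # precision for distant objects.
    depth2 = triangulate(focal_px=1000.0, baseline_m=0.10,
                         disparity_px=14.8)
    print(f"refined depth: {depth2:.2f} m")
```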
