Santa Clara, CA, United States

Patent
Pelican Imaging | Date: 2016-10-06

Systems and methods for extended color processing on Pelican array cameras in accordance with embodiments of the invention are disclosed. In one embodiment, a method of generating a high resolution image includes: obtaining input images, where a first set of images includes information in a first band of visible wavelengths and a second set of images includes information in a second band of visible wavelengths and non-visible wavelengths; determining an initial estimate by combining the first set of images into a first fused image, combining the second set of images into a second fused image, spatially registering the fused images, denoising the fused images using bilateral filters, normalizing the second fused image in the photometric reference space of the first fused image, and combining the fused images; and determining a high resolution image that, when mapped through a forward imaging transformation, matches the input images within at least one predetermined criterion.
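
As a minimal sketch of the fusion steps described above (fuse each set, denoise with bilateral filters, photometrically normalize, and combine into an initial estimate), the following Python uses NumPy and OpenCV and assumes pre-registered single-channel float32 images in [0, 1]; the function names and parameter values are illustrative, not the patent's implementation.

```python
import numpy as np
import cv2

def fuse(images):
    """Stand-in fusion step: average a set of registered images."""
    return np.mean(np.stack(images, axis=0), axis=0).astype(np.float32)

def photometric_normalize(target, reference):
    """Map target into the photometric reference space of reference using a
    simple gain/offset (mean and standard deviation matching) model."""
    gain = reference.std() / (target.std() + 1e-8)
    return gain * (target - target.mean()) + reference.mean()

def initial_estimate(visible_set, extended_set):
    # Fuse each set of (already spatially registered) images, then denoise
    # each fused image with a bilateral filter.
    fused_vis = cv2.bilateralFilter(fuse(visible_set), 5, 0.1, 5)
    fused_ext = cv2.bilateralFilter(fuse(extended_set), 5, 0.1, 5)
    # Normalize the extended-color fusion to the visible fusion, then combine.
    fused_ext = photometric_normalize(fused_ext, fused_vis)
    return 0.5 * (fused_vis + fused_ext)
```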


A camera array, an imaging device and/or a method for capturing images that employ a plurality of imagers fabricated on a substrate is provided. Each imager includes a plurality of pixels. The plurality of imagers include a first imager having first imaging characteristics and a second imager having second imaging characteristics. The images generated by the plurality of imagers are processed to obtain an image that is enhanced compared to the images captured by the individual imagers. Each imager may be associated with an optical element fabricated using wafer level optics (WLO) technology.
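
Purely as an illustration of this architecture, a heterogeneous imager array could be modeled in software as below; the class and field names are assumptions rather than the patent's terminology.

```python
from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class Imager:
    rows: int            # pixel rows in this imager
    cols: int            # pixel columns in this imager
    spectral_band: str   # e.g. "blue", "green", "red", "near-IR"
    exposure_us: float   # one example of a per-imager imaging characteristic

@dataclass
class CameraArray:
    imagers: List[Imager]   # all imagers fabricated on a single substrate

    def capture(self) -> List[np.ndarray]:
        """Placeholder capture: each imager contributes one low-resolution frame,
        which downstream processing combines into an enhanced image."""
        return [np.zeros((im.rows, im.cols), dtype=np.uint16) for im in self.imagers]
```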


Systems in accordance with embodiments of the invention can perform parallax detection and correction in images captured using array cameras. Due to the different viewpoints of the cameras, parallax results in variations in the position of objects within the captured images of the scene. Methods in accordance with embodiments of the invention provide an accurate account of the pixel disparity due to parallax between the different cameras in the array, so that appropriate scene-dependent geometric shifts can be applied to the pixels of the captured images when performing super-resolution processing. In a number of embodiments, generating depth estimates considers the similarity of pixels in multiple spectral channels. In certain embodiments, generating depth estimates involves generating a confidence map indicating the reliability of depth estimates.
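
A minimal sketch of disparity search with a per-pixel confidence value is given below, using one rectified reference/alternate pair and a single spectral channel for brevity (the abstract above considers similarity across multiple spectral channels); it is not the patented method.

```python
import numpy as np
from scipy.signal import convolve2d

def disparity_and_confidence(ref, alt, max_disp=16, patch=3):
    """Return a per-pixel disparity estimate (in pixels) and a confidence map."""
    kernel = np.ones((patch, patch), dtype=np.float32) / patch**2
    costs = np.empty((max_disp + 1,) + ref.shape, dtype=np.float32)
    for d in range(max_disp + 1):
        # Candidate disparity d = horizontal shift of the alternate view
        # (wrap-around from np.roll is ignored in this sketch).
        shifted = np.roll(alt, d, axis=1)
        diff = (ref.astype(np.float32) - shifted) ** 2
        costs[d] = convolve2d(diff, kernel, mode="same")  # window-aggregated cost
    disparity = costs.argmin(axis=0).astype(np.float32)
    two_best = np.sort(costs, axis=0)[:2]
    # Confidence: margin by which the best cost beats the runner-up.
    confidence = (two_best[1] - two_best[0]) / (two_best[1] + 1e-8)
    return disparity, confidence
```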


Systems and methods in accordance with embodiments of the invention actively align a lens stack array with an array of focal planes to construct an array camera module. In one embodiment, a method for actively aligning a lens stack array with a sensor that has a focal plane array includes: aligning the lens stack array relative to the sensor in an initial position; varying the spatial relationship between the lens stack array and the sensor; capturing images of a known target that has a region of interest using a plurality of active focal planes at different spatial relationships; scoring the images based on the extent to which the region of interest is focused in the images; selecting a spatial relationship between the lens stack array and the sensor based on a comparison of the scores; and forming an array camera subassembly based on the selected spatial relationship.
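
The scoring-and-selection loop might look like the sketch below, where capture_at is a hypothetical stand-in for the alignment station's capture routine and focus is scored as the variance of a discrete Laplacian inside the region of interest; this is an illustration, not the patented procedure.

```python
import numpy as np

def focus_score(image, roi):
    """Score focus as the variance of a discrete Laplacian inside the ROI."""
    y0, y1, x0, x1 = roi
    patch = image[y0:y1, x0:x1].astype(np.float32)
    lap = (-4 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def select_alignment(capture_at, candidate_positions, roi):
    """Return the candidate spatial relationship with the best mean focus
    score across all active focal planes."""
    best_pos, best_score = None, -np.inf
    for pos in candidate_positions:
        frames = capture_at(pos)   # one frame per active focal plane (hypothetical call)
        score = np.mean([focus_score(f, roi) for f in frames])
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos
```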


Systems and methods in accordance with embodiments of the invention implement modular array cameras using sub-array modules. In one embodiment, an XY sub-array module includes: an XY arrangement of focal planes, where X and Y are each greater than or equal to 1; an XY arrangement of lens stacks, the XY arrangement of lens stacks being disposed relative to the XY arrangement of focal planes so as to form an XY arrangement of cameras, where each lens stack has a field of view that is shifted with respect to the fields of view of the other lens stacks so that each shift includes a sub-pixel shifted view of the scene; and image data output circuitry that is configured to output image data from the XY sub-array module that can be aggregated with image data from other sub-array modules so that an image of the scene can be constructed.
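
One way to picture the aggregation of image data from several sub-array modules is the sketch below; the class and method names are illustrative assumptions, not the patent's interface.

```python
class SubArrayModule:
    def __init__(self, frames, shifts):
        self.frames = frames   # list of per-camera images from this module
        self.shifts = shifts   # per-camera sub-pixel field-of-view shifts (dy, dx)

    def output(self):
        """Image data output: (frame, shift) pairs for each camera in the module."""
        return list(zip(self.frames, self.shifts))

def aggregate(modules):
    """Pool the views from all sub-array modules; the sub-pixel shifts between
    views are what make joint reconstruction of a higher-resolution image possible."""
    views = []
    for module in modules:
        views.extend(module.output())
    return views
```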


Systems and methods for implementing array cameras configured to perform super-resolution processing to generate higher resolution super-resolved images using a plurality of captured images and lens stack arrays that can be utilized in array cameras are disclosed. Lens stack arrays in accordance with many embodiments of the invention include lens elements formed on substrates separated by spacers, where the lens elements, substrates and spacers are configured to form a plurality of optical channels, at least one aperture located within each optical channel, at least one spectral filter located within each optical channel, where each spectral filter is configured to pass a specific spectral band of light, and light blocking materials located within the lens stack array to optically isolate the optical channels.


Imager arrays, array camera modules, and array cameras in accordance with embodiments of the invention utilize pixel apertures to control the amount of aliasing present in captured images of a scene. One embodiment includes a plurality of focal planes, control circuitry configured to control the capture of image information by the pixels within the focal planes, and sampling circuitry configured to convert pixel outputs into digital pixel data. In addition, the pixels in the plurality of focal planes include a pixel stack including a microlens and an active area, where light incident on the surface of the microlens is focused onto the active area by the microlens and the active area samples the incident light to capture image information, and the pixel stack defines a pixel area and includes a pixel aperture, where the size of the pixel aperture is smaller than the pixel area.
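
A back-of-the-envelope illustration of why a pixel aperture smaller than the pixel pitch increases aliasing: to first order the aperture acts as a box filter whose MTF is |sinc(w·f)|, so shrinking the aperture width w preserves more contrast at and beyond the Nyquist frequency set by the pitch. The numbers below are assumed for illustration only.

```python
import numpy as np

pitch_um = 1.4                      # assumed pixel pitch (sampling interval)
nyquist = 1.0 / (2 * pitch_um)      # Nyquist frequency in cycles per micron
for aperture_um in (1.4, 0.9):      # full-size vs. reduced pixel aperture
    # np.sinc is the normalized sinc, sin(pi*x)/(pi*x), matching the box-filter MTF.
    mtf_at_nyquist = abs(np.sinc(aperture_um * nyquist))
    print(f"aperture {aperture_um} um -> MTF at Nyquist = {mtf_at_nyquist:.2f}")
```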


Embodiments of the invention provide a camera array imaging architecture that computes depth maps for objects within a scene captured by the cameras, using a near-field sub-array of cameras to compute depth to near-field objects and a far-field sub-array of cameras to compute depth to far-field objects. In particular, the baseline distance between cameras in the near-field sub-array is less than the baseline distance between cameras in the far-field sub-array in order to increase the accuracy of the depth map. Some embodiments provide a near-IR illumination source for use in computing depth maps.
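
The baseline trade-off can be seen from the standard stereo relations z = f·B/d and Δz ≈ z²·Δd/(f·B): a longer baseline reduces depth error for distant objects, while a shorter baseline keeps disparities manageable for near objects. The sketch below uses assumed, illustrative numbers rather than Pelican's camera parameters.

```python
f_px = 1500.0   # assumed focal length in pixels
e_d = 0.25      # assumed disparity estimation error in pixels

for name, baseline_mm, depth_mm in [("near-field sub-array", 5.0, 300.0),
                                    ("far-field sub-array", 20.0, 3000.0)]:
    disparity_px = f_px * baseline_mm / depth_mm          # d = f*B/z
    depth_err_mm = depth_mm**2 * e_d / (f_px * baseline_mm)  # dz ~ z^2 * e_d / (f*B)
    print(f"{name}: disparity {disparity_px:.1f} px, "
          f"depth error ~{depth_err_mm:.1f} mm at {depth_mm / 1000:.1f} m")
```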

