
An object of the present invention is to provide an image processing device and the like that can generate a composite image in a desired focusing condition. In a smartphone 1, an edge detecting section 107 detects an edge as a feature from a plurality of input images taken with different focusing distances, and detects the intensity of the edge as a feature value. A depth estimating section 111 then estimates the depth of a target pixel, which is information representing which of the plurality of input images is in focus at the target pixel, by using the edge intensity detected by the edge detecting section 107. A depth map generating section 113 then generates a depth map based on the estimation results of the depth estimating section 111.
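The depth-from-focus idea in this abstract can be sketched in a few lines: measure edge intensity in each image of the focus stack, then, per pixel, pick the image with the strongest edge response. This is an illustrative sketch only; NumPy gradient magnitude stands in for whatever edge detector the patent uses, and the function names are made up.

```python
import numpy as np

def edge_intensity(img):
    # Gradient magnitude as a simple stand-in for an edge detector.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def estimate_depth_map(images):
    # Per pixel, the index of the input image with the strongest edge
    # response, i.e. the image assumed to be in focus at that pixel.
    strengths = np.stack([edge_intensity(im) for im in images])
    return np.argmax(strengths, axis=0)
```

The returned index map is the "depth" in the abstract's sense: which image of the stack is sharpest at each pixel, from which the composite can be assembled.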


An image processing device is provided. The image processing device includes: a tracking area setting section that sets a tracking area in a frame image of an original motion picture; a color information obtaining section that obtains color information of the tracking area set by the tracking area setting section; a tracking position estimating section that estimates a tracking position with reference to the tracking area with respect to each frame image of the original motion picture by using the color information obtained by the color information obtaining section; an output frame setting section that sets an output frame enclosing the tracking area set by the tracking area setting section and that updates the output frame based on the estimated tracking position; and a lock-on motion picture generating section that generates a motion picture that is based on the output frame.
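A minimal sketch of color-information tracking: build a color histogram of the tracking area, then score candidate windows in each frame by histogram intersection and take the best match as the estimated tracking position. Exhaustive search stands in for whatever estimator the patent actually uses, and both function names are hypothetical.

```python
import numpy as np

def color_hist(patch, bins=8):
    # Normalized intensity histogram as a toy "color information" model.
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def track(frame, ref_hist, win=5):
    # Slide a win x win window over the frame and return the top-left
    # position whose histogram best matches ref_hist (histogram
    # intersection as the similarity score).
    best, best_pos = -1.0, (0, 0)
    H, W = frame.shape
    for y in range(H - win + 1):
        for x in range(W - win + 1):
            sim = np.minimum(color_hist(frame[y:y+win, x:x+win]), ref_hist).sum()
            if sim > best:
                best, best_pos = sim, (y, x)
    return best_pos
```

The output frame of the abstract would then be a crop centered on the returned position, updated frame by frame to produce the lock-on motion picture.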


Patent
Morpho Inc. | Date: 2016-05-18

It is an object to generate a desired composite image in which a motion area of a subject is correctly composited. A difference image between a base target and an aligned swap target is generated (S1021), and an extracted contour in the difference image is determined according to the active contour model (S1022). The inner area of the contour and the outer area of the contour are painted with different colors to be color-coded so as to generate a mask image for alpha blending (S1023). Using the mask image thus generated, the swap target that is aligned with respect to the base target is composited with the base target of the base image by alpha blending (S1024).
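The mask-and-blend steps S1021 to S1024 can be sketched as follows. Note the substitution: the patent determines the contour with an active contour model, while this sketch uses plain thresholding of the difference image to produce the two color-coded regions; the function names are illustrative.

```python
import numpy as np

def make_mask(base, swap, thresh=10):
    # Binary mask from the absolute difference image (S1021/S1023).
    # Thresholding stands in for the active-contour step of S1022.
    return (np.abs(base.astype(int) - swap.astype(int)) > thresh).astype(float)

def alpha_blend(base, swap, mask):
    # S1024: inside the mask take the swap target, outside keep the base.
    return mask * swap + (1.0 - mask) * base
```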


An image processing device sets, in a frame image generated by an imaging device, a region of a size smaller than the frame image, and corrects a position or a shape of the region according to motion of the imaging device to generate an output frame image. The device comprises an input unit, a motion acquisition unit, a matrix operation unit, and a drawing unit. The input unit sequentially receives a first frame image and a second frame image. The motion acquisition unit acquires motion data between the first frame image and the second frame image. The matrix operation unit calculates a projection matrix to project the output frame image onto the second frame image, from a first matrix including a rolling shutter distortion component, a second matrix including at least one of a parallel translation component in directions perpendicular to an image-shooting direction and a rotation component with respect to the image-shooting direction, and an auxiliary matrix including a motion component not included in the first matrix and the second matrix. The drawing unit generates the output frame image from the second frame image by using the projection matrix. The matrix operation unit comprises a first calculation unit, a second calculation unit, and an auxiliary matrix calculation unit. The first calculation unit calculates the first matrix for the projection matrix by using the motion data. The second calculation unit calculates the second matrix for the projection matrix by using the motion data, the first matrix, and the past value of the second matrix. The auxiliary matrix calculation unit calculates the auxiliary matrix for the projection matrix by using the motion data, the first matrix, and the past value of the auxiliary matrix.
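A toy decomposition along these lines, with the specific matrix forms and all names assumed for illustration: the projection is composed from a rolling-shutter shear, a low-pass-filtered rigid motion (computed from its past value, as the abstract describes for the second matrix), and an identity auxiliary matrix.

```python
import numpy as np

def rigid(theta, tx, ty):
    # Rotation about the image-shooting direction plus in-plane translation.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

def rs_shear(k):
    # Rolling-shutter distortion crudely approximated as a shear.
    return np.array([[1.0, k, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

def low_pass(current, past, alpha=0.8):
    # Blend the current estimate with its past value; stabilization keeps
    # the smoothed motion and discards the jitter.
    return alpha * past + (1.0 - alpha) * current

# Projection = distortion matrix @ smoothed rigid motion @ auxiliary matrix.
second = low_pass(rigid(0.0, 4.0, 0.0), np.eye(3))
projection = rs_shear(0.01) @ second @ np.eye(3)
```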


A method and apparatus for partially up/downscaling an image encoded on a macroblock basis. The method and apparatus performs operations of: storing the encoded image; creating map data from a bitstream of the encoded image to decode at least one macroblock of the encoded image; creating a shrunken image of a predetermined size based on the resolution of a display device; storing the map data and the shrunken image in association with the encoded image; outputting the shrunken image associated with the encoded image to be displayed, based on a control request received from an input device; determining at least one macroblock to be decoded based on a display area of the shrunken image; partially decoding the encoded image for the determined macroblock using the map data; and outputting, to the display device, the image data of the display area of the partially decoded image.
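The core of partial decoding is deciding which macroblocks a display area touches; with 16 x 16 macroblocks (the usual size in H.264-style codecs) that is simple integer arithmetic. The map data would then supply a bitstream offset for each returned index. The function name and signature are made up for illustration.

```python
def macroblocks_in_area(x, y, w, h, mb_size=16):
    # Indices (col, row) of every macroblock overlapping the display
    # area; only these need decoding via the bitstream offset map.
    x0, y0 = x // mb_size, y // mb_size
    x1, y1 = (x + w - 1) // mb_size, (y + h - 1) // mb_size
    return [(cx, cy) for cy in range(y0, y1 + 1) for cx in range(x0, x1 + 1)]
```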


Thumbnails included in an area designated by a pinch gesture are all specified. If it is not true that the thumbnails included in the designated area are all displayed in the selected style (S2004, No), the thumbnails included in the designated area are all displayed in the selected style (S2005), and the status is set to selected state (S2006). If it is true that the thumbnails included in the designated area are all displayed in the selected style (S2004, Yes), the thumbnails included in the designated area are all displayed in the unselected style (S2007), and the status is set to unselected state (S2008). If the gesture is continuously changed to a pinch-out gesture or a pinch-in gesture (S2010, No), the selected area or the unselected area is expanded or reduced depending on the status (S2014, S2015).
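The toggle logic of steps S2004 through S2008 amounts to: if any thumbnail in the designated area is unselected, select them all; if all are already selected, unselect them all. A sketch with sets of thumbnail IDs (the function name is hypothetical):

```python
def toggle_area(selected, area):
    # S2004: are all thumbnails in the designated area already selected?
    area = set(area)
    if area <= selected:
        # S2007/S2008: display the area in the unselected style.
        return selected - area, "unselected"
    # S2005/S2006: display the whole area in the selected style.
    return selected | area, "selected"
```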


An image synthesis apparatus for generating a composite image by synthesizing a second image with a first image, includes an image acquisition unit configured to acquire first group images including the first image and a plurality of different resolution images corresponding to the first image, and second group images including the second image and a plurality of different resolution images corresponding to the second image, a frequency component calculation unit configured to calculate low-frequency components and high-frequency components from the first group images and the second group images, and an image synthesis unit configured to generate the composite image on the basis of the low-frequency components of the first group images and the high-frequency components of the second group images.
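The final composition step can be sketched as "low frequencies of the first image plus high frequencies of the second", with a box blur standing in for whatever multi-resolution decomposition the apparatus actually uses; both function names are assumptions.

```python
import numpy as np

def box_blur(img, k=3):
    # Crude low-pass filter: k x k box average with edge padding.
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def synthesize(first, second):
    # Composite = low-frequency part of the first image plus the
    # high-frequency part of the second (second minus its blur).
    return box_blur(first) + (second - box_blur(second))
```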


An image processing device comprising: a recording unit configured to record a previous frame image input before a target frame image that is to be a processing target or an output frame image from the previous frame image; an alignment unit configured to align the target frame image with the previous frame image or with the output frame image from the previous frame image; a correction unit configured to perform a temporal correction process to correct a pixel value of the target frame image by use of a pixel value of the previous frame image or a pixel value of the output frame image from the previous frame image that has been aligned by the alignment unit, with reference to the recording unit; and a generation unit configured to generate an output frame image from the target frame image by use of the target frame image corrected by the correction unit.
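Ignoring the alignment step, the temporal correction reduces to a recursive blend of each target frame with the previous output frame, i.e. a first-order temporal noise filter. A sketch under that simplification; the names and the blend weight are assumptions, not the patented design.

```python
import numpy as np

def temporal_correct(target, prev_output, weight=0.5):
    # Blend the (assumed already aligned) previous output frame into the
    # target frame; applied frame after frame this damps temporal noise.
    return weight * prev_output + (1.0 - weight) * target

def process_sequence(frames, weight=0.5):
    # Recursive application: each output becomes the "previous output
    # frame" recorded for the next target frame.
    out = frames[0].astype(float)
    outputs = [out]
    for f in frames[1:]:
        out = temporal_correct(f.astype(float), out, weight)
        outputs.append(out)
    return outputs
```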


An image composition device composites a no-flash image including a subject captured under ambient light and a flash image including the subject captured by emitting flash light to generate a composite image including the subject. The image composition device includes an input unit that enables the no-flash image and the flash image to be input, a correction unit that corrects the no-flash image and the flash image input from the input unit, and a composition unit that composites the no-flash image and the flash image output from the correction unit to generate a composite image. The correction unit corrects a color temperature of the no-flash image.
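A sketch of the pipeline: white-balance the no-flash image with per-channel gains (standing in for the color-temperature correction), then blend it with the flash image. The gains and blend weight are illustrative values only, and the function names are invented.

```python
import numpy as np

def correct_color_temperature(img, gains):
    # Per-channel gains as a crude color-temperature (white-balance)
    # correction for the ambient-light image.
    return np.clip(img * np.asarray(gains, dtype=float), 0.0, 255.0)

def compose(no_flash, flash, alpha=0.5):
    # Weighted blend of the corrected no-flash image with the flash image.
    return alpha * no_flash + (1.0 - alpha) * flash
```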


The present disclosure can obtain a high-quality composite image by determining a suitable boundary when input images are sequentially stitched together to generate the composite image. An image generating apparatus 1 generates a composite image by stitching sequentially input images together. A difference value calculating section 15 calculates difference values by using the pixel values of a reference image and the pixel values of a target image that partly overlaps the reference image. The difference values represent the relative relationship between the reference image and the target image. A boundary determining section 16 determines a boundary for stitching the reference image and the target image together by using the difference values calculated by the difference value calculating section 15. Then, an image compositing section 18 generates the composite image by stitching the reference image and the target image together based on the boundary determined by the boundary determining section 16.
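One standard way to turn a per-pixel difference image into a stitching boundary is a minimum-cost seam found by dynamic programming; stitching along pixels where the two images differ least hides the join. This sketch shows that idea, though the abstract does not say which method the boundary determining section actually uses.

```python
import numpy as np

def seam(diff):
    # Minimum-cost vertical seam through the difference image: a DP pass
    # accumulating costs downward, each step moving at most one column
    # sideways, then a backtrack from the cheapest bottom cell.
    H, W = diff.shape
    cost = diff.astype(float).copy()
    for y in range(1, H):
        for x in range(W):
            lo, hi = max(0, x - 1), min(W, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    path = [int(np.argmin(cost[-1]))]
    for y in range(H - 2, -1, -1):
        x = path[-1]
        lo, hi = max(0, x - 1), min(W, x + 2)
        path.append(lo + int(np.argmin(cost[y, lo:hi])))
    return path[::-1]
```

The returned list gives one boundary column per row; pixels left of the seam would come from the reference image and pixels right of it from the target image.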
