Morpho Inc.

Japan

An image processing device for stitching a plurality of input images together to generate a panoramic composite image is provided. An imaging control section sets a guide for photographing a subject from different imaging directions as guide information for obtaining a plurality of input images suitable for image composition. A guide display setting section displays a photographing guide image based on the guide information. The user rotates the camera while checking the guide image and captures the plurality of input images from different imaging directions. An image compositing section stitches the plurality of input images together to generate a composite image.
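As a rough illustration of the stitching step only, the sketch below composites a set of captures taken from different imaging directions with OpenCV's high-level stitcher; the guide display and capture control described above are UI concerns and are not shown, and the file paths are placeholders.

```python
import cv2

def stitch_guided_captures(paths):
    """Stitch images captured from different imaging directions into a panorama.

    `paths` are placeholder file names for the guided captures."""
    images = [cv2.imread(p) for p in paths]
    stitcher = cv2.Stitcher_create()           # default panorama mode
    status, panorama = stitcher.stitch(images)
    if status != 0:                            # 0 means the stitch succeeded
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama
```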


An object of the present disclosure is to generate a high-quality image by correcting a predetermined correction target image based on a plurality of input images. In an image processing device 3, an image correcting section 160 detects a user's tap gesture on a touch panel 250. When the position of the tap gesture is within a foreground candidate area detected by a foreground candidate area detecting section 140, the image correcting section 160 corrects a base image set by a base image setting section 120 in the area corresponding to that foreground candidate area.
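A minimal sketch of that correction logic, assuming the foreground candidate areas are available as boolean masks and that the correction simply replaces the tapped area with pixels from another aligned input image (a hypothetical policy; the abstract does not commit to this).

```python
import numpy as np

def correct_on_tap(base, reference, candidate_masks, tap_xy):
    """Correct the base image in a foreground candidate area hit by a tap.

    base, reference : HxWx3 arrays (aligned input images)
    candidate_masks : list of HxW boolean masks, one per candidate area
    tap_xy          : (x, y) position of the tap gesture on the touch panel
    """
    x, y = tap_xy
    out = base.copy()
    for mask in candidate_masks:
        if mask[y, x]:                     # tap falls inside this candidate area
            out[mask] = reference[mask]    # hypothetical correction: copy from reference
    return out
```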


An object of the present invention is to provide an image processing device and the like that can generate a composite image in a desired focusing condition. In a smartphone 1, an edge detecting section 107 detects an edge as a feature from a plurality of input images taken with different focusing distances, and detects the intensity of the edge as a feature value. A depth estimating section 111 then estimates the depth of a target pixel, which is information representing which of the plurality of input images is in focus at the target pixel, by using the edge intensity detected by the edge detecting section 107. A depth map generating section 113 then generates a depth map based on the estimation results by the depth estimating section 111.
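A single-pass sketch of that idea, assuming the Laplacian magnitude as the edge feature value and a plain per-pixel argmax over the focus stack as the depth estimate (the patent's estimator may be more elaborate).

```python
import cv2
import numpy as np

def estimate_depth_map(focus_stack):
    """Estimate, per pixel, which image of a focus stack is in focus.

    focus_stack : list of BGR images taken with different focusing distances.
    Returns an HxW array of indices into the stack (the depth map)."""
    edge_strengths = []
    for img in focus_stack:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        lap = cv2.Laplacian(gray, cv2.CV_64F)        # edge intensity as feature value
        edge_strengths.append(np.abs(lap))
    return np.argmax(np.stack(edge_strengths, axis=0), axis=0)
```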


An image processing device is provided. The image processing device includes: a tracking area setting section that sets a tracking area in a frame image of an original motion picture; a color information obtaining section that obtains color information of the tracking area set by the tracking area setting section; a tracking position estimating section that estimates a tracking position with reference to the tracking area with respect to each frame image of the original motion picture by using the color information obtained by the color information obtaining section; an output frame setting section that sets an output frame enclosing the tracking area set by the tracking area setting section and that updates the output frame based on the estimated tracking position; and a lock-on motion picture generating section that generates a motion picture that is based on the output frame.
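A compact sketch of such a pipeline, using a hue histogram of the tracking area and OpenCV mean-shift as a stand-in for the colour-based position estimator; the output frame here is simply the updated track window.

```python
import cv2

def lock_on_frames(frames, init_rect):
    """Yield crops of an output frame that follows a colour-tracked area.

    frames    : sequence of BGR frame images of the original motion picture
    init_rect : (x, y, w, h) tracking area set on the first frame
    """
    x, y, w, h = init_rect
    roi = cv2.cvtColor(frames[0][y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [180], [0, 180])   # colour information
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = (x, y, w, h)
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(back, window, term)        # estimated tracking position
        tx, ty, tw, th = window                              # updated output frame
        yield frame[ty:ty + th, tx:tx + tw]
```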


Patent | Morpho Inc. | Date: 2016-05-18

It is an object to generate a desired composite image in which a motion area of a subject is correctly composited. A difference image between a base target and an aligned swap target is generated (S1021), and a contour is extracted from the difference image according to the active contour model (S1022). The inner and outer areas of the contour are painted with different colors (color-coded) to generate a mask image for alpha blending (S1023). Using the mask image thus generated, the swap target aligned with respect to the base target is composited with the base target of the base image by alpha blending (S1024).
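A rough sketch of steps S1021-S1024, substituting a simple threshold-and-contour extraction for the active contour model and a blurred binary mask for the colour-coded mask image; the thresholds are arbitrary placeholders.

```python
import cv2
import numpy as np

def composite_swap(base, swap_aligned, diff_threshold=30):
    """Composite an aligned swap target onto a base image via a contour mask."""
    diff = cv2.absdiff(base, swap_aligned)                           # S1021
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, diff_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)          # S1022 (stand-in)
    mask = np.zeros(gray.shape, np.uint8)
    cv2.drawContours(mask, contours, -1, 255, thickness=cv2.FILLED)  # S1023
    alpha = cv2.GaussianBlur(mask, (15, 15), 0).astype(np.float32) / 255.0
    alpha = alpha[..., None]
    blended = alpha * swap_aligned + (1.0 - alpha) * base            # S1024
    return blended.astype(np.uint8)
```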


An image processing device sets, in a frame image generated by an imaging device, a region smaller than the frame image, and corrects the position or shape of the region according to motion of the imaging device to generate an output frame image. The device comprises an input unit, a motion acquisition unit, a matrix operation unit, and a drawing unit. The input unit implements sequential input of a first frame image and a second frame image. The motion acquisition unit acquires motion data between the first frame image and the second frame image. The matrix operation unit calculates a projection matrix that projects the output frame image to the second frame image, from a first matrix including a rolling shutter distortion component, a second matrix including at least one of a parallel translation component in directions perpendicular to the image-shooting direction and a rotation component with respect to the image-shooting direction, and an auxiliary matrix including a motion component not included in the first matrix or the second matrix. The drawing unit generates the output frame image from the second frame image by using the projection matrix. The matrix operation unit comprises a first calculation unit, a second calculation unit, and an auxiliary matrix calculation unit. The first calculation unit calculates the first matrix for the projection matrix by using the motion data. The second calculation unit calculates the second matrix for the projection matrix by using the motion data, the first matrix, and the second matrix calculated in the past. The auxiliary matrix calculation unit calculates the auxiliary matrix for the projection matrix by using the motion data, the first matrix, and the auxiliary matrix calculated in the past.
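The drawing step can be summarised as composing the three 3x3 matrices into a single projection and warping the second frame with it. The sketch below assumes the matrices are supplied as NumPy homographies (the names D, C and A, and their composition order, are placeholders for the first, second and auxiliary matrices) and uses OpenCV's inverse-map warp because the projection maps output-frame coordinates to second-frame coordinates.

```python
import cv2
import numpy as np

def draw_output_frame(second_frame, D, C, A, out_size):
    """Generate the output frame image from the second frame image.

    D : 3x3 first matrix (rolling shutter distortion component)
    C : 3x3 second matrix (translation/rotation component)
    A : 3x3 auxiliary matrix (remaining motion component)
    out_size : (width, height) of the output frame image
    """
    P = (D @ C @ A).astype(np.float64)   # composed projection matrix (ordering assumed)
    # P maps output-frame coordinates to second-frame coordinates, so warp with
    # the inverse-map flag (the matrix is used as the dst->src mapping).
    return cv2.warpPerspective(second_frame, P, out_size,
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```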


A method and apparatus for partially up/downscaling an image encoded on a macroblock basis. The method and apparatus perform operations of: storing the encoded image; creating map data from the bitstream of the encoded image to decode at least one macroblock of the encoded image; creating a shrunken image of a predetermined size based on the resolution of a display device; storing the map data and the shrunken image in association with the encoded image; outputting the shrunken image associated with the encoded image to be displayed based on a control request received from an input device; determining at least one macroblock to be decoded based on a display area of the shrunken image; partially decoding the encoded image for the determined macroblock using the map data; and outputting, to the display device, the image data of the display area of the partially decoded image.
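The macroblock-selection step reduces to coordinate arithmetic; below is a small sketch under the assumption of 16x16 macroblocks and a uniform shrink factor between the shrunken image and the encoded image (function and parameter names are illustrative).

```python
def macroblocks_for_display_area(display_rect, scale, mb_size=16):
    """Return the macroblock indices that must be decoded for a display area.

    display_rect : (x, y, w, h) of the display area on the shrunken image
    scale        : ratio of encoded-image resolution to shrunken-image resolution
    mb_size      : macroblock edge length in pixels (16 assumed)
    """
    x, y, w, h = display_rect
    fx, fy, fw, fh = x * scale, y * scale, w * scale, h * scale   # full-resolution area
    mb_x0, mb_y0 = int(fx // mb_size), int(fy // mb_size)
    mb_x1, mb_y1 = int((fx + fw) // mb_size), int((fy + fh) // mb_size)
    return [(mx, my)
            for my in range(mb_y0, mb_y1 + 1)
            for mx in range(mb_x0, mb_x1 + 1)]
```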


Thumbnails included in an area designated by a pinch gesture are all specified. If the thumbnails included in the designated area are not all displayed in the selected style (S2004, No), they are all displayed in the selected style (S2005) and the status is set to the selected state (S2006). If the thumbnails included in the designated area are all displayed in the selected style (S2004, Yes), they are all displayed in the unselected style (S2007) and the status is set to the unselected state (S2008). If the gesture then continues as a pinch-out or pinch-in gesture (S2010, No), the selected or unselected area is expanded or reduced depending on the status (S2014, S2015).
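The toggle in steps S2004-S2008 amounts to an all-or-nothing check over the thumbnails in the designated area; a minimal sketch with a hypothetical Thumbnail type is shown below.

```python
from dataclasses import dataclass

@dataclass
class Thumbnail:
    selected: bool = False

def toggle_selection(thumbnails_in_area):
    """Toggle the selection of all thumbnails inside the pinched area (S2004-S2008)."""
    if not all(t.selected for t in thumbnails_in_area):   # S2004: No
        for t in thumbnails_in_area:
            t.selected = True                             # S2005
        return "selected"                                 # S2006
    for t in thumbnails_in_area:                          # S2004: Yes
        t.selected = False                                # S2007
    return "unselected"                                   # S2008
```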


An image synthesis apparatus for generating a composite image by synthesizing a second image with a first image, includes an image acquisition unit configured to acquire first group images including the first image and a plurality of different resolution images corresponding to the first image, and second group images including the second image and a plurality of different resolution images corresponding to the second image, a frequency component calculation unit configured to calculate low-frequency components and high-frequency components from the first group images and the second group images, and an image synthesis unit configured to generate the composite image on the basis of the low-frequency components of the first group images and the high-frequency components of the second group images.
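A single-scale sketch of that combination, using a Gaussian blur to split each image into low- and high-frequency components; the patent's group images of different resolutions would repeat this per level, and the sigma value here is an arbitrary placeholder.

```python
import cv2
import numpy as np

def synthesize(first, second, sigma=5.0):
    """Combine the low frequencies of `first` with the high frequencies of `second`."""
    f = first.astype(np.float32)
    s = second.astype(np.float32)
    low_first = cv2.GaussianBlur(f, (0, 0), sigma)         # low-frequency components
    high_second = s - cv2.GaussianBlur(s, (0, 0), sigma)   # high-frequency components
    return np.clip(low_first + high_second, 0, 255).astype(np.uint8)
```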


An image composition device composites a no-flash image including a subject captured under ambient light and a flash image including the subject captured by emitting flash light to generate a composite image including the subject. The image composition device includes an input unit that enables the no-flash image and the flash image to be input, a correction unit that corrects the no-flash image and the flash image input from the input unit, and a composition unit that composites the no-flash image and the flash image output from the correction unit to generate a composite image. The correction unit corrects a color temperature of the no-flash image.
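A minimal sketch of that flow, using a grey-world gain as a stand-in for the colour-temperature correction of the no-flash image and a fixed blend weight (both are assumptions, not the claimed method).

```python
import numpy as np

def composite_flash_pair(no_flash, flash, weight=0.5):
    """Correct the no-flash image's colour temperature and blend it with the flash image."""
    nf = no_flash.astype(np.float32)
    means = nf.reshape(-1, 3).mean(axis=0)          # per-channel means
    gains = means.mean() / means                    # grey-world white-balance gains
    corrected = np.clip(nf * gains, 0, 255)
    out = weight * corrected + (1.0 - weight) * flash.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```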
