
Shih Y.,Massachusetts Institute of Technology | Paris S.,Adobe | Durand F.,Massachusetts Institute of Technology | Freeman W.T.,Massachusetts Institute of Technology
ACM Transactions on Graphics | Year: 2013

We introduce "time hallucination": synthesizing a plausible image at a different time of day from an input image. This challenging task often requires dramatically altering the color appearance of the picture. In this paper, we introduce the first data-driven approach to automatically creating a plausible-looking photo that appears as though it were taken at a different time of day. The time of day is specified by a semantic time label, such as "night". Our approach relies on a database of time-lapse videos of various scenes. These videos provide rich information about the variations in color appearance of a scene throughout the day. Our method transfers the color appearance from videos with a similar scene as the input photo. We propose a locally affine model learned from the video for the transfer, allowing our model to synthesize new color data while retaining image details. We show that this model can hallucinate a wide range of different times of day. The model generates a large sparse linear system, which can be solved by off-the-shelf solvers. We validate our methods by synthesizing transforming photos of various outdoor scenes to four times of interest: daytime, the golden hour, the blue hour, and nighttime. Source


Kohli P.,Microsoft | Paris S.,Adobe
Computer Graphics Forum | Year: 2012

Optical flow is a critical component of video editing applications, such as object tracking, segmentation, and selection. In this paper, we propose an optical flow algorithm called SimpleFlow whose running times increase sublinearly in the number of pixels. Central to our approach is a probabilistic representation of the motion flow that is computed using only local evidence, without resorting to global optimization. To estimate the flow in image regions where the motion is smooth, we use only a sparse set of samples, thereby avoiding the expensive computation inherent in traditional dense algorithms. We show that our results can be used as-is for a variety of video editing tasks. For applications where accuracy is paramount, we use our result to bootstrap a global optimization. This significantly reduces the running times of such methods without sacrificing accuracy. We also demonstrate that the SimpleFlow algorithm can process HD and 4K footage in reasonable times. © 2012 The Author(s).
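As a rough illustration of the local-evidence idea (not the authors' implementation), the sketch below, assuming NumPy and a hypothetical local_flow helper, estimates the displacement at a single interior pixel by comparing color-similarity-weighted windows between two frames. SimpleFlow additionally evaluates only a sparse set of such pixels in smooth regions and maintains a probabilistic flow estimate rather than a single best displacement.

```python
import numpy as np

def local_flow(frame0, frame1, x, y, radius=4, max_disp=8, sigma_c=0.08):
    """Estimate the flow at interior pixel (x, y) from purely local evidence.

    frame0, frame1: (H, W, 3) float images in [0, 1]; (x, y) is assumed to be
    at least `radius` pixels from the border of frame0. Compares the window of
    frame0 around (x, y) against shifted windows of frame1, weighting each
    neighbor by its color similarity to the center pixel, and returns the
    displacement (dx, dy) with the lowest weighted SSD.
    """
    h, w, _ = frame0.shape
    patch0 = frame0[y - radius:y + radius + 1, x - radius:x + radius + 1]
    center = frame0[y, x]
    # Bilateral-style weights: neighbors that look like the center pixel count more.
    weights = np.exp(-np.sum((patch0 - center) ** 2, axis=-1) / (2 * sigma_c ** 2))

    best, best_energy = (0, 0), np.inf
    for dy in range(-max_disp, max_disp + 1):
        for dx in range(-max_disp, max_disp + 1):
            yy, xx = y + dy, x + dx
            if (yy - radius < 0 or yy + radius >= h or
                    xx - radius < 0 or xx + radius >= w):
                continue  # displaced window would fall outside frame1
            patch1 = frame1[yy - radius:yy + radius + 1,
                            xx - radius:xx + radius + 1]
            energy = np.sum(weights * np.sum((patch0 - patch1) ** 2, axis=-1))
            if energy < best_energy:
                best_energy, best = energy, (dx, dy)
    return best
```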


Bai J.,University of California at Berkeley | Agarwala A.,Adobe | Agrawala M.,University of California at Berkeley | Ramamoorthi R.,University of California at Berkeley
ACM Transactions on Graphics | Year: 2012

We present a semi-automated technique for selectively de-animating video to remove the large-scale motions of one or more objects so that other motions are easier to see. The user draws strokes to indicate the regions of the video that should be immobilized, and our algorithm warps the video to remove the large-scale motion of these regions while leaving finer-scale, relative motions intact. However, such warps may introduce unnatural motions in previously motionless areas, such as background regions. We therefore use a graph-cut-based optimization to composite the warped video regions with still frames from the input video; we also optionally loop the output in a seamless manner. Our technique enables a number of applications such as clearer motion visualization, simpler creation of artistic cinemagraphs (photos that include looping motions in some regions), and new ways to edit appearance and complicated motion paths in video by manipulating a de-animated representation. We demonstrate the success of our technique with a number of motion visualizations, cinemagraphs, and video editing examples created from a variety of short input videos, as well as visual and numerical comparisons to previous techniques. © 2012 ACM 0730-0301/2012/08-ART66.
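The compositing step can be viewed as a per-pixel binary labeling solved with graph cuts. The following sketch, assuming NumPy and the PyMaxflow library, with a hypothetical composite_labels helper and made-up cost weights, chooses between the warped layer and a still frame: the data term follows the user strokes, and the smoothness term pushes seams into regions where the two layers already agree. The paper's actual energy also handles temporal coherence and looping, which are omitted here.

```python
import numpy as np
import maxflow  # PyMaxflow (pip install PyMaxflow); assumed available

def composite_labels(warped, still, immobilize_mask, lam=2.0):
    """Per-pixel choice between the warped video layer and a still frame.

    warped, still:   (H, W, 3) float frames in [0, 1].
    immobilize_mask: (H, W) bool, True inside the user's strokes, where the
                     de-animated (warped) content should appear.
    Returns a boolean label image, True where the warped layer is used.
    """
    h, w, _ = warped.shape
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes((h, w))

    # Data term (illustrative weights): taking the warped layer is free inside
    # the strokes and mildly penalized elsewhere; taking the still frame is
    # heavily penalized inside the strokes.
    cost_warped = np.where(immobilize_mask, 0.0, 1.0)
    cost_still = np.where(immobilize_mask, 10.0, 0.0)
    g.add_grid_tedges(nodes, cost_warped, cost_still)

    # Smoothness term: seams are cheap where the two layers already agree,
    # so the cut avoids visible transitions.
    agreement_cost = lam * (np.sum((warped - still) ** 2, axis=-1) + 1e-3)
    g.add_grid_edges(nodes, agreement_cost)

    g.maxflow()
    return g.get_grid_segments(nodes)  # True = pixel taken from the warped layer
```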


Moon B.,KAIST | Carr N.,Adobe | Yoon S.-E.,KAIST
ACM Transactions on Graphics | Year: 2014

Monte Carlo ray tracing is considered one of the most effective techniques for rendering photo-realistic imagery, but it requires a large number of ray samples to produce converged, or even visually pleasing, images. We develop a novel image-plane adaptive sampling and reconstruction method based on local regression theory. We propose a local space estimation process that enables the local regression by robustly handling noisy, high-dimensional features. Given the local regression on the estimated local space, we provide a two-step optimization process that selects the feature bandwidths locally in a data-driven way. Local weighted regression is then applied with the computed bandwidths to produce a smooth image reconstruction with well-preserved details. We derive an error analysis to guide our adaptive sampling process in the local space. We demonstrate that our method produces more accurate and visually pleasing results than state-of-the-art techniques across a wide range of rendering effects. Our method also allows users to employ an arbitrary set of features, including noisy ones, and robustly selects a subset of them, ignoring noisy features and decorrelating the rest for higher quality. © 2014 ACM 0730-0301/2014/08-ART170 $15.00.
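The reconstruction step can be sketched as locally weighted linear regression, assuming NumPy and hypothetical inputs: noisy per-pixel colors in a window are regressed against auxiliary features (e.g. depth and normals), with one bandwidth per feature. The paper's contribution lies in estimating the local feature space and selecting those bandwidths via a data-driven two-step optimization; in this sketch they are simply taken as given.

```python
import numpy as np

def weighted_local_regression(colors, features, center_idx, bandwidths, eps=1e-4):
    """Reconstruct one pixel by locally weighted linear regression.

    colors:     (N, 3) noisy Monte Carlo pixel colors in a local window.
    features:   (N, D) per-pixel features (e.g. depth, normals, textures).
    center_idx: row index of the pixel being reconstructed.
    bandwidths: (D,) per-feature bandwidths (hypothetical inputs here).
    Returns the smoothed (3,) color at the center pixel.
    """
    delta = features - features[center_idx]                    # feature offsets to the center
    w = np.exp(-0.5 * np.sum((delta / bandwidths) ** 2, axis=1))
    X = np.hstack([np.ones((features.shape[0], 1)), delta])    # local linear model in feature space
    W = np.diag(w)
    # Ridge-regularized weighted least squares; the intercept is the reconstruction.
    beta = np.linalg.solve(X.T @ W @ X + eps * np.eye(X.shape[1]),
                           X.T @ W @ colors)
    return beta[0]
```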


Tao M.W.,University of California at Berkeley | Hadap S.,Adobe | Malik J.,University of California at Berkeley | Ramamoorthi R.,University of California at Berkeley
Proceedings of the IEEE International Conference on Computer Vision | Year: 2013

Light-field cameras have recently become available to the consumer market. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth cues from both defocus and correspondence are available simultaneously in a single capture. Previously, defocus could be achieved only through multiple image exposures focused at different depths, while correspondence cues needed multiple exposures at different viewpoints or multiple cameras; moreover, both cues could not easily be obtained together. In this paper, we present a novel, simple, and principled algorithm that computes dense depth estimates by combining both defocus and correspondence depth cues. We analyze the x-u 2D epipolar image (EPI), where by convention we assume the spatial x coordinate is horizontal and the angular u coordinate is vertical (our final algorithm uses the full 4D EPI). We show that defocus depth cues are obtained by computing the horizontal (spatial) variance after vertical (angular) integration, and correspondence depth cues by computing the vertical (angular) variance. We then show how to combine the two cues into a high-quality depth map, suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction. © 2013 IEEE.
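A simplified 2D sketch of the two cues, assuming NumPy and a hypothetical depth_cues_from_epi helper: for each candidate depth (shear slope), the EPI is sheared so that samples from that depth align; the defocus cue is then the spatial contrast of the angularly integrated (refocused) row, and the correspondence cue is the angular variance. The paper works with the full 4D EPI and combines the cues into a confidence-weighted depth map, which this sketch does not attempt.

```python
import numpy as np

def depth_cues_from_epi(epi, shears):
    """Compute defocus and correspondence responses from a 2D x-u EPI.

    epi:    (U, X) grayscale epipolar image (angular dimension u vertical,
            spatial dimension x horizontal), values in [0, 1].
    shears: iterable of candidate slopes, one per depth hypothesis.
    Returns (defocus, correspondence), each of shape (len(shears), X).
    """
    U, X = epi.shape
    u0 = (U - 1) / 2.0
    xs = np.arange(X)
    defocus = np.zeros((len(shears), X))
    corresp = np.zeros((len(shears), X))
    for i, s in enumerate(shears):
        # Shear every angular row so samples from the hypothesized depth align.
        sheared = np.empty_like(epi)
        for u in range(U):
            shift = s * (u - u0)
            sheared[u] = np.interp(xs, xs - shift, epi[u])
        refocused = sheared.mean(axis=0)            # angular (vertical) integration
        # Defocus cue: spatial contrast of the refocused row (high when in focus).
        defocus[i] = np.abs(np.gradient(refocused))
        # Correspondence cue: angular variance (low when the views agree).
        corresp[i] = sheared.var(axis=0)
    return defocus, corresp
```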
