Okabe M.,University of Electro-Communications | Okabe M.,Japan Science and Technology Agency | Dobashi Y.,Hokkaido University | Dobashi Y.,Japan Science and Technology Agency | And 3 more authors.
ACM Transactions on Graphics | Year: 2015

We propose a method of three-dimensional (3D) modeling of volumetric fluid phenomena from sparse multi-view images (e.g., only a single-view input or a pair of front- and side-view inputs). The volume determined from such sparse inputs using previous methods appears blurry and unnatural from novel views; our method instead preserves a natural appearance from novel viewing angles by transferring appearance information from the input images to those angles. For appearance information, we use histograms of image intensities and steerable coefficients. We formulate volume modeling as an energy minimization problem with statistical hard constraints, which is solved using an expectation maximization (EM)-like iterative algorithm. Our algorithm begins with a rough estimate of the initial volume modeled from the input images, followed by an iterative process in which we first render images of the current volume from novel viewing angles. We then modify the rendered images by transferring appearance information from the input images, and we model the improved volume from the modified images. We iterate these operations until the volume converges. We demonstrate that our method successfully produces natural-looking volume sequences of fluids (i.e., fire, smoke, explosions, and a water splash) from sparse multi-view videos. To create production-ready fluid animations, we further propose a method of rendering and editing fluids using a commercially available fluid simulator. Copyright 2015 ACM.
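The abstract outlines an iterative render/transfer/re-model loop. Below is a minimal, heavily simplified sketch of that idea, assuming orthographic projection for rendering, plain histogram matching for the appearance transfer (the paper's steerable coefficients and energy minimization are omitted), and a multiplicative back-projection update; all function names are hypothetical and this is not the authors' code.

```python
# Minimal sketch of the EM-like loop described above (not the authors' code).
# Assumptions: the volume is a density grid, "rendering" is an orthographic
# line-of-sight integral, and appearance transfer is plain histogram matching
# against the single front-view input.
import numpy as np

def render(volume, axis):
    """Orthographic projection: integrate density along one axis."""
    return volume.sum(axis=axis)

def match_histogram(image, reference):
    """Transfer the intensity histogram of `reference` onto `image` by rank matching."""
    src = image.ravel()
    order = np.argsort(src)
    matched = np.empty_like(src)
    matched[order] = np.sort(reference.ravel())
    return matched.reshape(image.shape)

def update_volume(volume, target, axis):
    """Multiplicative correction so the projection along `axis` matches `target`."""
    current = render(volume, axis)
    ratio = target / np.maximum(current, 1e-6)
    return volume * np.expand_dims(ratio, axis)

def model_volume(front_view, size=64, iterations=20, novel_axes=(1,)):
    volume = np.full((size, size, size), front_view.mean() / size)  # rough initial volume
    for _ in range(iterations):
        volume = update_volume(volume, front_view, axis=0)           # fit the input view
        for axis in novel_axes:                                      # refine novel views
            novel = render(volume, axis)
            novel = match_histogram(novel, front_view)               # appearance transfer
            volume = update_volume(volume, novel, axis=axis)
    return volume

volume = model_volume(np.random.rand(64, 64))  # e.g. a single smoke image as input
```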


Baxter A.L.,Georgia Regents University | Watcha M.F.,Baylor College of Medicine | Baxter W.V.,OLM Digital Inc. | Leong T.,Emory University | Wyatt M.M.,Baylor College of Medicine
Pediatrics | Year: 2011

OBJECTIVE: The lack of a widely used, validated measure limits pediatric nausea management. The goal of this study was to create and validate a pictorial scale with regular incremental levels between scores depicting increasing nausea intensity. METHODS: A pictorial nausea scale of 0 to 10 with 6 faces (the Baxter Retching Faces [BARF] scale) was developed in 3 stages. The BARF scale was validated in emergency department patients with vomiting and in healthy patients undergoing day surgery procedures. Patients were presented with visual analog scales for nausea and pain, the pictorial Faces Pain Scale-Revised, and the BARF scale. Patients receiving opioid analgesics or antiemetic agents had their pain and nausea assessed before and 30 minutes after therapy. Spearman's ρ correlation coefficients were calculated. A Wilcoxon matched-pair rank test compared pain and nausea scores before and after antiemetic therapy. RESULTS: Thirty oncology patients and 15 nurses participated in the development of the scale, and 127 patients (52, emergency department; 75, day surgery) ages 7 to 18 years participated in the validation. The Spearman ρ correlation coefficient for the first paired BARF and visual analog nausea scores was 0.93. Visual analog nausea and BARF scores (P = .20) were significantly higher in patients requiring antiemetic agents and decreased significantly after treatment, whereas posttreatment pain scores (P = .47) for patients receiving only antiemetic agents did not decrease. CONCLUSIONS: We describe the development of a pictorial scale with beginning evidence of construct validity for a self-report assessment of the severity of pediatric nausea. The scale had convergent and discriminant validity, along with an ability to detect change after treatment. Copyright © 2011 by the American Academy of Pediatrics.
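As a purely illustrative aside, the two analyses named in the abstract (Spearman rank correlation between paired BARF and visual analog nausea scores, and a Wilcoxon matched-pair test on pre- versus post-antiemetic scores) can be run on hypothetical data with scipy; the numbers below are invented for the sketch and are not the study's data.

```python
# Illustrative sketch of the analyses named above, with made-up scores.
import numpy as np
from scipy.stats import spearmanr, wilcoxon

barf = np.array([0, 2, 4, 6, 8, 10, 4, 6])        # hypothetical paired BARF scores
vas_nausea = np.array([5, 18, 42, 55, 83, 95, 38, 61])  # hypothetical VAS nausea scores

rho, p_rho = spearmanr(barf, vas_nausea)
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")

pre = np.array([8, 6, 10, 6, 8, 4])               # hypothetical pre-antiemetic BARF
post = np.array([2, 2, 4, 0, 4, 2])               # hypothetical post-antiemetic BARF
stat, p_change = wilcoxon(pre, post)              # matched-pair signed-rank test
print(f"Wilcoxon W = {stat:.1f} (p = {p_change:.3f})")
```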


Kawamoto S.,Japan Advanced Institute of Science and Technology | Kawamoto S.,Japan Advanced Telecommunications Research Institute International | Yotsukura T.,OLM Digital Inc. | Nakamura S.,Nara Institute of Science and Technology
APSIPA ASC 2011 - Asia-Pacific Signal and Information Processing Association Annual Summit and Conference 2011 | Year: 2011

This paper gives an overview of our lip-synch animation production framework, which provides practical tools for efficiently making 3D animation based on pre-scoring. Our framework is simple and easy to use, so it can be applied to construct various systems: a management tool for making lip-synch animation, a batch processing tool for mass production, Autodesk Maya plug-in software for production workplaces, and amusement systems. We also demonstrate the practicality of our framework through several applications; it worked well in cartoon-animation production at actual workplaces.


Kawamoto S.-I.,Japan Advanced Institute of Science and Technology | Kawamoto S.-I.,Japan National Institute of Information and Communications Technology | Yotsukura T.,OLM Digital Inc. | Nakamura S.,Japan National Institute of Information and Communications Technology | Morishima S.,Waseda University
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

The paper describes voice assignment techniques for synchronized scenario speech output in an instant casting movie system that enables anyone to be a movie star using his or her own voice and face. Two prototype systems were implemented, and both systems worked well for various participants, ranging from children to the elderly. © 2011 Springer-Verlag.


Ogaki S.,OLM Digital Inc.
SIGGRAPH Asia 2015 RDVG in the Video Game Industry, SA 2015 | Year: 2015

Recently our studio started using the Arnold renderer to create cinematics for games. Optimizing shader parameters for a modern renderer was not an easy task for our artists because they were not familiar with it. In this paper we describe the challenges we faced and how we developed our shaders to address them.


Anjyo K.,OLM Digital Inc. | Ochiai H.,Kyushu University
Synthesis Lectures on Computer Graphics and Animation | Year: 2014

This synthesis lecture presents an intuitive introduction to the mathematics of motion and deformation in computer graphics. Starting with familiar concepts in graphics, such as Euler angles, quaternions, and affine transformations, we illustrate that the mathematical theory behind these concepts enables us to develop techniques for the efficient and effective creation of computer animation. This book therefore serves as a good guidepost to mathematics (differential geometry and Lie theory) for students of geometric modeling and animation in computer graphics. Experienced developers and researchers will also benefit from this book, since it gives a comprehensive overview of mathematical approaches that are particularly useful in character modeling, deformation, and animation. Copyright © 2014 by Morgan & Claypool.
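By way of illustration only (this is generic textbook material, not code from the lecture), the sketch below shows two of the rotation representations the lecture starts from: converting Z-Y-X Euler angles to a unit quaternion, and blending two rotations with spherical linear interpolation.

```python
# Generic rotation-representation sketch: Euler angles vs. unit quaternions.
import numpy as np

def euler_zyx_to_quaternion(yaw, pitch, roll):
    """Convert Z-Y-X Euler angles (radians) to a unit quaternion (w, x, y, z)."""
    cy, sy = np.cos(yaw / 2), np.sin(yaw / 2)
    cp, sp = np.cos(pitch / 2), np.sin(pitch / 2)
    cr, sr = np.cos(roll / 2), np.sin(roll / 2)
    return np.array([
        cr * cp * cy + sr * sp * sy,
        sr * cp * cy - cr * sp * sy,
        cr * sp * cy + sr * cp * sy,
        cr * cp * sy - sr * sp * cy,
    ])

def slerp(q0, q1, t):
    """Spherical linear interpolation, the standard way to blend two rotations."""
    dot = np.clip(np.dot(q0, q1), -1.0, 1.0)
    if dot < 0.0:                       # take the shorter arc
        q1, dot = -q1, -dot
    theta = np.arccos(dot)
    if theta < 1e-6:
        return q0
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

q_a = euler_zyx_to_quaternion(0.0, 0.0, 0.0)
q_b = euler_zyx_to_quaternion(np.pi / 2, 0.0, 0.0)
print(slerp(q_a, q_b, 0.5))             # halfway rotation about the Z axis
```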


Kuwahara D.,Waseda University | Maejima A.,Waseda University | Maejima A.,OLM Digital Inc. | Morishima S.,Waseda Research Institute for Science and Engineering
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

Facial aging and rejuvenation simulation is a challenging topic because keeping personal characteristics at every age is a difficult problem. In this demonstration, we simulate facial aging and rejuvenation from only a single photograph. Our system alters an input face image into an aged face by reconstructing every facial component from a face database for the target age. Appropriate facial component images are selected by a special similarity measure between the current age and the target age to keep personal characteristics as much as possible. Our system successfully generated aged and rejuvenated faces with age-related features such as spots, wrinkles, and sagging while keeping personal characteristics throughout all ages. © Springer International Publishing Switzerland 2015.
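A rough sketch of the component-selection idea follows: for each facial component of the input, pick the database subject whose current-age patch looks most like the input and reuse that subject's target-age patch. A plain L2 distance stands in for the paper's special similarity measure, the database is synthetic, and all names are hypothetical.

```python
# Rough sketch of age-aware component selection (not the paper's implementation).
import numpy as np

def select_component(input_patch, database, current_age, target_age):
    """Pick the target-age patch whose current-age counterpart looks most like the input."""
    best, best_dist = None, np.inf
    for entry in database:                              # one entry per database subject
        dist = np.linalg.norm(entry[current_age] - input_patch)
        if dist < best_dist:
            best, best_dist = entry[target_age], dist
    return best

rng = np.random.default_rng(0)
# Synthetic database: 50 subjects, each with a 32x32 grayscale "eye" patch at two ages.
database = [{30: rng.random((32, 32)), 60: rng.random((32, 32))} for _ in range(50)]
input_eye = rng.random((32, 32))                        # patch cropped from the input photo
aged_eye = select_component(input_eye, database, current_age=30, target_age=60)
```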


Vi C.T.,University of Bristol | Takashima K.,Tohoku University | Yokoyama H.,Tohoku University | Liu G.,OLM Digital Inc. | And 3 more authors.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2013

We propose D-FLIP, a novel algorithm that dynamically displays a set of digital photos using different principles for organizing them. A variety of requirements for photo arrangement can be flexibly replaced or added through interaction, and the results are continuously and dynamically displayed. D-FLIP uses an approach based on combinatorial optimization and emergent computation, where geometric parameters such as location, size, and photo angle are treated as functions of time, dynamically determined by local relationships among adjacent photos at every instant. As a consequence, the global layout of all photos varies automatically. We first present examples of photograph behaviors that demonstrate the algorithm and then investigate users' task engagement using EEG in the context of story preparation and telling. The results show that D-FLIP requires less task engagement and mental effort to support storytelling. © Springer International Publishing 2013.
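The "local rules produce the global layout" idea lends itself to a small illustration. The sketch below is not the D-FLIP algorithm; it simply updates each photo's position every frame from pairwise relations with nearby photos (overlap repulsion) plus a weak pull toward a requested arrangement, so the overall layout emerges over time.

```python
# Emergent photo layout from local pairwise rules (illustration only, not D-FLIP).
import numpy as np

def step_layout(positions, targets, radius=1.0, dt=0.05):
    """One time step: `positions` and `targets` are (n, 2) arrays of photo centers."""
    n = len(positions)
    forces = 0.2 * (targets - positions)                 # weak pull toward requested arrangement
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = positions[i] - positions[j]
            dist = np.linalg.norm(d) + 1e-6
            if dist < radius:                            # repel overlapping neighbours
                forces[i] += (radius - dist) * d / dist
    return positions + dt * forces

rng = np.random.default_rng(1)
positions = rng.random((30, 2)) * 10                     # 30 photos scattered at random
targets = np.stack(np.meshgrid(np.arange(6), np.arange(5)), -1).reshape(-1, 2).astype(float) * 2
for _ in range(200):                                     # animate until the layout settles
    positions = step_layout(positions, targets)
```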


Lopez-Moreno J.,University of Zaragoza | Jimenez J.,University of Zaragoza | Hadap S.,Adobe Systems | Reinhard E.,University of Bristol | And 2 more authors.
NPAR Symposium on Non-Photorealistic Animation and Rendering | Year: 2010

Recent works in image editing are opening up new possibilities to manipulate and enhance input images. Within this context, we leverage well-known characteristics of human perception along with a simple depth approximation algorithm to creatively relight images for the purpose of generating non-photorealistic renditions that would be difficult to achieve with existing methods. Our real-time implementation on graphics hardware allows the user to efficiently explore artistic possibilities for each image. We show results produced with four different styles, demonstrating the versatility of our approach, and validate our assumptions and simplifications by means of a user study. © 2010 ACM.


Lopez-Moreno J.,University of Zaragoza | Jimenez J.,University of Zaragoza | Hadap S.,Adobe Systems | Anjyo K.,OLM Digital Inc. | And 2 more authors.
Computers and Graphics (Pergamon) | Year: 2011

Recent works in image editing are opening up new possibilities to manipulate and enhance input images. Within this context, we leverage well-known characteristics of human perception along with a simple depth approximation algorithm to generate non-photorealistic renditions that would be difficult to achieve with existing methods. Once a perceptually plausible depth map is obtained from the input image, we show how simple algorithms yield powerful new depictions of such an image. Additionally, we show how artistic manipulation of depth maps can be used to create novel non-photorealistic versions, for which we provide the user with an intuitive interface. Our real-time implementation on graphics hardware allows the user to efficiently explore artistic possibilities for each image. We show results produced with six different styles, demonstrating the versatility of our approach, and validate our assumptions and simplifications by means of a user study. © 2010 Elsevier Ltd. All rights reserved.
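The depth-driven relighting idea can be illustrated with a deliberately crude sketch: approximate a depth map directly from image luminance (a stand-in assumption, not the paper's perceptual depth approximation), derive surface normals from the depth gradients, and re-shade the image with a user-chosen light. Everything below is hypothetical illustration code.

```python
# Crude depth-based relighting sketch (not the paper's method).
import numpy as np

def relight(luminance, light_dir=(0.5, 0.5, 0.7), depth_scale=8.0):
    """luminance: (h, w) array in [0, 1]; returns a Lambertian re-shaded image."""
    depth = depth_scale * luminance                       # brighter pixels assumed closer
    gy, gx = np.gradient(depth)
    normals = np.dstack([-gx, -gy, np.ones_like(depth)])  # normals from the depth gradient
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, float)
    light /= np.linalg.norm(light)
    shading = np.clip(normals @ light, 0.0, 1.0)          # Lambertian term per pixel
    return shading * luminance                            # modulate the original image

img = np.random.rand(240, 320)                            # stand-in for an input photograph
stylized = relight(img, light_dir=(-0.8, 0.2, 0.6))       # relit from the upper left
```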
