Disney Research

Pittsburgh, PA, United States


News Article | May 16, 2017
Site: www.eurekalert.org

New method makes it easier to reuse film assets for games and virtual reality

Cinema-quality animations and virtual reality graphics that need to be rendered in real time are often mutually exclusive categories, but Disney Research has developed a new process that transforms high-resolution animated content into a novel video format to enable immersive viewing. The end-to-end solution the researchers devised will make it easier to repurpose animated film assets for use in video games and location-based VR venues. Viewers wearing head-mounted displays can interact with movie animations in a new way, based on the position and orientation of their heads.

"This new solution promises huge savings on the most costly aspects of interactive media production," said Professor Kenny Mitchell, senior research scientist. The researchers will present their real-time rendering method May 16 at the Graphics Interface 2017 conference in Edmonton, Alberta.

"We've seen a resurgence in interest in virtual reality in recent years as companies have released a number of head-mounted displays for consumers," said Professor Markus Gross, vice president at Disney Research. "The subsequent demand for VR and other immersive content is driving innovations such as this ground-breaking set of methods for reusing rendered animated films."

Virtual reality scenes must be rendered in real time, and that performance requirement means using animations that are less complex than the highly detailed animations typical of feature films. When artists produce an interactive experience tied in to a film, or a related video game, they therefore have to convert the film animations into a lower-quality form compatible with real-time rendering or game engines. That process is both laborious and expensive.

Mitchell and his colleagues opt instead for an approach that relies on automated pre-processing: the 3D scenes are rendered from a number of camera positions calculated to provide the best viewpoints for all of the surfaces in the scene with as few cameras as possible, and the results are encoded in a modular video format. This content can then be rendered in real time from an arbitrary point of view, allowing for motion parallax, head tilting and rotations.

"This process enables consumption of immersive pre-rendered video in six degrees of freedom using a head-mounted display," said Babis Koniaris, a post-doctoral associate on the team. "It can also be used for rendering film-quality visuals for video games."

In addition to Mitchell and Koniaris, the research team included Maggie Kosek and David Sinclair. The research was partly supported by the Innovate UK project #102684, titled OSCIR. Combining creativity and innovation, this research continues Disney's rich legacy of leveraging technology to enhance the tools and systems of tomorrow. For more information on the process, including a video showing example scenes, visit the project web site at http://www. .

Disney Research is a network of research laboratories supporting The Walt Disney Company. Its purpose is to pursue scientific and technological innovation to advance the company's broad media and entertainment efforts. Vice President Markus Gross manages Disney Research facilities in Los Angeles, Pittsburgh and Zürich, and works closely with the Pixar and ILM research groups in the San Francisco Bay Area.
Research topics include computer graphics, animation, video processing, computer vision, robotics, wireless & mobile computing, human-computer interaction, displays, behavioral economics, and machine learning.
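
The article does not spell out how the camera positions are chosen or how the modular video format is laid out. Purely as an illustration of the stated goal, covering every visible surface with as few pre-rendered views as possible, the Python sketch below applies a greedy set-cover heuristic; the visibility matrix, the choose_cameras helper and the random example are hypothetical and not part of Disney's pipeline.

```python
import numpy as np

def choose_cameras(visibility, coverage=1.0):
    """Greedy set cover: repeatedly pick the candidate camera that sees the most
    still-uncovered surface patches, until the requested coverage is reached.

    visibility: bool array of shape (n_cameras, n_patches); visibility[c, p] is
    True if candidate camera c sees surface patch p (e.g. from a visibility pre-pass).
    """
    n_cameras, n_patches = visibility.shape
    covered = np.zeros(n_patches, dtype=bool)
    chosen = []
    while covered.mean() < coverage:
        gains = (visibility & ~covered).sum(axis=1)   # newly covered patches per camera
        best = int(np.argmax(gains))
        if gains[best] == 0:                          # nothing left that any camera can see
            break
        chosen.append(best)
        covered |= visibility[best]
    return chosen, covered.mean()

# Hypothetical example: 50 candidate camera positions, 500 surface patches.
rng = np.random.default_rng(0)
vis = rng.random((50, 500)) < 0.15
cameras, fraction = choose_cameras(vis)
print(f"{len(cameras)} cameras cover {fraction:.0%} of the surface patches")
```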



CHICAGO--(BUSINESS WIRE)--SIGGRAPH 2017, the world's leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques, announces the acceptance of over 125 technical papers, which will be presented during this year's conference. SIGGRAPH 2017 will mark the 44th International Conference and Exhibition on Computer Graphics and Interactive Techniques, and will be held 30 July–3 August 2017 in Los Angeles.

Submissions to the Technical Papers program are received from around the world and feature high-quality, never-before-seen scholarly work. Those who submit technical papers are held to extremely high standards in order to qualify. SIGGRAPH 2017 accepted 127 juried technical papers (out of 439 submissions) for this year's showcase, an acceptance rate of 28 percent. Forty papers from ACM Transactions on Graphics (TOG), the foremost peer-reviewed journal in the graphics world, will also be presented. As per SIGGRAPH tradition, the papers were chosen by a highly qualified peer jury comprised of members from academia alongside a number of field experts. For more information on the Technical Papers program and this year's selections, visit s2017.SIGGRAPH.org/technical-papers, or watch the SIGGRAPH 2017 Technical Papers Preview Trailer on YouTube.

"Among the trends we noticed this year was that research in core topics, such as geometry processing or fluid simulation, continues while the field itself broadens and matures," SIGGRAPH 2017 Technical Papers Program Chair Marie-Paule Cani said. "The 14 accepted papers on fabrication now tackle the creation of animated objects as well as of static structures. Machine learning methods are being applied to perception and extended to many content synthesis applications. And topics such as sound processing and synthesis, along with computational cameras and displays, open novel and exciting new directions."

Of the juried papers, the percentage breakdown based on topic area is as follows: 30% modeling, 25% animation and simulation, 25% imaging, 10% rendering, 4% perception, 3% sound, and 3% computational cameras and displays.

Selected highlights from the accepted papers:

Clebsch maps encode vector fields, such as those coming from fluid simulations, in the form of a function that encapsulates information about the field in an easily accessible manner. For example, vortex lines and tubes can be found by iso-contouring. This paper provides an algorithm for finding such maps.

Authors: Andre Pradhana Tampubolon, University of California, Los Angeles; Theodore Gast, University of California, Los Angeles; Gergely Klar, DreamWorks Animation; Chuyuan Fu, University of California, Los Angeles; Joseph Teran, Walt Disney Animation Studios, Disney Research, University of California, Los Angeles; Chenfanfu Jiang, University of California, Los Angeles; and Ken Museth, DreamWorks Animation. This multi-species model for simulation of gravity-driven landslides and debris flows with porous sand and water interactions uses the material point method and mixture theory to describe individual phases coupled through a momentum exchange term.

Authors: Richard Zhang, University of California, Berkeley; Jun-Yan Zhu, University of California, Berkeley; Phillip Isola, University of California, Berkeley; Xinyang Geng, University of California, Berkeley; Angela S. Lin, University of California, Berkeley; Yu Tianhe, University of California, Berkeley; and Alexei A. Efros, University of California, Berkeley. This paper proposes a deep learning approach for user-guided image colorization. The system directly maps a grayscale image, along with sparse, local user "hints," to an output colorization. The CNN propagates user edits by fusing low-level cues with high-level semantic information learned from large-scale data.

Authors: Kfir Aberman, Tel Aviv University, Advanced Innovation Center for Future Visual Entertainment; Oren Katzir, Tel Aviv University, Advanced Innovation Center for Future Visual Entertainment; Qiang Zhou, Shandong University; Zegang Luo, Shandong University; Andrei Sharf, Advanced Innovation Center for Future Visual Entertainment, Ben-Gurion University of the Negev; Chen Greif, The University of British Columbia; Baoquan Chen, Shandong University; and Daniel Cohen-Or, Tel Aviv University. This paper presents a 3D acquisition and reconstruction method based on Archimedes' submerged-volume equality, employing fluid displacement as the shape sensor. The liquid needs no line of sight: it penetrates cavities and hidden parts, as well as transparent and glossy materials, thus bypassing the visibility and optical limitations of scanning devices.

Authors: Desai Chen, Massachusetts Institute of Technology; David Levin, University of Toronto; Wojciech Matusik, Massachusetts Institute of Technology; and Danny Kaufman, Adobe Research. This paper presents a simulation-driven optimization framework that, for the first time, automates the design of highly dynamic mechanisms. The key contributions are a method for identifying fabricated material properties for efficient predictive simulation, a dynamics-aware coarsening technique for finite-element analysis, and a material-aware impact response model.

Registration is now open for SIGGRAPH 2017. To view badge levels and pricing, visit the conference website. Early registration savings end 9 June 2017.

The annual SIGGRAPH conference is a five-day interdisciplinary educational experience in the latest computer graphics and interactive techniques, including a three-day commercial exhibition that attracts hundreds of companies from around the world. The conference also hosts the international SIGGRAPH Computer Animation Festival, showcasing works from the world's most innovative and accomplished digital film and video creators. Juried and curated content includes outstanding achievements in time-based art, scientific visualization, visual effects, real-time graphics, and narrative shorts. SIGGRAPH 2017 will take place from 30 July–3 August 2017 in Los Angeles. Visit the SIGGRAPH 2017 website or follow SIGGRAPH on Facebook, Twitter, YouTube, or Instagram for more detailed information.

The ACM Special Interest Group on Computer Graphics and Interactive Techniques is an interdisciplinary community interested in research, technology, and applications in computer graphics and interactive techniques. Members include researchers, developers, and users from the technical, academic, business, and art communities. ACM SIGGRAPH enriches the computer graphics and interactive techniques community year-round through its conferences, global network of professional and student chapters, publications, and educational activities.

ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for lifelong learning, career development, and professional networking.
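
As a rough illustration of the user-guided colorization summary above, the sketch below shows, in PyTorch, the general shape of a network that maps a grayscale image plus sparse colour hints and a hint mask to dense chrominance. The TinyHintColorizer module, the Lab-channel conventions and the layer sizes are assumptions for illustration only, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class TinyHintColorizer(nn.Module):
    """Architectural stand-in (hypothetical): the input is the grayscale L channel,
    the sparse ab hint channels and a mask marking where the user placed hints;
    the output is a dense ab chrominance prediction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 1 (L) + 2 (ab hints) + 1 (mask)
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh(),   # ab scaled to [-1, 1]
        )

    def forward(self, L, ab_hints, mask):
        return self.net(torch.cat([L, ab_hints, mask], dim=1))

# Dummy 64x64 grayscale image with a single user hint at pixel (32, 32).
L = torch.rand(1, 1, 64, 64)
ab_hints = torch.zeros(1, 2, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
ab_hints[0, :, 32, 32] = torch.tensor([0.3, -0.2])
mask[0, 0, 32, 32] = 1.0
print(TinyHintColorizer()(L, ab_hints, mask).shape)     # -> torch.Size([1, 2, 64, 64])
```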


News Article | April 24, 2017
Site: phys.org

When a person has a bad hair day, that's unfortunate. When a virtual character has bad hair, an entire animated video or film can look unrealistic. A new method developed by Disney Research makes it possible to realistically simulate hair by observing real hair in motion.


Most of the devices and sensors connecting to the Internet of Things (IoT) rely on transmitting radio waves to communicate, which requires power, which means batteries if mains power isn't an option. A team at Disney Research is looking at harnessing a technique called ultra-wideband (UWB) ambient backscatter, which would allow devices to piggyback their communications on the multitude of FM and cellular signals already in the air.

"As we move towards connecting the next billion wireless devices to the internet, the use of batteries to power these devices will become unworkable," explains Markus Gross, vice president at Disney Research. "UWB ambient backscatter systems, which potentially could be deployed in any metropolitan area, hold great potential for solving this dilemma."

Ambient backscatter techniques basically utilize the ever-present cloud of TV and cellular signals already in the air to either power small transistors or to piggyback data transmissions. This significantly cuts the power requirements of such sensors, by potentially allowing them to communicate without transmitting their own radio waves. Such technology has been trialled several times in recent years, from developing advertising posters that could piggyback FM signals in the air and send ads to nearby devices, to powering small sensors without any external battery power.

The new innovation developed by the Disney Research Wireless Systems group allows a single device to backscatter a multitude of available ambient sources. Where prior devices were calibrated to feed or piggyback off a single specific FM or cellular signal, this new UWB approach leverages all broadcast signals in the 80 MHz to 900 MHz range, including digital TVs, FM radios and cellular networks, resulting in a greater signal-to-noise ratio and extending range.

The new system requires a single reader hub to receive and decode the sensor data carried on the backscatter signals, but realistically that would mean a variety of backscatter-based sensors could easily be deployed in an office or home environment that would communicate with one central powered source. The team was able to demonstrate communication from node to reader over 22 m (72 ft) when using ambient signals from broadcast towers, and over 50 m (164 ft) with data rates of up to 1 kbps by simultaneously harnessing 17 ambient signal sources.

Future prospects for this technology could allow inert, unpowered objects to be embedded with communicative sensors, such as a bus stop pole that holds live timetable information, a t-shirt that communicates heart rate information to its wearer, or even a smartphone that could transmit text messages after its battery has died.
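
The article attributes the extended range to a greater signal-to-noise ratio obtained by combining many ambient sources. Below is a back-of-envelope sketch of why that helps, under the illustrative assumptions that each source alone yields about 3 dB of SNR at the reader and that the backscattered contributions add up while the noise floor stays fixed; this is not Disney's actual receiver model.

```python
import numpy as np

def combined_snr_db(per_source_snr_db, n_sources):
    """Assumes the backscattered power from each ambient source adds up at the reader
    while the noise floor stays fixed, so SNR grows roughly linearly with the number
    of sources. Purely an illustrative model."""
    per_source_linear = 10 ** (per_source_snr_db / 10)          # dB -> linear power ratio
    return 10 * np.log10(n_sources * per_source_linear)

for k in (1, 5, 17):                                            # 17 sources, as in the demo
    print(f"{k:2d} ambient sources -> combined SNR {combined_snr_db(3.0, k):4.1f} dB")
```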


News Article | April 25, 2017
Site: www.sciencenewsdaily.org

In a fight against the type of "offensive or clearly misleading" results that make up about 0.25 percent of daily search traffic, Google has outlined new efforts to stymie the spread of fake news and other low-quality content like unexpected offensive materials, hoaxes and baseless conspiracy theories. Google on Tuesday announced changes to how it delivers and ranks internet searches, the latest effort by the tech giant to weed out "fake news" and offensive content.

Google has already done plenty to combat fake news, but as the world's preeminent search engine its work is never really done; Ben Gomes, VP of Engineering, explained the latest changes in a blog post. The company's quest isn't stopping with identifying bogus stories and an emphasis on fact-checking: it is rolling out changes to its search results, sprinkling new ingredients into its search engine in an effort to prevent bogus information and offensive suggestions from souring its results.


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Training Grant | Award Amount: 4.57M | Year: 2014

EPSRC Centre for Doctoral Training in Digital Entertainment, University of Bath and Bournemouth University.

The Centre for Digital Entertainment (CDE) supports innovative research projects in digital media for the games, animation, visual effects, simulation, cultural and healthcare industries. Because it is an Industrial Doctorate Centre, the CDE's students spend one year being trained at the university and then complete three years of research embedded in a company. To reflect the practical nature of their research, they submit for an Engineering Doctorate degree.

Digital media companies are major contributors to the UK economy. They are highly respected internationally and find their services in great demand. To meet this demand they need to employ people with the highest technical skills and the imagination to use those skills to a practical end. The sector has become so successful that the shortage of such people now constrains further expansion. Our Doctoral Training Centre is already addressing that and has become the national focus for this kind of training.

We do this by combining core taught material with an exciting and unusual range of activities designed to challenge and extend the students' knowledge beyond the usual boundaries. By working closely with companies we can offer practical challenges which really push the limits of what can be done with digital media and devices, and by the people using them. We work with many companies and 40-50 students at any one time. As a result we are able to support the group in ways which would not be possible for individual students. We can place several students in one company, we can send teams to compete in programming competitions, and we can send groups to international training sessions.

This proposal is to extend and expand this successful Centre. Major enhancements will include the use of internationally leading industry experts to teach Master Classes, closer cooperation between company and university researchers, business training led by businesses, and options for international placements in an international industry. We will replace the entire first-year teaching with a Digital Media programme specifically aimed at these students as a group. The graduates from this Centre will be the technical leaders of the next-generation revolution in this fast-moving, demanding and exciting industry.


Raptis M., Disney Research | Sigal L., Disney Research
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2013

In this paper, we develop a new model for recognizing human actions. An action is modeled as a very sparse sequence of temporally local discriminative key frames - collections of partial key-poses of the actor(s) depicting key states in the action sequence. We cast the learning of key frames in a max-margin discriminative framework, where we treat key frames as latent variables. This allows us to (jointly) learn a set of most discriminative key frames while also learning the local temporal context between them. Key frames are encoded using a spatially localizable poselet-like representation with HoG and BoW components learned from weak annotations; we rely on a structured SVM formulation to align our components and mine for hard negatives to boost localization performance. This results in a model that supports spatio-temporal localization and is insensitive to dropped frames or partial observations. We show classification performance that is competitive with the state of the art on the benchmark UT-Interaction dataset and illustrate that our model outperforms prior methods in an on-line streaming setting. © 2013 IEEE.
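
To make the latent key-frame idea concrete, here is a toy sketch of the alternation between inferring the most discriminative frame and taking a max-margin update. It uses synthetic frame descriptors and a plain linear hinge-loss model; it is not the authors' structured SVM with poselet-like HoG/BoW components and temporal context.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the abstract's setting: each "video" is a (T, D) array of frame
# descriptors, and each positive video contains one planted discriminative key frame.
def make_video(has_action, T=20, D=16):
    X = rng.normal(size=(T, D))
    key = None
    if has_action:
        key = int(rng.integers(T))
        X[key] += 3.0
    return X, key

pos = [make_video(True) for _ in range(30)]
neg = [make_video(False) for _ in range(30)]
lam, lr, n_iter = 0.01, 0.1, 200

# Initialize the linear model from the difference of per-class mean descriptors.
w = np.mean([X.mean(0) for X, _ in pos], 0) - np.mean([X.mean(0) for X, _ in neg], 0)
b = 0.0

def latent_score(X):
    """Latent inference: the key frame is the frame scoring highest under the model."""
    s = X @ w + b
    t = int(np.argmax(s))
    return s[t], X[t], t

for _ in range(n_iter):
    # Alternate latent inference with a subgradient step on the regularized hinge loss.
    n = len(pos) + len(neg)
    grad_w, grad_b = lam * w, 0.0
    for X, y in [(X, +1) for X, _ in pos] + [(X, -1) for X, _ in neg]:
        s, x, _ = latent_score(X)
        if y * s < 1.0:                 # margin violation: a "hard" example
            grad_w -= (y / n) * x
            grad_b -= y / n
    w -= lr * grad_w
    b -= lr * grad_b

print("mean positive score:", np.mean([latent_score(X)[0] for X, _ in pos]))
print("mean negative score:", np.mean([latent_score(X)[0] for X, _ in neg]))
print("planted key frame recovered:", np.mean([latent_score(X)[2] == k for X, k in pos]))
```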


Smolic A., Disney Research
Pattern Recognition | Year: 2011

This paper gives an end-to-end overview of 3D video and free viewpoint video, which can be regarded as advanced functionalities that expand the capabilities of 2D video. Free viewpoint video can be understood as the functionality to freely navigate within real-world visual scenes, as is known for instance from virtual worlds in computer graphics. 3D video shall be understood as the functionality that provides the user with a 3D depth impression of the observed scene, which is also known as stereo video. As functionalities, 3D video and free viewpoint video are not mutually exclusive but can very well be combined in a single system. Research in this area combines computer graphics, computer vision and visual communications. It spans the whole media processing chain from capture to display, and the design of systems has to take all parts into account, which is outlined in different sections of this paper, giving an end-to-end view and mapping of this broad area. The conclusion is that the necessary technology, including standard media formats for 3D video and free viewpoint video, is available or will be available in the future, and that there is clear demand from industry and users for such advanced types of visual media. As a consequence, we are witnessing these days how such technology enters our everyday life. © 2010 Elsevier Ltd. All rights reserved.


Zheng Y., Disney Research
IEEE Transactions on Robotics | Year: 2013

This paper presents an efficient algorithm to compute the minimum of the largest wrenches that a grasp can resist over all wrench directions with limited contact forces, which equals the minimum distance from the origin of the wrench space to the boundary of a grasp wrench set. This value has been used as an important grasp quality measure in optimal grasp planning for over two decades, but there has been no efficient way to compute it until now. The proposed algorithm starts with a polytope containing the origin in the grasp wrench set and iteratively grows it such that the minimum distance from the origin to the boundary of the polytope quickly converges to the aforementioned value. The superior efficiency and accuracy of this algorithm over the previous methods have been verified through theoretical and numerical comparisons. © 2004-2012 IEEE.
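
The measure itself is straightforward to state; the paper's contribution is computing it efficiently. As a naive baseline only (not the iterative polytope-growing algorithm described above), one can sample a finite set of contact wrenches, build their full convex hull, and take the minimum distance from the origin to the hull facets, which is exactly the expensive computation the algorithm is designed to avoid.

```python
import numpy as np
from scipy.spatial import ConvexHull

def grasp_quality(wrenches):
    """Naive baseline: minimum distance from the wrench-space origin to the boundary
    of the convex hull of a sampled set of contact wrenches (0 if the origin is not
    strictly inside the hull)."""
    hull = ConvexHull(wrenches)
    # Each facet satisfies normal . x + offset <= 0 for interior points, with unit
    # normals, so the origin's distance to the facet plane is -offset.
    offsets = hull.equations[:, -1]
    return float((-offsets).min()) if np.all(offsets < 0) else 0.0

# Hypothetical example: 200 unit wrenches sampled in 6-D wrench space enclose the origin.
rng = np.random.default_rng(1)
W = rng.normal(size=(200, 6))
W /= np.linalg.norm(W, axis=1, keepdims=True)
print(grasp_quality(W))
```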
