Wellington, New Zealand

Weta Digital is a digital visual effects company based in Wellington, New Zealand. It was founded by Peter Jackson, Richard Taylor, and Jamie Selkirk in 1993 to produce the digital special effects for Heavenly Creatures. In 2007 Weta Digital's Senior Visual Effects Supervisor, Joe Letteri, was also appointed as a Director of the company. Weta Digital has won several Academy Awards and BAFTAs. Weta Digital is part of a number of Peter Jackson co-owned companies in Wellington, which includes Weta Workshop, Weta Productions, Weta Collectibles, and Park Road Post Production. The company is named after the New Zealand weta, one of the world's largest insects. (Wikipedia)

Seol Y., Weta Digital | O'Sullivan C., Trinity College Dublin | Lee J., Seoul National University
Proceedings - SCA 2013: 12th ACM SIGGRAPH / Eurographics Symposium on Computer Animation | Year: 2013

We present a novel real-time motion puppetry system that drives the motion of non-human characters using human motion input. We aim to control a variety of creatures whose body structures and motion patterns can differ greatly from a human's. A combination of direct feature mapping and motion coupling enables the generation of natural creature motion, along with intuitive and expressive control for puppetry. First, in the design phase, direct feature mappings and motion classification can be efficiently and intuitively computed given crude motion mimicking as input. Later, during the puppetry phase, the user's body motions are used to control the target character in real-time, using the combination of feature mappings generated from the design phase. We demonstrate the effectiveness of our approach with several examples of natural puppetry, where a variety of non-human creatures are controlled in real-time using human motion input from a commodity motion sensing device.
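The abstract does not spell out how a "direct feature mapping" is computed from crude mimicking input. As a rough illustration only, one common way to build such a mapping from paired example frames is a linear least-squares fit from human pose features to creature pose features; the function names and dimensions below are hypothetical, not from the paper:

```python
import numpy as np

def fit_feature_mapping(human_feats, creature_feats):
    """Fit a linear map W (with bias) so that creature ~= [human, 1] @ W.

    human_feats:    (n_frames, d_h) human pose features from mimicking input
    creature_feats: (n_frames, d_c) corresponding creature pose features
    """
    n = human_feats.shape[0]
    X = np.hstack([human_feats, np.ones((n, 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X, creature_feats, rcond=None)
    return W

def apply_mapping(W, human_frame):
    """Map one human pose frame to creature pose features (real-time step)."""
    x = np.append(human_frame, 1.0)
    return x @ W

# Toy example: a creature whose 2 pose parameters are linear in 3 human features.
rng = np.random.default_rng(0)
H = rng.normal(size=(50, 3))
true_W = np.array([[1.0, 0.0], [0.0, 2.0], [0.5, -1.0], [0.1, 0.3]])
C = np.hstack([H, np.ones((50, 1))]) @ true_W
W = fit_feature_mapping(H, C)
out = apply_mapping(W, H[0])
```

Once fitted in a design phase, applying the map is a single matrix-vector product per frame, which is consistent with the real-time control the paper describes.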

Lewis J.P., Weta Digital | Anjyo K.-I., OLM Digital
IEEE Computer Graphics and Applications | Year: 2010

This paper introduces a simple direct manipulation algorithm for the popular blendshape facial animation approach. As is the case for body animation, direct manipulation of blendshape models is an inverse problem: when a single vertex is moved, the system must infer the movement of other points. The key to solving the inverse problem is the observation that the blendshape sliders are a semantic parameterization - the corresponding blendshape targets have clear, interpretable functions. Distance in "slider space" is easily computed and provides the necessary regularization for the inverse problem: The change in semantic position is minimized subject to interpolating the artist's direct manipulations. We give empirical and mathematical demonstrations that a single direct manipulation edit is often the equivalent of multiple slider edits, but the converse is also true, confirming the principle that both editing modes should be supported. © 2010 IEEE.
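The regularization described in the abstract, minimizing the change in slider space subject to interpolating the artist's edit, is a constrained least-squares problem. A minimal sketch of that idea (not the paper's exact formulation) uses the minimum-norm correction from a pseudoinverse; the matrix shapes and function name are assumptions for illustration:

```python
import numpy as np

def direct_manipulation(B, w0, vert_idx, target):
    """Move one vertex to `target`; infer the new slider weights.

    B:        (3*n_verts, n_sliders) blendshape delta basis
    w0:       (n_sliders,) current slider weights
    vert_idx: index of the manipulated vertex
    target:   (3,) desired offset of that vertex from the neutral shape

    Minimizes the change in slider ("semantic") space subject to the
    constraint that the picked vertex lands exactly on the target.
    """
    rows = slice(3 * vert_idx, 3 * vert_idx + 3)
    Bc = B[rows]                                   # (3, n_sliders) rows
    # Minimum-norm slider change: w = w0 + Bc^+ (target - Bc @ w0)
    dw = np.linalg.pinv(Bc) @ (target - Bc @ w0)
    return w0 + dw

# Tiny example: 3 vertices (9 coordinates), 4 sliders, neutral start pose.
rng = np.random.default_rng(1)
B = rng.normal(size=(9, 4))
w0 = np.zeros(4)
target = np.array([0.1, 0.2, 0.3])
w = direct_manipulation(B, w0, 1, target)          # pin vertex 1 to target
```

Because the correction `dw` is the smallest slider change satisfying the vertex constraint, untouched sliders move as little as possible, matching the "change in semantic position is minimized" criterion.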

Baker S., Microsoft | Scharstein D., Middlebury College | Lewis J.P., Weta Digital | Roth S., TU Darmstadt | And 2 more authors.
International Journal of Computer Vision | Year: 2011

The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by Barron et al., we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web at http://vision.middlebury.edu/flow/. Subsequently a number of researchers have uploaded their results to our website and published papers using the data. A significant improvement in performance has already been achieved. In this paper we analyze the results obtained to date and draw a large number of conclusions from them. © The Author(s) 2010. This article is published with open access at Springerlink.com.
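The two headline error measures mentioned here are straightforward to compute: endpoint error is the Euclidean distance between estimated and ground-truth flow vectors, and the Barron-style angular error is the angle between the corresponding homogeneous vectors (u, v, 1). A small sketch (array layout is an assumption, not the benchmark's file format):

```python
import numpy as np

def flow_errors(flow, gt):
    """Mean endpoint error and mean angular error for optical flow.

    flow, gt: (H, W, 2) arrays of (u, v) flow vectors.
    Returns (mean EPE, mean AE in degrees).
    """
    du = flow[..., 0] - gt[..., 0]
    dv = flow[..., 1] - gt[..., 1]
    epe = np.sqrt(du**2 + dv**2)

    # Angular error: angle between homogeneous vectors (u, v, 1).
    num = flow[..., 0] * gt[..., 0] + flow[..., 1] * gt[..., 1] + 1.0
    den = np.sqrt(flow[..., 0]**2 + flow[..., 1]**2 + 1.0) * \
          np.sqrt(gt[..., 0]**2 + gt[..., 1]**2 + 1.0)
    ae = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return epe.mean(), ae.mean()

# Example: a constant (1, 0) flow against zero ground truth gives
# EPE = 1 and AE = 45 degrees (angle between (1,0,1) and (0,0,1)).
f = np.zeros((1, 1, 2))
f[..., 0] = 1.0
epe, ae = flow_errors(f, np.zeros((1, 1, 2)))
```

The added "+1" in the homogeneous form is what keeps the angular error bounded for zero-flow pixels, one reason the paper supplements it with the absolute endpoint error.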

Nielsen M.B., University of Aarhus | Soderstrom A., Weta Digital | Bridson R., University of British Columbia
ACM Transactions on Graphics | Year: 2013

Computer animated ocean waves for feature films are typically carefully choreographed to match the vision of the director and to support the telling of the story. The rough shape of these waves is established in the previsualization (previs) stage, where artists use a variety of modeling tools with fast feedback to obtain the desired look. This poses a challenge to the effects artists who must subsequently match the locked-down look of the previs waves with high-quality simulated or synthesized waves, adding the detail necessary for the final shot. We propose a set of automated techniques for synthesizing Fourier-based ocean waves that match a previs input, allowing artists to quickly enhance the input wave animation with additional higher-frequency detail that moves consistently with the coarse waves, tweak the wave shapes to flatten troughs and sharpen peaks if desired (as is characteristic of deep water waves), and compute a physically reasonable velocity field of the water analytically. These properties are demonstrated with several examples, including a previs scene from a visual effects production environment. © 2013 ACM. Source
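The reason higher-frequency detail can "move consistently with the coarse waves" in Fourier-based ocean synthesis is that every component of wavenumber k is advected by the deep-water dispersion relation omega = sqrt(g*k), so its phase speed is fixed by physics rather than hand animation. A minimal 1D sketch of that superposition (the function and parameters are illustrative, not the paper's algorithm):

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def ocean_height(x, t, amps, wavenumbers, phases):
    """1D superposition of deep-water Fourier wave components.

    Each component of wavenumber k travels at phase speed omega/k with
    the deep-water dispersion relation omega = sqrt(G*k), so detail
    layers added at higher k automatically stay in step with the
    coarse waves.
    """
    h = np.zeros_like(x, dtype=float)
    for a, k, p in zip(amps, wavenumbers, phases):
        omega = np.sqrt(G * k)
        h += a * np.cos(k * x - omega * t + p)
    return h

# The crest of a single k=2 component travels at phase speed omega/k:
c = np.sqrt(G * 2.0) / 2.0
h0 = ocean_height(np.array([0.0]), 0.0, [1.0], [2.0], [0.0])
ht = ocean_height(np.array([c * 0.5]), 0.5, [1.0], [2.0], [0.0])
```

In production systems the sum is evaluated over a 2D spectrum with an FFT rather than a loop, and trough-flattening/peak-sharpening is applied as a separate shaping step on top of this linear superposition.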

News Article | March 4, 2015
Site: venturebeat.com

Epic Games announced a partnership with Academy Award-winning visual effects studio Weta Digital during a 2015 Game Developers Conference (GDC) press event this morning. The two companies have come together for a new virtual-reality experience called Thief in the Shadows, which uses CG assets from the film The Hobbit: The Desolation of Smaug. Epic announced that Thief in the Shadows is available on the GDC expo floor today, running on the Oculus Rift Crescent Bay prototype hardware. This had attendees swarming the company's booth within minutes. I managed to squeeze my way in to try the demo for myself.

The VR experience is set in the Lord of the Rings universe, with viewers taking on the role of a hobbit thief. It began in a massive treasure chamber, one so large that I had to crane my neck fully to see it all. Dimly lit piles of coins shimmered under my feet. I could see even dimmer caves in the distance, set off by massive statues on either side. I had to physically turn around to take it all in.

Some of the coin piles began to move, with gold sliding down toward my virtual feet. Smaug, an enormous dragon, pushed his face out of a large pile and began to swim around the coins, Scrooge McDuck style. He began to speak in a thunderous voice, claiming that he could smell a thief among his treasures. Smaug circled me, forcing me to turn around in circles to keep track of his motion. His movements and voice became increasingly aggressive, so much so that I caught myself stepping back as he moved nearby.

Smaug moved in after taunting me for a bit, placing one of his massive eyes directly above my body as he scolded me for breaking into his chamber. His face was massive, so big that I had to turn my head all the way up and swing it back and forth just to take it in. The level of detail in his close-up was astounding; every scale and tooth was photorealistic. I could almost smell his breath.
The demo runs on the new Nvidia Titan X graphics processor, also announced this morning during Epic Games' event. It holds a smooth 90 frames per second, and the fire the dragon breathed on me looked like a special effect straight out of a Hollywood film. The quality of virtual reality experiences is increasing at a rapid rate. This segment of the market is still in its infancy, and we're just now at the point where creators are joining together to accelerate its development. Nvidia's GPU, Epic's Unreal Engine 4, and Oculus' latest prototype came together to create this virtual world, making for one of the most immersive and impressive VR showcases yet.
