
Rufli M., ETH Zurich | Alonso-Mora J., ETH Zurich and Disney Research Zurich | Siegwart R., ETH Zurich
IEEE Transactions on Robotics

This paper addresses decentralized motion planning among a homogeneous set of feedback-controlled, decision-making agents. It introduces the continuous control obstacle (C^n-CO), which describes the set of C^n-continuous control sequences (and thus trajectories) that lead to a collision between interacting agents. By selecting a feasible trajectory from the complement of C^n-CO, a collision-free motion is obtained. The approach extends the reciprocal velocity obstacle (RVO, ORCA) collision-avoidance methods so that trajectory segments verify C^n continuity rather than piecewise linearity. This allows the large class of robots capable of tracking C^n-continuous trajectories to employ it for partial motion planning directly, rather than as a mere tool for collision checking. This paper further establishes that both the original velocity obstacle method and several of its recently developed reciprocal extensions (which treat specific robot physiologies only) correspond to particular instances of C^n-CO. In addition to the described extension in trajectory continuity, C^n-CO thus represents a unification of existing RVO theory. Finally, the presented method is validated in simulation, and a parameter study reveals under which environmental and control conditions C^n-CO with n > 0 admits significantly improved navigation performance compared with inflated approaches based on ORCA. © 2004-2012 IEEE.
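
The complement-selection step above is easiest to see in the piecewise-linear special case (n = 0), where C^n-CO reduces to the classic velocity obstacle. Below is a minimal sketch of that truncated velocity-obstacle membership test for two disc-shaped agents; it illustrates the underlying concept only, not the paper's C^n-CO construction, and all names and parameter values are assumptions.

    import numpy as np

    def in_velocity_obstacle(p_a, p_b, r_a, r_b, v_rel, tau=5.0, steps=100):
        """Check whether the relative velocity v_rel of agent A w.r.t. agent B
        leads to a collision within time horizon tau (truncated VO test).

        A collision occurs if the relative position p = p_a - p_b + t * v_rel
        enters the disc of radius r_a + r_b at any t in (0, tau].
        """
        p = np.asarray(p_a, float) - np.asarray(p_b, float)
        r = r_a + r_b
        for t in np.linspace(0.0, tau, steps)[1:]:
            if np.linalg.norm(p + t * np.asarray(v_rel, float)) < r:
                return True
        return False

    # A velocity in the obstacle's complement yields collision-free motion:
    print(in_velocity_obstacle([0, 0], [4, 0], 0.5, 0.5, v_rel=[1.0, 0.0]))  # True: head-on
    print(in_velocity_obstacle([0, 0], [4, 0], 0.5, 0.5, v_rel=[1.0, 0.5]))  # False: passes clear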

Grundhofer A., Disney Research Zurich
IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

We propose a novel approach to generate a high-quality photometrically compensated projection which, to our knowledge, is the first that does not require radiometric pre-calibration of the cameras or projectors. This improves compensation quality on devices that cannot be easily linearized, such as single-chip DLP projectors with complex color processing. In addition, the simple workflow significantly simplifies the generation of compensation images. Our approach consists of a sparse sampling of the projector's color gamut and a scattered data interpolation to generate the per-pixel mapping from projector to camera colors in real time. To avoid out-of-gamut artifacts, the input image is automatically scaled locally in an optional off-line optimization step, maximizing the achievable luminance and contrast while preserving smooth input gradients without significant clipping errors. © 2013 IEEE.
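
As a rough illustration of the sparse-sampling-plus-interpolation pipeline, the sketch below builds a color mapping with SciPy's scattered-data interpolator. The coarse 6x6x6 gamut grid and the gamma curve standing in for measured camera responses are assumptions for the example, not the paper's data; the mapping here is keyed on camera colors (camera to projector) so that it directly produces compensation inputs without linearizing either device.

    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    # Sparse sampling of the projector's gamut: project a coarse RGB grid and
    # capture the resulting camera colors (here simulated by a hypothetical
    # nonlinear device response standing in for real captures).
    grid = np.linspace(0.0, 1.0, 6)
    proj_samples = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
    cam_samples = proj_samples ** 1.8  # placeholder for measured camera colors

    # Scattered-data interpolation from camera colors back to projector inputs.
    cam_to_proj = LinearNDInterpolator(cam_samples, proj_samples)

    def compensate(target_cam_colors):
        """Per-pixel lookup: which projector input produces the desired
        camera color, interpolated over the sampled pairs."""
        return cam_to_proj(target_cam_colors)

    target = np.array([[0.25, 0.50, 0.75]])  # color we want the camera to see
    print(compensate(target))                # projector input that produces it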

Taneja A., Disney Research Zurich | Ballan L., ETH Zurich | Pollefeys M., ETH Zurich
IEEE Transactions on Pattern Analysis and Machine Intelligence

We propose a method to detect changes in the geometry of a city using panoramic images captured by a car driving around the city. The proposed method can be used to significantly optimize the process of updating the 3D model of an urban environment that is changing over time, by restricting this process to only those areas where changes are detected. With this application in mind, we designed our algorithm to specifically detect only structural changes in the environment, ignoring any changes in its appearance, and ignoring all changes that are not relevant for update purposes, such as cars and people. The approach also accounts for the challenges involved in a large-scale application of change detection, such as inaccuracies in the input geometry, errors in the geo-location data of the images, as well as the limited amount of information due to sparse imagery. We evaluated our approach on a small-scale setup using high-resolution, densely captured images, and on a large-scale setup covering an entire city using the more realistic scenario of low-resolution, sparsely captured images. A quantitative evaluation was also conducted for the large-scale setup, which consists of 14,000 images. © 2015 IEEE.
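
A toy sketch of the consistency idea behind such a system: render the current 3D model into the camera's (approximately known) pose, compare the rendering against the captured image, and flag disagreements outside a mask of irrelevant movers as candidate structural changes. The actual method is multi-view and geometric rather than this per-pixel comparison; all arrays and the threshold below are illustrative assumptions.

    import numpy as np

    def detect_geometry_changes(observed, rendered, irrelevant_mask, thresh=0.3):
        """Flag pixels where the captured image disagrees with a rendering of
        the current 3D model, excluding regions covered by irrelevant movers
        (cars, people). Flagged regions mark candidate areas to re-model.

        observed, rendered : float arrays in [0, 1], shape (H, W)
        irrelevant_mask    : bool array, True where changes should be ignored
        """
        residual = np.abs(observed - rendered)
        return (residual > thresh) & ~irrelevant_mask

    H, W = 4, 4
    observed = np.zeros((H, W)); observed[1:3, 1:3] = 1.0  # a new structure
    rendered = np.zeros((H, W))                            # model predicts nothing
    mask = np.zeros((H, W), bool); mask[1, 1] = True       # e.g. a parked car
    print(detect_geometry_changes(observed, rendered, mask).astype(int))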

Crawled News Article
Site: http://www.rdmag.com/rss-feeds/all/rss.xml/all

Street art has long been ingrained in modern culture. While graffiti artists like Banksy are popular, with their work sometimes fetching thousands of dollars at an auction, spray painting is still illegal. It’s a labor carried out under the cover of night with stencils on hand.

Now, robots are getting into the spray paint game. Researchers from ETH Zurich, Disney Research Zurich, Dartmouth College, and Columbia University have developed a “smart” spray can capable of painting murals all on its own. All the user has to do is wave the spray can over a canvas.

“Our system aids the user in tasks that are difficult for humans, especially when lacking artistic training and experience,” the researchers wrote in a paper. “It automatically tracks the position of the spray can relative to the mural and makes decisions regarding the amount of paint to spray, based on an online simulation of the spraying process.”

Due to the difficulty of obtaining permission to spray paint a building, the researchers couldn’t test their method out in the field, nor could they test it under the constraint of unpredictable weather conditions. Instead, they painted on paper sheets.

“Typically, computationally-assisted painting methods are restricted to the computer,” said study co-author Wojciech Jarosz, of Dartmouth College, in a statement. “In this research, we show that by combining computer graphics and computer vision techniques, we can bring such assistance technology to the physical world even for this very traditional painting medium, creating a somewhat unconventional form of digital fabrication.”

The “smart” spray can system consists of two webcams and QR-coded cubes for tracking, and an actuation device, which is attached to the spray can via a 3D-printed mount. The paint commands, transmitted via radio, are sent to the servo-motor that operates the nozzle. An algorithm determines the correct amount of paint to use.

“The system performs at haptic rates, which allows the user—informed by a visualization of the image residual—to guide the system interactively to recover low frequency features,” the researchers wrote.

A video of the “smart” spray can in action can be watched here.
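
A back-of-the-envelope sketch of the control loop the article describes: simulate paint deposition online, and set the servo command from the residual between the target mural and the simulated canvas at the tracked nozzle position. The Gaussian spray model, grid sizes, and all names are assumptions for illustration, not the authors' implementation.

    import numpy as np

    H, W, SIGMA = 64, 64, 3.0
    target = np.zeros((H, W)); target[16:48, 16:48] = 1.0  # desired mural (hypothetical)
    canvas = np.zeros((H, W))                              # simulated paint so far

    def spray_footprint(cx, cy):
        """Assumed Gaussian deposition profile of one burst at the nozzle position."""
        y, x = np.mgrid[0:H, 0:W]
        return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * SIGMA ** 2))

    def actuation_command(cx, cy):
        """Decide how much paint to release: spray only where the residual
        (target minus simulated canvas) is still positive under the nozzle."""
        residual = target - canvas
        local = residual[int(cy), int(cx)]
        return max(0.0, min(1.0, local))  # clamp to servo range [0, 1]

    # One control step as the user waves the can over position (30, 30):
    amount = actuation_command(30, 30)
    canvas += amount * spray_footprint(30, 30)
    print(f"servo command: {amount:.2f}")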

Crawled News Article
Site: http://news.yahoo.com/science/

If Spider-Man had a robot sidekick, this would be it. A new four-wheeled bot named VertiGo looks like a remote-controlled car that a kid might build. But the little machine can drive vertically, straight up walls.

Researchers at Disney Research Zurich worked together with mechanical engineering students at the Swiss Federal Institute of Technology in Zurich (ETH) to design and build the gravity-defying bot. The robot's front wheels are steerable, like the front wheels of an automobile, which lets the person who controls the bot change its direction as it zooms around. But it is VertiGo's two propellers, which can be controlled independently of each other, that enable the bot to scale buildings without falling to the ground.

To climb a wall, the bot's rear propeller must be tilted outward behind it in such a way that the thrust (propulsive force) from the propeller pushes the bot toward the wall. At the same time, the bot's front propeller applies thrust downward, pushing the bot upward and enabling it to go from a horizontal position to a vertical position, according to the researchers who built VertiGo. (You can see this process in action at the 25-second mark in the video above.)

It's not clear why Disney decided to build a wall-climbing robot, but in a statement outlining the bot's functionality, the researchers noted that VertiGo's ability to drive on both floors and walls "extends the ability of robots to travel through urban and indoor environments." The researchers also said the robot can keep its footing when traversing rough surfaces, like brick walls.

The body, or chassis, of the bot is made of carbon fiber, while its more complex parts, like the wheel-suspension system and the wheels, are made of 3D-printed parts and carbon rods. The chassis also houses the robot's electronic components, which include the computer that allows the person operating VertiGo to control the bot in the same way as a remote-controlled car.

The computer receives data from onboard sensors (like accelerometers and gyroscopes), as well as infrared distance sensors that estimate the bot's orientation in space. The computer then uses this data, along with input from the person controlling the bot, to direct the motors that power the bot's propellers and wheels. In other words, the person controlling the bot doesn't have to figure out exactly how to tilt the propellers to get the bot to stay put on the wall; the robot can figure that out for itself.

Although the video only shows the robot zooming over the ground and climbing a flat wall, the researchers said the little bot might also be able to drive on the ceiling. So VertiGo might be able to keep up with Spider-Man, should the two ever get together.
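
The wall-climbing trick reduces to a simple static balance: the propellers must press the robot against the wall hard enough that wheel friction can carry its weight. A back-of-the-envelope sketch under assumed mass, friction, and tilt values (none of which are VertiGo's actual specifications):

    import math

    M = 1.5   # robot mass in kg (assumed)
    G = 9.81  # gravity, m/s^2
    MU = 0.6  # wheel-wall friction coefficient (assumed)

    def min_thrust_into_wall(tilt_deg):
        """Minimum total propeller thrust (N) so that the component normal
        to the wall provides enough friction to hold the robot's weight:
            mu * T * cos(tilt) >= m * g
        where tilt is the thrust angle away from the wall normal."""
        normal_needed = M * G / MU
        return normal_needed / math.cos(math.radians(tilt_deg))

    for tilt in (0, 20, 40):
        print(f"tilt {tilt:2d} deg -> thrust >= {min_thrust_into_wall(tilt):.1f} N")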
