Aksoy E.E., University of Göttingen | Abramov A., University of Göttingen | Wörgötter F., University of Göttingen | Dellen B., Max Planck Institute for Dynamics and Self-Organization | Dellen B., Institut de Robòtica i Informàtica Industrial, CSIC-UPC
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2010

In this work we introduce a novel approach for detecting spatiotemporal object-action relations, leading to both action recognition and object categorization. Semantic scene graphs are extracted from image sequences and used to find the characteristic main graphs of the action sequence via an exact graph-matching technique, thus providing an event table of the action scene, which allows extracting object-action relations. The method is applied to several artificial and real action scenes containing limited context. The central novelty of this approach is that it is model-free and needs no a priori representations for either objects or actions. Essentially, actions are recognized without requiring prior object knowledge, and objects are categorized solely on the basis of the role they exhibit within an action sequence. Thus, this approach is grounded in the affordance principle, which has recently attracted much attention in robotics and provides a way forward for trial-and-error learning of object-action relations through repeated experimentation. It may therefore be useful for recognition and categorization tasks, for example in imitation learning in developmental and cognitive robotics. ©2010 IEEE.
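The main-graph/event-table idea can be illustrated with a toy sketch. Here scene graphs are reduced to sets of "touching" relations between segment ids, and a simple set comparison stands in for the paper's exact graph-matching step; the function name and encoding are hypothetical, not the authors' implementation:

```python
def event_table(frames):
    """Build an event table from per-frame scene graphs.

    frames: list of sets; each set holds undirected "touching" edges,
    given as frozensets of two segment ids (hypothetical encoding).
    """
    # Keep only frames where the graph topology changes ("main graphs").
    keyframes = [frames[0]]
    for g in frames[1:]:
        if g != keyframes[-1]:
            keyframes.append(g)
    # Event table: per segment pair, touching (1) or not (0) in each keyframe.
    pairs = sorted({tuple(sorted(p)) for g in keyframes for p in g})
    return {p: [int(frozenset(p) in g) for g in keyframes] for p in pairs}
```

Two actions that produce the same sequence of relation changes yield the same event table, which is what makes recognition possible without object models.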

Wallenberg M., Linköping University | Felsberg M., Linköping University | Forssén P.-E., Linköping University | Dellen B., Institut de Robòtica i Informàtica Industrial, CSIC-UPC
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

Segmentation is an important preprocessing step in many applications. Compared to colour-only segmentation, fusing colour and depth greatly improves the segmentation result. Such a fusion is easy to achieve by stacking measurements in different value dimensions, but there are better ways. In this paper we perform fusion using the channel representation, and demonstrate how a state-of-the-art segmentation algorithm can be modified to use channel values as inputs. We evaluate segmentation results on data collected using the Microsoft Kinect peripheral for Xbox 360, using the superparamagnetic clustering algorithm. Our experiments show that depth gradients are more useful than raw depth values for segmentation, and that channel coding both colour and depth gradients makes tuned parameter settings generalise better to novel images. © 2011 Springer-Verlag.

Garrell A., Institut de Robòtica i Informàtica Industrial, CSIC-UPC | Sanfeliu A., Institut de Robòtica i Informàtica Industrial, CSIC-UPC
IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings | Year: 2010

This paper describes the validation process of a simulation model that has been used to explore the new possibilities of interaction that arise when humans are guided by teams of robots working cooperatively in urban areas. The set of experiments, which were recorded as video sequences, shows a group of people being guided by a team of three people (who play the role of the guide robots). The model used in the simulation process is the Discrete Time Motion (DTM) model described in [7], in which the environment is modelled using a set of potential fields and people's motion is represented through tension functions. The video sequences were recorded in an urban space of 10,000 m² called the Barcelona Robot Lab, where people move through the urban space following diverse trajectories. The motion (pose and velocity) of people and robots extracted from the video sequences was compared against the predictions of the DTM model. Finally, we checked the proper functioning of the model by studying the position errors between the recorded and simulated sequences. ©2010 IEEE.
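The final validation step, comparing recorded against simulated tracks, amounts to a pointwise position-error measure. A minimal sketch, assuming paired (x, y) samples at matching time steps (the function name is ours, and the paper's actual error statistic may differ):

```python
import math

def mean_position_error(recorded, simulated):
    """Mean Euclidean distance between paired (x, y) poses."""
    errs = [math.hypot(rx - sx, ry - sy)
            for (rx, ry), (sx, sy) in zip(recorded, simulated)]
    return sum(errs) / len(errs)
```

Computing this per pedestrian and per sequence gives the error curves used to judge whether the DTM model reproduces the observed motion.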

Garrell A., Institut de Robòtica i Informàtica Industrial, CSIC-UPC | Sanfeliu A., Institut de Robòtica i Informàtica Industrial, CSIC-UPC
IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings | Year: 2010

This article presents a novel approach for locally optimizing the work of cooperative robots while minimizing the displacement of humans in a people-guiding mission. Unlike other methods, we consider situations where individuals can move freely and may escape from the formation, so that they must be regrouped by multiple mobile robots working cooperatively. The problem is addressed by introducing a "Discrete Time Motion" (DTM) model and a new cost function that minimizes the work required by the robots to lead and regroup people. The guiding mission is carried out in urban areas containing multiple obstacles and building constraints. Furthermore, an analysis of the forces acting among robots and humans is presented through simulations of different robot and human configurations and behaviors. ©2010 IEEE.
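The cost function balances two quantities: the work the robots expend and the displacement imposed on the guided humans. A heavily simplified sketch of that trade-off, with hypothetical names and weights (the paper's actual cost terms are defined over the DTM model, not shown here):

```python
import math

def guiding_cost(robot_steps, human_steps, w_robot=1.0, w_human=1.0):
    """Weighted sum of robot work and human displacement.

    robot_steps / human_steps: lists of (dx, dy) displacements per
    time step. Weights are illustrative assumptions.
    """
    robot_work = sum(math.hypot(dx, dy) for dx, dy in robot_steps)
    human_disp = sum(math.hypot(dx, dy) for dx, dy in human_steps)
    return w_robot * robot_work + w_human * human_disp
```

A local optimizer would pick the robot actions at each time step that minimize this cost subject to the obstacle and building constraints.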

Agostini A., Institut de Robòtica i Informàtica Industrial, CSIC-UPC | Torras C., Institut de Robòtica i Informàtica Industrial, CSIC-UPC | Wörgötter F., Bernstein Center for Computational Neuroscience
IJCAI International Joint Conference on Artificial Intelligence | Year: 2011

Human environments are challenging for robots, which need to be trainable by lay people and to learn new behaviours rapidly without greatly disrupting the ongoing activity. A system that integrates AI techniques for planning and learning is proposed here to satisfy these strong demands. The approach rapidly learns planning operators from a few action experiences using a competitive strategy in which many alternative cause-effect explanations are evaluated in parallel, and the most successful ones are used to generate the operators. The success of a cause-effect explanation is evaluated by a probabilistic estimate that compensates for the lack of experience, producing more confident estimations and speeding up learning relative to other known estimates. The system operates without task interruption by integrating into the planning-learning loop a human teacher who supports the planner in making decisions. All the mechanisms are integrated and synchronized in the robot using a general decision-making framework. The feasibility and scalability of the architecture are evaluated on two different robot platforms: a Stäubli arm and the humanoid ARMAR III.
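An estimate that "compensates for the lack of experience" can be illustrated with a standard m-estimate, which pulls toward a prior when trials are few and toward the empirical frequency as evidence accumulates. This is an illustrative form only, not the estimate derived in the paper:

```python
def success_estimate(successes, trials, prior=0.5, m=2.0):
    """m-estimate of a cause-effect explanation's success probability.

    With few trials the estimate stays near `prior`, avoiding
    overconfident 0 or 1 values from sparse data; with many trials
    it converges to successes / trials.
    """
    return (successes + m * prior) / (trials + m)
```

Ranking competing cause-effect explanations by such a score lets the learner prefer well-supported explanations without waiting for large sample counts.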
