Wallenberg M.,Linköping University | Felsberg M.,Linköping University | Forssen P.-E.,Linköping University | Dellen B.,Institute Of Robotica I Informatica Industrial Csic Upc
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

Segmentation is an important preprocessing step in many applications. Compared to colour segmentation, fusion of colour and depth greatly improves the segmentation result. Such a fusion is easy to do by stacking measurements in different value dimensions, but there are better ways. In this paper we perform fusion using the channel representation, and demonstrate how a state-of-the-art segmentation algorithm can be modified to use channel values as inputs. We evaluate segmentation results on data collected using the Microsoft Kinect peripheral for Xbox 360, using the superparamagnetic clustering algorithm. Our experiments show that depth gradients are more useful than depth values for segmentation, and that channel coding both colour and depth gradients makes tuned parameter settings generalise better to novel images. © 2011 Springer-Verlag.
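As a rough illustration of what feeding channel values into a segmentation algorithm involves, here is a minimal sketch of channel encoding a scalar measurement; the number of channels, the [0, 1] range and the cos² basis are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def channel_encode(x, centers, width):
    # Soft-assign scalar x to overlapping cos^2 channels (one common
    # basis choice in the channel-representation literature).
    d = np.abs(x - centers)
    return np.where(d < width, np.cos(np.pi * d / (2 * width)) ** 2, 0.0)

# 8 channels spanning [0, 1]; colour and depth-gradient measurements
# would each be encoded this way and the channel vectors concatenated
# to fuse the two cues.
centers = np.linspace(0.0, 1.0, 8)
width = centers[1] - centers[0]
enc = channel_encode(0.37, centers, width)
```

Only the channels nearest the measurement respond, which is what lets the fused representation stay robust to noise in either cue.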


Aksoy E.E.,University of Gottingen | Abramov A.,University of Gottingen | Worgotter F.,University of Gottingen | Dellen B.,Max Planck Institute for Dynamics and Self-Organization | Dellen B.,Institute Of Robotica I Informatica Industrial Csic Upc
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2010

In this work we introduce a novel approach for detecting spatiotemporal object-action relations, leading to both action recognition and object categorization. Semantic scene graphs are extracted from image sequences and used to find the characteristic main graphs of the action sequence via an exact graph-matching technique, thus providing an event table of the action scene, which allows extracting object-action relations. The method is applied to several artificial and real action scenes containing limited context. The central novelty of this approach is that it is model free and needs no a priori representation of either objects or actions. Essentially, actions are recognized without requiring prior object knowledge, and objects are categorized solely based on their exhibited role within an action sequence. Thus, this approach is grounded in the affordance principle, which has recently attracted much attention in robotics, and provides a way forward for trial-and-error learning of object-action relations through repeated experimentation. It may therefore be useful for recognition and categorization tasks, for example in imitation learning in developmental and cognitive robotics. ©2010 IEEE.
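The core idea of extracting an event table from scene graphs can be sketched in a few lines: an "event" occurs when the spatial relations between segments change from one frame to the next. The relation encoding below (frozenset pairs of touching labels) and the example frames are hypothetical; the paper's exact graph-matching machinery is not reproduced:

```python
def graph_events(relations_prev, relations_curr):
    # Relations that appear or vanish between consecutive frames are the
    # entries of the event table characterizing the action.
    appeared = relations_curr - relations_prev
    vanished = relations_prev - relations_curr
    return appeared, vanished

# Hypothetical frames of a "hand grasps cup" transition:
frame1 = {frozenset({"hand", "air"}), frozenset({"cup", "table"})}
frame2 = {frozenset({"hand", "cup"}), frozenset({"cup", "table"})}
appeared, vanished = graph_events(frame1, frame2)
```

Because only relation changes matter, the representation needs no object model: any segment that repeatedly takes part in the same relation changes ends up in the same role category.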


Agostini A.,Institute Of Robotica I Informatica Industrial Csic Upc | Torras C.,Institute Of Robotica I Informatica Industrial Csic Upc | Worgotter F.,Bernstein Center for Computational Neuroscience
IJCAI International Joint Conference on Artificial Intelligence | Year: 2011

Human environments are challenging for robots, which need to be trainable by lay people and to learn new behaviours rapidly without greatly disrupting the ongoing activity. A system that integrates AI techniques for planning and learning is proposed here to satisfy these strong demands. The approach rapidly learns planning operators from few action experiences using a competitive strategy in which many alternative cause-effect explanations are evaluated in parallel, and the most successful ones are used to generate the operators. The success of a cause-effect explanation is evaluated by a probabilistic estimate that compensates for the lack of experience, producing more confident estimations and speeding up learning relative to other known estimates. The system operates without task interruption by integrating into the planning-learning loop a human teacher who supports the planner in making decisions. All the mechanisms are integrated and synchronized in the robot using a general decision-making framework. The feasibility and scalability of the architecture are evaluated on two different robot platforms: a Stäubli arm and the humanoid ARMAR III.
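The general idea behind an estimate that "compensates for the lack of experience" can be sketched with a smoothed success probability; the prior parameters below are illustrative and the paper's exact estimate is not reproduced:

```python
def explanation_success(successes, trials, prior_s=1.0, prior_t=2.0):
    # Laplace-style smoothed success probability: with few experiences the
    # estimate stays close to the prior mean (0.5 here) instead of jumping
    # to 0 or 1, so early cause-effect explanations are neither over- nor
    # under-rated while they compete.
    return (successes + prior_s) / (trials + prior_t)
```

With no experience the estimate is the prior mean; as trials accumulate it converges to the empirical success rate, which is what lets the competition among explanations converge quickly without being misled by single outcomes.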


Borras J.,Institute Of Robotica I Informatica Industrial Csic Upc | Thomas F.,Institute Of Robotica I Informatica Industrial Csic Upc | Torras C.,Institute Of Robotica I Informatica Industrial Csic Upc
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2010

A 5-SPU robot with collinear universal joints is well suited to handling an axisymmetric tool, since it has 5 controllable DoFs and the remaining one is a free rotation around the tool. The kinematics of such a robot that also has coplanar spherical joints has previously been studied as a rigid subassembly of a Stewart-Gough platform, where it is denoted a line-plane component. It was shown that this component has 8 assembly modes corresponding to the roots of a bi-quartic polynomial. Here we identify a whole family of these 5-SPU robots having only 4 assembly modes, which are obtained by solving two quadratic equations. This family is defined by a simple proportionality constraint relating the coordinates of the base and platform attachments. A geometric interpretation of the architectural singularities of this type of robot in terms of conics is provided, which facilitates their avoidance at the design stage. Parallel singularities also obey a neat geometric structure, which permits deriving a cell decomposition of configuration space. Two practical features of these quadratically-solvable robots are the large maneuverability within each connected component and the fact that, for a fixed orientation of the tool, the singularity locus reduces to a plane. ©2010 IEEE.


Garrell A.,Institute Of Robotica I Informatica Industrial Csic Upc | Sanfeliu A.,Institute Of Robotica I Informatica Industrial Csic Upc
IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings | Year: 2010

This paper describes the validation process of a simulation model that has been used to explore the new possibilities of interaction when humans are guided by teams of robots working cooperatively in urban areas. The set of experiments, recorded as video sequences, shows a group of people being guided by a team of three people (who play the role of the guide robots). The model used in the simulation process is called the Discrete Time Motion (DTM) model, described in [7], where the environment is modeled using a set of potential fields and people's motion is represented through tension functions. The video sequences were recorded in an urban space of 10,000 m² called the Barcelona Robot Lab, where people move along diverse trajectories. The motion (pose and velocity) of people and robots extracted from the video sequences was compared against the predictions of the DTM model. Finally, we checked the proper functioning of the model by studying the position error between the recorded and simulated sequences. ©2010 IEEE.
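A minimal sketch of one discrete-time update in the spirit of a potential-field motion model follows; the gains, the repulsion profile and the step size are illustrative assumptions, and the DTM tension functions themselves are not reproduced:

```python
import numpy as np

def dtm_step(pos, goal, obstacles, dt=0.1, k_att=1.0, k_rep=0.5, r0=1.0):
    # Attraction toward a goal plus short-range repulsion from obstacles
    # within influence radius r0; the new position after one time step.
    force = k_att * (goal - pos)
    for ob in obstacles:
        d_vec = pos - ob
        d = np.linalg.norm(d_vec)
        if 1e-9 < d < r0:
            force += k_rep * (1.0 / d - 1.0 / r0) * d_vec / d**2
    return pos + dt * force

pos = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])
new_pos = dtm_step(pos, goal, obstacles=[np.array([1.0, 0.5])])
```

Validating such a model then amounts to comparing the positions it predicts step by step against the poses extracted from the recorded sequences.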


Garrell A.,Institute Of Robotica I Informatica Industrial Csic Upc | Sanfeliu A.,Institute Of Robotica I Informatica Industrial Csic Upc
IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings | Year: 2010

This article presents a novel approach for locally optimizing the work of cooperative robots and obtaining the minimum displacement of humans in a people-guiding mission. Unlike other methods, we consider situations where individuals can move freely and escape from the formation, and must then be regrouped by multiple mobile robots working cooperatively. The problem is addressed by introducing a "Discrete Time Motion" (DTM) model and a new cost function that minimizes the work required by robots for leading and regrouping people. The guiding mission is carried out in urban areas containing multiple obstacles and building constraints. Furthermore, an analysis of the forces acting between robots and humans is presented through simulations of different robot and human configurations and behaviors. ©2010 IEEE.


Rozo L.,Institute Of Robotica I Informatica Industrial Csic Upc | Jimenez P.,Institute Of Robotica I Informatica Industrial Csic Upc | Torras C.,Institute Of Robotica I Informatica Industrial Csic Upc
IEEE 15th International Conference on Advanced Robotics: New Boundaries for Robotics, ICAR 2011 | Year: 2011

A learning framework with a bidirectional communication channel is proposed, where a human performs several demonstrations of a task using a haptic device (providing him/her with force-torque feedback) while a robot captures these executions using only its force-based perceptive system. Our work departs from the usual approaches to learning by demonstration in that the robot has to execute the task blindly, relying only on force-torque perceptions, and, more essentially, we address goal-driven manipulation tasks with multiple solution trajectories, whereas most works tackle tasks that can be learned by just finding a generalization at the trajectory level. To cope with these multiple-solution tasks, in our framework demonstrations are represented by means of a Hidden Markov Model (HMM), and the robot's reproduction of the task is performed using a modified version of Gaussian Mixture Regression that incorporates temporal information (GMRa) through the forward variable of the HMM. We also exploit the haptic device as a teaching and communication tool in a human-robot interaction context, as an alternative to kinesthetic teaching systems. Results show that the robot is able to learn a container-emptying task relying only on force-based perceptions and to achieve the goal from several non-trained initial conditions. © 2011 IEEE.
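For context, here is plain Gaussian Mixture Regression for a 1-D input and 1-D output; each component contributes its conditional mean, weighted by its responsibility at the query point. The paper's GMRa variant replaces these responsibilities with the HMM forward variable, a temporal part omitted here, and the component parameters below are toy values:

```python
import numpy as np

def gmr(x, priors, means, covs):
    # Standard GMR: weight each component's conditional mean E[y | x, k]
    # by its responsibility h_k(x) under the input marginal.
    K = len(priors)
    h = np.empty(K)
    y = np.empty(K)
    for k in range(K):
        mu_x, mu_y = means[k]
        s_xx, s_xy = covs[k][0][0], covs[k][0][1]
        h[k] = priors[k] * np.exp(-0.5 * (x - mu_x) ** 2 / s_xx) \
               / np.sqrt(2 * np.pi * s_xx)
        y[k] = mu_y + s_xy / s_xx * (x - mu_x)  # conditional mean given x, k
    h /= h.sum()
    return float(h @ y)

# Two illustrative components along a demonstrated path from (0,0) to (1,1):
priors = [0.5, 0.5]
means = [(0.0, 0.0), (1.0, 1.0)]
covs = [np.eye(2) * 0.1, np.eye(2) * 0.1]
```

Swapping the responsibilities for the forward variable makes the weighting depend on the sequence of observations so far, not just the current input, which is what disambiguates between multiple solution trajectories.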


Agostini A.,Institute Of Robotica I Informatica Industrial Csic Upc | Celaya E.,Institute Of Robotica I Informatica Industrial Csic Upc
IJCAI International Joint Conference on Artificial Intelligence | Year: 2011

In this work we propose an approach to generalization in continuous-domain Reinforcement Learning that, instead of using a single function approximator, tries many different function approximators in parallel, each one defined in a different region of the domain. Associated with each approximator is a relevance function that locally quantifies the quality of its approximation, so that, at each input point, the approximator with highest relevance can be selected. The relevance function is defined using parametric estimations of the variance of the q-values and the density of samples in the input space, which are used to quantify the accuracy of and the confidence in the approximation, respectively. These parametric estimations are obtained from a probability density represented as a Gaussian Mixture Model embedded in the input-output space of each approximator. In our experiments, the proposed approach required fewer experiences for learning and produced more stable convergence profiles than a single function approximator.
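The selection mechanism can be sketched as follows; the relevance formula, scales and the `LocalApprox` interface are illustrative assumptions (the paper's GMM-based parametric estimates are not reproduced):

```python
import numpy as np

def relevance(var_q, density, var_scale=1.0, dens_scale=1.0):
    # Illustrative score: high when the local variance of the q-values is
    # low (accuracy) and the sample density is high (confidence).
    return np.exp(-var_q / var_scale) * (1.0 - np.exp(-density / dens_scale))

class LocalApprox:
    # Hypothetical local approximator exposing its variance/density at x.
    def __init__(self, var, dens):
        self.var, self.dens = var, dens
    def variance(self, x):
        return self.var
    def density(self, x):
        return self.dens

def select_approximator(x, approximators):
    # At each input point, pick the approximator with highest relevance.
    scores = [relevance(a.variance(x), a.density(x)) for a in approximators]
    return approximators[int(np.argmax(scores))]

well_sampled = LocalApprox(var=0.1, dens=5.0)
sparse = LocalApprox(var=2.0, dens=0.2)
```

A well-sampled, low-variance region thus dominates its competitors locally, while poorly covered approximators are ignored until they accumulate evidence.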


Ramisa A.,Institute Of Robotica I Informatica Industrial Csic Upc | Alenya G.,Institute Of Robotica I Informatica Industrial Csic Upc | Moreno-Noguer F.,Institute Of Robotica I Informatica Industrial Csic Upc | Torras C.,Institute Of Robotica I Informatica Industrial Csic Upc
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2012

Detecting grasping points is a key problem in cloth manipulation. Most current approaches follow a multiple re-grasp strategy for this purpose, in which clothes are sequentially grasped from different points until one of them yields a desired configuration. In this paper, by contrast, we circumvent the need for multiple re-grasps by building a robust detector that identifies the grasping points, generally in a single step, even when clothes are highly wrinkled. © 2012 IEEE.


Rojas N.,Institute Of Robotica I Informatica Industrial Csic Upc | Borras J.,Institute Of Robotica I Informatica Industrial Csic Upc | Thomas F.,Institute Of Robotica I Informatica Industrial Csic Upc
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2012

In most practical implementations of the Gough-Stewart platform, the octahedral form is either taken as it stands or approximated. The kinematics of this particular instance of the Gough-Stewart platform, commonly known as the octahedral manipulator, has been thoroughly studied. It is well known, for example, that its forward kinematics can be solved by computing the roots of an octic polynomial, and that its singularities have a simple geometric interpretation in terms of the intersection of four planes at a single point. In this paper, using a distance-based formulation, it is shown how these properties can be derived without relying on either variable elimination or trigonometric substitutions. Moreover, thanks to this formulation, a family of platforms kinematically equivalent to the octahedral manipulator is obtained. Herein, two Gough-Stewart parallel platforms are said to be kinematically equivalent if there is a one-to-one correspondence between their squared leg lengths for the same configuration of their moving platforms with respect to their bases. If this condition is satisfied, it can be shown that both platforms have the same assembly modes and that their singularities, in the configuration space of the moving platform, are located in the same place. © 2012 IEEE.
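The squared leg lengths at the heart of this equivalence are straightforward to compute from the attachment coordinates and the platform pose; the attachment points below are toy values, not an octahedral design:

```python
import numpy as np

def squared_leg_lengths(base_pts, plat_pts, R, t):
    # Squared leg lengths of a Gough-Stewart platform at pose (R, t):
    # l_i^2 = || R p_i + t - b_i ||^2 for platform points p_i and base
    # points b_i. Kinematic equivalence (in the paper's sense) is a
    # one-to-one correspondence between these values at every pose.
    moved = plat_pts @ R.T + t
    return np.sum((moved - base_pts) ** 2, axis=1)

# Toy attachment coordinates: platform directly above a matching base.
base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
plat = base.copy()
lengths = squared_leg_lengths(base, plat, np.eye(3), np.array([0.0, 0.0, 2.0]))
```

Working directly with these squared distances is what lets the distance-based formulation avoid both variable elimination and trigonometric substitutions.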
