Stockholm, Sweden


Bergstrom N.,KTH Royal Institute of Technology | Bergstrom N.,Center for Autonomous Systems | Bjorkman M.,KTH Royal Institute of Technology | Bjorkman M.,Center for Autonomous Systems | And 2 more authors.
IEEE International Conference on Intelligent Robots and Systems | Year: 2011

We propose a method for interactive modeling of objects and object relations based on real-time segmentation of video sequences. In interaction with a human, the robot can perform multi-object segmentation through principled modeling of physical constraints. The key contribution is an efficient multi-labeling framework that allows object modeling and disambiguation in natural scenes. Object modeling and labeling are performed in a real-time segmentation system, to which hypotheses and constraints denoting relations between objects can be added incrementally. Through instructions such as key presses or spoken words, a scene can be segmented into regions corresponding to multiple physical objects. The approach solves some of the difficult problems related to disambiguation of objects merged due to their direct physical contact. Results show that even a limited set of simple interactions with a human operator can substantially improve segmentation results. © 2011 IEEE.


Karayiannidis Y.,Center for Autonomous Systems | Doulgeri Z.,Aristotle University of Thessaloniki
Robotica | Year: 2013

Fast and robust tracking against unknown disturbances is required in many modern complex robotic structures and applications, for which knowledge of the full exact nonlinear system is unreasonable to assume. This paper proposes a regressor-free nonlinear controller of low complexity which ensures prescribed performance position error tracking subject to unknown endogenous and exogenous bounded dynamics, assuming that joint position and velocity measurements are available. It is theoretically shown, and demonstrated by a simulation study, that the proposed controller can guarantee tracking of the desired joint position trajectory with a priori determined accuracy, overshoot and speed of response. Preliminary experimental results on a simplified system are promising for validating the controller on more complex structures. Copyright © Cambridge University Press 2013.
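The "a priori determined accuracy, overshoot and speed of response" above are typically encoded with a prescribed performance envelope. The sketch below is not the paper's controller; it only illustrates the standard envelope construction, an exponentially decaying bound rho(t) that the tracking error must stay inside (the function name and the example error trajectory are illustrative assumptions):

```python
import numpy as np

def performance_envelope(t, rho0=1.0, rho_inf=0.05, lam=2.0):
    """Exponentially decaying bound rho(t): prescribed performance
    requires |e(t)| < rho(t) for all t. rho0 bounds the initial error,
    rho_inf the steady-state error, and lam sets the speed of response."""
    return (rho0 - rho_inf) * np.exp(-lam * t) + rho_inf

# A hypothetical error trajectory that decays faster than the envelope,
# so it respects the prescribed performance bound at every instant.
t = np.linspace(0.0, 3.0, 301)
e = 0.8 * np.exp(-3.0 * t) * np.cos(5.0 * t)
rho = performance_envelope(t)
print(bool(np.all(np.abs(e) < rho)))  # → True
```

Tightening `rho_inf` sharpens the guaranteed steady-state accuracy, while increasing `lam` demands a faster transient.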


Romero J.,Center for Autonomous Systems | Feix T.,Otto Bock Healthcare GmbH | Kjellstrom H.,Center for Autonomous Systems | Kragic D.,Center for Autonomous Systems
IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings | Year: 2010

Understanding the spatial dimensionality and temporal context of human hand actions can provide representations for programming grasping actions in robots and inspire design of new robotic and prosthetic hands. The natural representation of human hand motion has high dimensionality. For specific activities such as handling and grasping of objects, the commonly observed hand motions lie on a lower-dimensional non-linear manifold in hand posture space. Although full body human motion is well studied within Computer Vision and Biomechanics, there is very little work on the analysis of hand motion with nonlinear dimensionality reduction techniques. In this paper we use Gaussian Process Latent Variable Models (GPLVMs) to model the lower dimensional manifold of human hand motions during object grasping. We show how the technique can be used to embed high-dimensional grasping actions in a lower-dimensional space suitable for modeling, recognition and mapping. ©2010 IEEE.
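The core idea above is embedding high-dimensional hand postures in a low-dimensional latent space. The sketch below is a stand-in, not the paper's method: it uses linear PCA (via SVD) on synthetic "posture" data instead of a nonlinear GPLVM, and all data dimensions are invented, purely to show the embedding step:

```python
import numpy as np

# Synthetic stand-in data: 200 postures in a 20-DoF joint space that
# actually vary along only 2 latent directions (a low-dim manifold).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 20))
postures = latent @ mixing + 0.01 * rng.normal(size=(200, 20))

# Linear PCA via SVD of the centered data matrix (the paper uses the
# nonlinear GPLVM; PCA only illustrates the dimensionality reduction).
X = postures - postures.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
embedding = X @ Vt[:2].T          # 2-D latent coordinates per posture

# Nearly all variance lives in the top two components.
explained = float((S[:2] ** 2).sum() / (S ** 2).sum())
print(embedding.shape, round(explained, 3))
```

A GPLVM replaces the linear map `Vt` with a Gaussian-process mapping from latent to data space, which is what lets it capture the non-linear manifold of grasping motions.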


Rubio O.J.,Center for Autonomous Systems | Huebner K.,Center for Autonomous Systems | Kragic D.,Center for Autonomous Systems
IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings | Year: 2010

We study two important problems in the area of robot grasping: i) the methodology and representations for grasp selection on known and unknown objects, and ii) learning from experience for grasping of similar objects. The core part of the paper is the study of the different representations necessary for implementing grasping tasks on objects of different complexity. We show how to select a grasp satisfying force-closure, taking into account the parameters of the robot hand and collision-free paths. Our implementation also takes into account efficient computation at different levels of the system regarding representation, description and grasp hypothesis generation. ©2010 IEEE.
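The force-closure condition mentioned above can be made concrete with the standard two-finger antipodal test (a textbook check, not necessarily the paper's exact criterion): a two-contact grasp is force-closure when the line connecting the contacts lies inside both friction cones. All geometry below is an invented toy example:

```python
import numpy as np

def antipodal_force_closure(p1, n1, p2, n2, mu):
    """Two-finger force-closure test: True if the line between contacts
    p1, p2 lies inside both friction cones. n1, n2 are inward-pointing
    unit contact normals; mu is the Coulomb friction coefficient."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    d = d / np.linalg.norm(d)
    half_angle = np.arctan(mu)           # friction cone half-angle
    ang1 = np.arccos(np.clip(np.dot(d, n1), -1.0, 1.0))
    ang2 = np.arccos(np.clip(np.dot(-d, n2), -1.0, 1.0))
    return bool(ang1 <= half_angle and ang2 <= half_angle)

# Opposing contacts on a box face: force-closure with moderate friction.
print(antipodal_force_closure([0, 0], [1, 0], [1, 0], [-1, 0], mu=0.3))  # → True
# Normals perpendicular to the grasp axis: no force-closure.
print(antipodal_force_closure([0, 0], [0, 1], [1, 0], [0, -1], mu=0.3))  # → False
```

Multi-finger grasp planners generalize this by checking that the contact wrenches positively span the full wrench space.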


Colledanchise M.,Center for Autonomous Systems | Marzinotto A.,Center for Autonomous Systems | Ogren P.,Center for Autonomous Systems
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2014

This paper presents a mathematical framework for performance analysis of Behavior Trees (BTs). BTs are a recent alternative to Finite State Machines (FSMs), for doing modular task switching in robot control architectures. By encoding the switching logic in a tree structure, instead of distributing it in the states of a FSM, modularity and reusability are improved. In this paper, we compute performance measures, such as success/failure probabilities and execution times, for plans encoded and executed by BTs. To do this, we first introduce Stochastic Behavior Trees (SBT), where we assume that the probabilistic performance measures of the basic action controllers are given. We then show how Discrete Time Markov Chains (DTMC) can be used to aggregate these measures from one level of the tree to the next. The recursive structure of the tree then enables us to step by step propagate such estimates from the leaves (basic action controllers) to the root (complete task execution). Finally, we verify our analytical results using massive Monte Carlo simulations, and provide an illustrative example of the results for a complex robotic task. © 2014 IEEE.
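The leaf-to-root propagation described above is easy to see in the simplest case. The sketch below assumes independent child outcomes and aggregates only success probabilities (the paper's full DTMC analysis also covers execution times and repeated ticks); the tree shape and probabilities are invented for illustration:

```python
# Aggregating success probabilities up a Behavior Tree, one level at a
# time, assuming independent child outcomes.

def sequence_success(child_probs):
    """A Sequence node succeeds only if every child succeeds in turn."""
    p = 1.0
    for pc in child_probs:
        p *= pc
    return p

def fallback_success(child_probs):
    """A Fallback (Selector) node succeeds if any child succeeds."""
    p_fail = 1.0
    for pc in child_probs:
        p_fail *= (1.0 - pc)
    return 1.0 - p_fail

# Leaves -> root for the toy tree:
#   Fallback( Sequence(0.9, 0.8), Sequence(0.95, 0.5) )
branch1 = sequence_success([0.9, 0.8])     # 0.72
branch2 = sequence_success([0.95, 0.5])    # 0.475
root = fallback_success([branch1, branch2])
print(round(root, 4))  # → 0.853
```

The recursion mirrors the tree: each composite node's measure is a closed-form function of its children's measures, which is what makes the level-by-level DTMC aggregation tractable.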


Karayiannidis Y.,Center for Autonomous Systems | Smith C.,Center for Autonomous Systems | Vina F.E.,Center for Autonomous Systems | Kragic D.,Center for Autonomous Systems
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2014

One of the big challenges for robots working outside of traditional industrial settings is the ability to robustly and flexibly grasp and manipulate tools for various tasks. When a tool is interacting with another object during task execution, several problems arise: a tool can be partially or completely occluded from the robot's view, and it can slip or shift in the robot's hand, so the robot may lose the information about the exact position of the tool in the hand. Hence, there is a need for online calibration and/or recalibration of the tool. In this paper, we present a model-free online tool-tip calibration method that uses force/torque measurements and an adaptive estimation scheme to estimate the point of contact between a tool and the environment. An adaptive force control component guarantees that interaction forces are limited even before the contact point estimate has converged. We also show how to simultaneously estimate the location and normal direction of the surface being touched by the tool-tip as the contact point is estimated. The stability of the overall scheme and the convergence of the estimated parameters are theoretically proven, and the performance is evaluated in experiments on a real robot. © 2014 IEEE.
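The measurement model behind such force/torque-based calibration is that the wrist torque satisfies tau = r x F, where r is the contact point in the sensor frame. The sketch below solves this model in batch least-squares form over several synthetic measurements; it is a simplified stand-in for the paper's online adaptive estimator, and all numbers are invented:

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_contact_point(forces, torques):
    """Batch least-squares estimate of the contact point r from wrist
    force/torque pairs, using tau = r x F  <=>  tau = -skew(F) @ r.
    Each single measurement is rank-deficient (r is unobservable along
    F), so several distinct force directions are needed."""
    A = np.vstack([-skew(F) for F in forces])
    b = np.concatenate(torques)
    r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return r

# Synthetic check: recover a known tool-tip position from 6 noise-free
# measurements with varied force directions.
rng = np.random.default_rng(1)
r_true = np.array([0.10, -0.02, 0.25])
forces = [rng.normal(size=3) for _ in range(6)]
torques = [np.cross(r_true, F) for F in forces]
print(np.allclose(estimate_contact_point(forces, torques), r_true))  # → True
```

An online scheme like the paper's updates the estimate recursively as new wrenches arrive instead of stacking a batch, which is what allows limiting interaction forces before convergence.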


Rasolzadeh B.,Center for Autonomous Systems | Bjorkman M.,Center for Autonomous Systems | Huebner K.,Center for Autonomous Systems | Kragic D.,Center for Autonomous Systems
International Journal of Robotics Research | Year: 2010

The ability to autonomously acquire new knowledge through interaction with the environment is an important research topic in the field of robotics. The knowledge can only be acquired if suitable perception-action capabilities are present: a robotic system has to be able to detect, attend to and manipulate objects in its surroundings. In this paper, we present the results of our long-term work in the area of vision-based sensing and control. The work on finding, attending, recognizing and manipulating objects in domestic environments is studied. We present a stereo-based vision system framework where aspects of top-down and bottom-up attention as well as foveated attention are put into focus, and demonstrate how the system can be utilized for robotic object grasping.


Bohg J.,Center for Autonomous Systems | Kragic D.,Center for Autonomous Systems
Robotics and Autonomous Systems | Year: 2010

This paper presents work on vision-based robotic grasping. The proposed method adopts a learning framework where prototypical grasping points are learnt from several examples and then used on novel objects. For representation purposes, we apply the concept of shape context, and for learning we use a supervised learning approach in which the classifier is trained with labelled synthetic images. We evaluate and compare the performance of linear and non-linear classifiers. Our results show that a combination of a descriptor based on shape context with a non-linear classification algorithm leads to a stable detection of grasping points for a variety of objects. © 2009 Elsevier B.V. All rights reserved.
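Why a non-linear classifier can outperform a linear one, as the comparison above finds, is easiest to see on data that is not linearly separable. The toy sketch below classifies XOR-patterned 2-D points with an RBF-kernel Parzen-window rule; it is purely illustrative (the paper classifies shape-context descriptors of image patches, and its classifiers differ from this one):

```python
import numpy as np

# XOR-like toy data: no linear decision boundary separates the classes.
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], float)
y = np.array([0, 0, 1, 1])

def rbf_predict(X_train, y_train, x, gamma=4.0):
    """Predict the class whose training points have the larger summed
    RBF-kernel similarity to x (a Parzen-window style non-linear rule)."""
    k = np.exp(-gamma * np.sum((X_train - x) ** 2, axis=1))
    return int(k[y_train == 1].sum() > k[y_train == 0].sum())

preds = [rbf_predict(X, y, x) for x in X]
print(preds)  # → [0, 0, 1, 1]
```

A linear classifier is provably unable to fit this pattern, which mirrors the paper's finding that non-linear decision boundaries are needed for stable grasping-point detection over varied object shapes.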


Bekiroglu Y.,Center for Autonomous Systems | Huebner K.,Center for Autonomous Systems | Kragic D.,Center for Autonomous Systems
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2011

This paper presents an integration of grasp planning and online grasp stability assessment based on tactile data. We show how the uncertainty in grasp execution posterior to grasp planning can be dealt with using tactile sensing and machine learning techniques. The majority of the state-of-the-art grasp planners demonstrate impressive results in simulation. However, these results are mostly based on perfect scene/object knowledge allowing for analytical measures to be employed. It is questionable how well these measures can be used in realistic scenarios where the information about the object and robot hand may be incomplete and/or uncertain. Thus, tactile and force-torque sensory information is necessary for successful online grasp stability assessment. We show how a grasp planner can be integrated with a probabilistic technique for grasp stability assessment in order to improve the hypotheses about suitable grasps on different types of objects. Experimental evaluation with a three-fingered robot hand equipped with tactile array sensors shows the feasibility and strength of the integrated approach. © 2011 IEEE.


Marzinotto A.,Center for Autonomous Systems | Colledanchise M.,Center for Autonomous Systems | Smith C.,Center for Autonomous Systems | Ogren P.,Center for Autonomous Systems
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2014

This paper presents a unified framework for Behavior Trees (BTs), a plan representation and execution tool. The available literature lacks the consistency and mathematical rigor required for robotic and control applications. Therefore, we approach this problem in two steps: first, reviewing the most popular BT literature and exposing the aforementioned issues; second, describing our unified BT framework along with equivalence notions between BTs and Controlled Hybrid Dynamical Systems (CHDSs). This paper improves on the existing state of the art as it describes BTs in a more accurate and compact way, while providing insight about their actual representation capabilities. Lastly, we demonstrate the applicability of our framework to real systems scheduling open-loop actions in a grasping mission that involves a NAO robot and our BT library. © 2014 IEEE.
