Susperregi L., Autonomous and Smart Systems Unit | Martinez-Otzeta J.M., Autonomous and Smart Systems Unit | Ansuategui A., Autonomous and Smart Systems Unit | Ibarguren A., Autonomous and Smart Systems Unit | Sierra B., University of the Basque Country
International Journal of Advanced Robotic Systems | Year: 2013

Detecting and tracking people is a key capability for robots that operate in populated environments. In this paper we use a multiple-sensor fusion approach that combines three kinds of sensors mounted on a mobile platform in order to detect people: RGB-D vision, a laser and a thermal sensor. The Kinect sensor offers a rich data set at a significantly low cost; however, its use on a mobile platform has some limitations, mainly because the Kinect algorithms for people detection rely on images captured by a static camera. To cope with these limitations, this work combines the Kinect with a Hokuyo laser and a thermopile array sensor. A real-time particle filter system merges the information provided by the sensors and calculates the position of the target, using probabilistic leg and thermal patterns, image features and optical flow. Experimental results obtained with a mobile platform in a science museum show that the combination of different sensory cues increases the reliability of the people-following system. © 2013 Susperregi et al.
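
As a rough illustration of the kind of multi-cue particle filter described above, the sketch below fuses three noisy position cues by multiplying their likelihoods before resampling. All names (predict, update, resample, gaussian_likelihood) and the Gaussian cue models are hypothetical placeholders, not the authors' probabilistic leg, thermal and image models.

    # Minimal multi-cue particle-filter sketch (illustrative, not the paper's code).
    import numpy as np

    def predict(particles, motion_std=0.10):
        """Propagate 2D particles with a simple random-walk motion model."""
        return particles + np.random.normal(0.0, motion_std, particles.shape)

    def gaussian_likelihood(particles, measurement, sigma):
        """Weight particles by their distance to a single (x, y) cue measurement."""
        d2 = np.sum((particles - measurement) ** 2, axis=1)
        return np.exp(-0.5 * d2 / sigma ** 2)

    def update(particles, cues):
        """Fuse independent sensor cues by multiplying their likelihoods."""
        weights = np.ones(len(particles))
        for measurement, sigma in cues:          # e.g. leg, thermal and image cues
            weights *= gaussian_likelihood(particles, measurement, sigma)
        weights += 1e-300                        # guard against an all-zero weight vector
        return weights / weights.sum()

    def resample(particles, weights):
        """Multinomial resampling of the particle set."""
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        return particles[idx]

    # Toy usage: one target observed by three noisy cues (all values are made up).
    particles = np.random.default_rng(0).uniform(-1, 1, size=(500, 2))
    for _ in range(20):
        particles = predict(particles)
        cues = [(np.array([0.50, 0.20]), 0.3),   # laser "leg" cue
                (np.array([0.55, 0.25]), 0.4),   # thermal cue
                (np.array([0.45, 0.15]), 0.5)]   # image / optical-flow cue
        weights = update(particles, cues)
        particles = resample(particles, weights)
    print("estimated target position:", particles.mean(axis=0))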


Ansuategui A., Autonomous and Smart Systems Unit | Arruti A., University of the Basque Country | Susperregi L., Autonomous and Smart Systems Unit | Yurramendi Y., University of the Basque Country | And 3 more authors.
Scientific World Journal | Year: 2014

The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to robot trajectory comparison that can be applied to any kind of trajectory, in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization, named polygraph, is provided to help interpret the obtained results. As an example, the proposed method has been applied to compare two different motion planners, FM2 and WaveFront, using different environments, robots and local planners. © 2014 A. Ansuategui et al.
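
To make the comparison procedure concrete, the sketch below extracts two simple trajectory features (path length and a heading-change smoothness measure) and compares two planners with a non-parametric test. The feature set, the Mann-Whitney U test and the names path_length, smoothness and compare_planners are illustrative assumptions, not the paper's exact feature-selection and comparison method.

    # Hedged sketch: compare two sets of trajectories feature by feature.
    import numpy as np
    from scipy.stats import mannwhitneyu

    def path_length(traj):
        """Total length of a polyline trajectory given as an (N, 2) array."""
        return np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))

    def smoothness(traj):
        """Mean absolute heading change between consecutive segments (radians)."""
        seg = np.diff(traj, axis=0)
        heading = np.arctan2(seg[:, 1], seg[:, 0])
        return np.mean(np.abs(np.diff(heading)))

    FEATURES = {"length": path_length, "smoothness": smoothness}

    def compare_planners(trajs_a, trajs_b, alpha=0.05):
        """Return, per feature, the p-value and whether the planners differ significantly."""
        report = {}
        for name, fn in FEATURES.items():
            a = [fn(t) for t in trajs_a]
            b = [fn(t) for t in trajs_b]
            _, p = mannwhitneyu(a, b, alternative="two-sided")
            report[name] = {"p_value": p, "significant": p < alpha}
        return report

    # Toy usage: random walks stand in for the trajectories of two planners.
    rng = np.random.default_rng(1)
    planner_a = [np.cumsum(rng.normal(0, 0.1, size=(50, 2)), axis=0) for _ in range(30)]
    planner_b = [np.cumsum(rng.normal(0, 0.2, size=(50, 2)), axis=0) for _ in range(30)]
    print(compare_planners(planner_a, planner_b))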


Susperregi L., Autonomous and Smart Systems Unit | Sierra B., University of the Basque Country | Castrillon M., University of Las Palmas de Gran Canaria | Lorenzo J., University of Las Palmas de Gran Canaria | And 2 more authors.
Sensors (Switzerland) | Year: 2013

Detecting people is a key capability for robots that operate in populated environments. In this paper, we adopt a hierarchical approach that combines classifiers created using supervised learning in order to identify whether or not a person is in the view scope of the robot. Our approach makes use of vision, depth and thermal sensors mounted on top of a mobile platform. The sensor set combines the rich data source offered by a Kinect sensor, which provides vision and depth at low cost, with a thermopile array sensor. Experimental results obtained with a mobile platform on a manufacturing shop floor and in a science museum show that the false positive rate achieved using any single cue is drastically reduced. Our algorithm outperforms other well-known approaches, such as C4 and the histogram of oriented gradients (HOG). © 2013 by the authors; licensee MDPI, Basel, Switzerland.
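
One simple way to picture a hierarchical combination of per-cue classifiers is a cascade in which a detection must be accepted by every cue's classifier. The CueCascade class, the random-forest base learners and the random stand-in features below are all hypothetical; the paper's actual hierarchy, features and learners are not reproduced here.

    # Illustrative cascade of per-cue classifiers to suppress false positives.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    class CueCascade:
        """Accept a detection only if every per-cue classifier accepts it."""

        def __init__(self, cue_names):
            self.models = {c: RandomForestClassifier(n_estimators=50, random_state=0)
                           for c in cue_names}

        def fit(self, cue_features, labels):
            # cue_features: dict mapping cue name -> (n_samples, n_features) array
            for cue, X in cue_features.items():
                self.models[cue].fit(X, labels)
            return self

        def predict(self, cue_features):
            n = len(next(iter(cue_features.values())))
            accepted = np.ones(n, dtype=bool)
            for cue, X in cue_features.items():   # e.g. "rgb", "depth", "thermal"
                accepted &= self.models[cue].predict(X).astype(bool)
            return accepted

    # Toy usage with random stand-in features for the three cues.
    rng = np.random.default_rng(2)
    y = rng.integers(0, 2, size=200)
    cues = {c: rng.normal(size=(200, 8)) + y[:, None] * 0.5
            for c in ("rgb", "depth", "thermal")}
    cascade = CueCascade(cues.keys()).fit(cues, y)
    print("accepted detections:", cascade.predict(cues).sum())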


Susperregi L., Autonomous and Smart Systems Unit | Jauregi E., University of the Basque Country | Sierra B., University of the Basque Country | Martinez-Otzeta J.M., Autonomous and Smart Systems Unit | And 2 more authors.
ICINCO 2013 - Proceedings of the 10th International Conference on Informatics in Control, Automation and Robotics | Year: 2013

In this paper we propose a novel approach for combining information from multiple low-cost sensors for people detection on a mobile robot. Robustly detecting people is a key capability for robots that operate in populated environments, and several works have shown the advantages of fusing data coming from complementary sensors. The Kinect sensor offers a rich data set at a significantly low cost; however, there are some limitations to using it on a mobile platform, mainly that the Kinect relies on images captured by a static camera. To cope with these limitations, this work is based on the fusion of a Kinect and a thermopile array sensor mounted on top of a mobile platform. We propose the evolutionary selection of people-detection supervised classifiers built using several computer vision transformations. Experimental results obtained with a mobile platform on a manufacturing shop floor show that the misclassification rate obtained using only the Kinect is drastically reduced both by the classification algorithms and by the combination of the three information sources.
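
The evolutionary selection idea can be sketched as a small genetic algorithm that searches for the subset of base classifiers whose majority vote scores best on a validation set. The fitness function, the genetic operators and the names fitness and evolve are assumptions made for illustration, not the paper's configuration.

    # Rough sketch of evolutionary classifier selection via a genetic algorithm.
    import numpy as np

    def fitness(mask, preds, y_val):
        """Validation accuracy of the majority vote over the selected classifiers."""
        if mask.sum() == 0:
            return 0.0
        vote = preds[mask.astype(bool)].mean(axis=0) >= 0.5
        return np.mean(vote == y_val)

    def evolve(preds, y_val, pop_size=20, generations=40, p_mut=0.1, seed=3):
        rng = np.random.default_rng(seed)
        n_clf = preds.shape[0]
        pop = rng.integers(0, 2, size=(pop_size, n_clf))      # bit mask per individual
        for _ in range(generations):
            scores = np.array([fitness(ind, preds, y_val) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]  # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n_clf)                   # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flip = rng.random(n_clf) < p_mut               # bit-flip mutation
                children.append(np.where(flip, 1 - child, child))
            pop = np.vstack([parents, np.array(children)])
        scores = np.array([fitness(ind, preds, y_val) for ind in pop])
        return pop[np.argmax(scores)]

    # Toy usage: 10 simulated base classifiers voting on 300 validation samples.
    rng = np.random.default_rng(4)
    y_val = rng.integers(0, 2, size=300)
    error_rates = rng.uniform(0.1, 0.45, size=10)
    preds = np.array([np.where(rng.random(300) < e, 1 - y_val, y_val) for e in error_rates])
    best = evolve(preds, y_val)
    print("selected classifiers:", np.flatnonzero(best))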
