Delfin J.,Robotics and Advanced Manufacturing Group | Becerra H.M.,Research Center in Mathematics | Arechavaleta G.,Robotics and Advanced Manufacturing Group
International Journal of Control

In this paper, we address the problem of humanoid locomotion guided by information from a monocular camera. The goal of the robot is to reach a desired location defined in terms of a target image, i.e., a positioning task. The proposed approach allows us to introduce a desired time to complete the positioning task, which is advantageous in contrast to the classical exponential convergence. In particular, finite-time convergence is achieved while generating smooth robot velocities and considering the omnidirectional walking capability of the robot. In addition, we propose a hierarchical task-based control scheme, which can simultaneously handle the visual positioning and the obstacle avoidance tasks without affecting the desired time of convergence. The controller is able to activate or deactivate the obstacle avoidance task without generating discontinuous velocity references while the humanoid is walking. Stability of the closed loop for the two-task control is demonstrated theoretically, even during the transitions between the tasks. The proposed approach is generic in the sense that different visual control schemes are supported. We evaluate a homography-based visual servoing scheme for position-based and image-based modalities, as well as for eye-in-hand and eye-to-hand configurations. The experimental evaluation is performed with the humanoid robot NAO. © 2016 Taylor & Francis
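
As a rough illustration of the desired-time convergence idea described above, the following minimal Python sketch drives a task error to zero at a user-defined time T using a smooth time-varying gain built from a time base generator. The profile, gains and error vector are illustrative assumptions, not the control law of the paper.

    # Minimal sketch (assumption): driving a task error to zero at a user-defined
    # time T with a smooth time-varying gain built from a time base generator.
    # The profile, gains and error vector are illustrative, not the paper's law.
    import numpy as np

    T = 10.0      # desired convergence time [s]
    dt = 0.01     # integration step [s]
    delta = 1e-3  # small constant to avoid division by zero

    def tbg(t, T):
        """Smooth profile xi decreasing from 1 to 0 on [0, T], with zero slope at both ends."""
        s = np.clip(t / T, 0.0, 1.0)
        xi = 1.0 - (10*s**3 - 15*s**4 + 6*s**5)       # quintic blend
        xi_dot = -(30*s**2 - 60*s**3 + 30*s**4) / T
        return xi, xi_dot

    e = np.array([0.5, -0.3, 0.2])        # initial task error (e.g., pose or image error)
    for k in range(int(T / dt)):
        xi, xi_dot = tbg(k * dt, T)
        gain = -xi_dot / (xi + delta)     # smooth, bounded gain
        e = e + (-gain * e) * dt          # error dynamics e_dot = -gain * e
    print("error norm at t = T:", np.linalg.norm(e))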

Delfin J.,Robotics and Advanced Manufacturing Group | Becerra H.M.,Research Center in Mathematics | Arechavaleta G.,Robotics and Advanced Manufacturing Group
IEEE-RAS International Conference on Humanoid Robots

In this paper, we propose a visual path-following approach for humanoid navigation. The problem consists of computing appropriate robot velocities for the humanoid walking task from the visual data shared between the current robot view and a set of target images. Two types of visual controllers are evaluated: a position-based scheme and an image-based scheme. Both of them rely on the estimation of the homography model, even for non-planar scenes. We assume that the sequence of target images is given and we focus on the controllers' performance. Because classical visual path-following controllers generate discontinuous robot velocities, we propose a generic controller (applicable to different types of visual feedback) to alleviate this issue, which is a main contribution of the paper. The stability of this controller is addressed theoretically and verified through experiments with a NAO humanoid robot. © 2014 IEEE.
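
For context, the sketch below shows one way the homography between the current view and a target key image can be estimated and decomposed with OpenCV, which is the kind of visual feedback both controllers rely on. The camera intrinsics, feature detector and thresholds are illustrative assumptions; the path-following control law itself is not shown.

    # Minimal sketch (assumption): homography between current and target views,
    # estimated from ORB matches and decomposed into candidate motions.
    import cv2
    import numpy as np

    K = np.array([[500.0, 0.0, 320.0],   # assumed pinhole camera intrinsics
                  [0.0, 500.0, 240.0],
                  [0.0,   0.0,   1.0]])

    def homography_to_target(img_cur, img_tgt):
        orb = cv2.ORB_create(1000)
        kp1, des1 = orb.detectAndCompute(img_cur, None)
        kp2, des2 = orb.detectAndCompute(img_tgt, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
        # For a position-based scheme, H can be decomposed into candidate
        # rotations/translations (up to the plane normal and a scale factor):
        _, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
        return H, Rs, ts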

Sanchez-Escobedo D.,Robotics and Advanced Manufacturing Group | Castelan M.,Robotics and Advanced Manufacturing Group
Pattern Recognition Letters

This paper addresses the problem of linearly approximating 3D shape from intensities in the context of facial analysis. In other words, given a frontal-pose grayscale input face, the direct estimation of its 3D structure is sought through a regression matrix. Approaches falling into this category generally assume that both 2D and 3D features are defined under Cartesian schemes, which is not optimal for the task of novel view synthesis. The current article aims to overcome this issue by exploiting the 3D structure of faces through cylindrical coordinates, aided by partial least squares regression. In the context of facial shape analysis, partial least squares builds a set of basis faces, for both the grayscale and 3D shape spaces, seeking to maximize the shared covariance between projections of the data onto the basis faces. Experimental tests show that the cylindrical representations are well suited to linear regression, benefiting the generation of novel facial views and showing a potential use in model-based face identification. © 2012 Elsevier B.V. All rights reserved.
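
A minimal sketch of the regression step, assuming vectorized grayscale faces as inputs and vectorized cylindrical-coordinate shape values as outputs, using scikit-learn's PLS implementation with stand-in data; the dimensions and number of components are illustrative.

    # Minimal sketch (assumption): PLS regression from vectorized grayscale faces
    # to vectorized 3D shape values (e.g., cylindrical-coordinate radii).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 4096))   # stand-in for 100 faces of 64x64 intensities
    Y = rng.standard_normal((100, 4096))   # stand-in for their cylindrical shape values

    pls = PLSRegression(n_components=20)   # basis faces maximizing shared covariance
    pls.fit(X, Y)

    x_new = rng.standard_normal((1, 4096)) # a novel frontal grayscale face (vectorized)
    y_hat = pls.predict(x_new)             # linear estimate of its 3D structure
    print(y_hat.shape)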

Hernandez-Rodriguez F.,Robotics and Advanced Manufacturing Group | Castelan M.,Robotics and Advanced Manufacturing Group
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Photometric sampling is a process where the surface normals of an object are estimated by exciting the object's surface with a light source rotating around it. The method can be regarded as a special case of photometric stereo in which extensive sampling is performed in order to calculate surface normals. The classic photometric sampling approach considers only variations around the azimuth angle of the moving light source. As a consequence, additional attention has to be paid to the recovery of the light source directions and the removal of specular and shadowed regions. This paper investigates the effect of including variations around the zenith angle of the light source vector in a photometric sampling framework, developing a geometric approach to estimate the surface normal vectors. Experiments show that increasing the number of samples along the zenith variation benefits the estimation of the surface normals. © 2012 Springer-Verlag Berlin Heidelberg.
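
The following sketch illustrates the underlying least-squares normal estimation common to photometric stereo/sampling, assuming a Lambertian surface, known light directions varying in both azimuth and zenith, and synthetic intensities; it is not the geometric approach developed in the paper.

    # Minimal sketch (assumption): least-squares photometric stereo recovering
    # per-pixel surface normals from intensities under known light directions.
    import numpy as np

    def estimate_normals(I, L):
        """
        I: (n_lights, n_pixels) observed intensities for a Lambertian surface
        L: (n_lights, 3) unit light-source directions (azimuth and zenith varied)
        Returns unit normals of shape (n_pixels, 3).
        """
        # Solve L @ g = I for g = albedo * normal, in the least-squares sense
        G, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, n_pixels)
        G = G.T
        albedo = np.linalg.norm(G, axis=1, keepdims=True)
        return G / np.maximum(albedo, 1e-9)

    # Synthetic check: one pixel with a known normal
    n_true = np.array([0.0, 0.0, 1.0])
    L = np.array([[0.2, 0.0, 0.98], [0.0, 0.3, 0.95],
                  [-0.25, -0.1, 0.96], [0.1, -0.3, 0.95]])
    I = (L @ n_true).reshape(-1, 1)
    print(estimate_normals(I, L))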

Padilla-Calderon R.,Robotics and Advanced Manufacturing Group | Ordaz-Hernandez K.,Robotics and Advanced Manufacturing Group
2014 IEEE International Autumn Meeting on Power, Electronics and Computing, ROPEC 2014

In this paper, we present a novel paradigm for reducing the search space in the visual localization of landmarks on deformable linear objects (DLOs) for robotic manipulation tasks. This is achieved by statistically predicting the locations of the landmarks with a multivariate multiple linear regression model, using the position and orientation of the robot's end-effectors as independent variables and the x-y coordinates of the predicted locations as dependent variables. We implemented this approach with a NAO humanoid robot and a Kinect RGB camera; the results show a reduction of almost 90% in the search space for the positions of 6 landmarks on a deformable linear object with respect to classical methods in which the search is performed over the whole image. © 2014 IEEE.
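
A minimal sketch of the prediction step, assuming a multi-output linear regression from end-effector pose variables to stacked landmark image coordinates, with stand-in data and an illustrative search-window size.

    # Minimal sketch (assumption): multivariate multiple linear regression mapping
    # end-effector poses to the image coordinates of DLO landmarks, so the visual
    # search can be restricted to a window around each prediction.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Training data (stand-ins): each row of X holds [x, y, z, roll, pitch, yaw]
    # for both end-effectors (12 values); each row of Y stacks the (u, v) image
    # coordinates of 6 landmarks (12 values).
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 12))
    Y = rng.standard_normal((200, 12))

    model = LinearRegression().fit(X, Y)    # multi-output linear regression

    x_now = rng.standard_normal((1, 12))    # current end-effector configuration
    pred = model.predict(x_now).reshape(6, 2)
    half = 40                               # half-size (pixels) of each search window
    windows = [(u - half, v - half, u + half, v + half) for u, v in pred]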
