Wiertlewski M., French Atomic Energy Commission | Lozada J., French Atomic Energy Commission | Hayward V., CNRS Institute of Robotics and Intelligent Systems
IEEE Transactions on Robotics | Year: 2011

The tactual scanning of five naturalistic textures was recorded with an apparatus that is capable of measuring the tangential interaction force with a high degree of temporal and spatial resolution. The resulting signal showed that the transformation from the geometry of a surface to the force of traction, and hence to the skin deformation experienced by a finger, is a highly nonlinear process. Participants were asked to identify simulated textures reproduced by stimulating their fingers with rapid, imposed lateral skin displacements as a function of net position. They performed the identification task with a high degree of success, yet not perfectly. The fact that the experimental conditions eliminated many aspects of the interaction, including low-frequency finger deformation, distributed information, and normal skin movements, shows that the nervous system is able to rely on only two cues: amplitude and spectral information. The examination of the spatial spectrograms of the imposed lateral skin displacement revealed that texture could be represented spatially, despite being sensed through time, and that these spectrograms were distinctively organized into what could be called spatial formants. This finding led us to speculate that the mechanical properties of the finger enable spatial information to be used for perceptual purposes in humans without distributed sensing, a principle that could also be applied to robots. © 2011 IEEE.
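
To make the notion of a spatial spectrogram concrete, the sketch below computes one from a lateral-displacement trace sampled uniformly over finger position rather than over time, so the resulting frequencies come out in cycles per millimetre. This is an illustrative reconstruction, not the authors' code: the sampling step, window sizes, and the synthetic trace are all assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

# Hypothetical trace: lateral skin displacement u sampled uniformly
# over finger position x, not over time.
dx = 0.01                              # spatial step, mm
x = np.arange(0.0, 50.0, dx)           # 50 mm of scanned surface
u = np.random.default_rng(0).standard_normal(x.size)  # stand-in for data

# Because the sampling axis is position, the 'sampling rate' is 1/dx
# and the resulting frequencies are spatial (cycles/mm), not Hz.
f, pos, Sxx = spectrogram(u, fs=1.0 / dx, nperseg=512, noverlap=384)

# Energy ridges that persist at fixed spatial frequencies across
# positions are the 'spatial formants' described in the abstract.
dominant = f[Sxx.argmax(axis=0)]       # dominant spatial frequency per window
```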


Cully A., University Pierre and Marie Curie | Cully A., CNRS Institute of Robotics and Intelligent Systems | Clune J., University of Wyoming | Tarapore D., University Pierre and Marie Curie | And 6 more authors.
Nature | Year: 2015

Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot think outside the box to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot's prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury. © 2015 Macmillan Publishers Limited. All rights reserved.
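
The adaptation loop can be sketched in a few lines. This is a conceptual simplification, not the authors' implementation: the paper builds the behaviour-performance map offline with an evolutionary algorithm and then adapts online using Bayesian optimization with a Gaussian process, whereas the sketch below replaces the Gaussian process with a simple kernel-weighted belief update; the descriptors, map size, kernel width, and stopping threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_behaviours = 200
features = rng.random((n_behaviours, 6))  # hypothetical behaviour descriptors
prior_perf = rng.random(n_behaviours)     # map built before deployment

def trial(i):
    """Stand-in for executing behaviour i on the damaged robot and
    measuring its performance (here, damage halves every behaviour)."""
    return prior_perf[i] * 0.5

belief = prior_perf.copy()                # current performance estimates
tested = {}
for _ in range(20):                       # small physical trial budget
    i = int(np.argmax(belief))            # most promising behaviour
    observed = trial(i)
    tested[i] = observed
    # Propagate the surprise to similar behaviours: points close in
    # behaviour space are assumed to be affected similarly by damage.
    d2 = np.sum((features - features[i]) ** 2, axis=1)
    belief = belief + np.exp(-d2 / 0.1) * (observed - belief[i])
    if observed >= 0.9 * belief.max():    # good enough vs. expectations
        break

best = max(tested, key=tested.get)        # compensatory behaviour found
```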


Sahbani A., CNRS Institute of Robotics and Intelligent Systems | El-Khoury S., Ecole Polytechnique Federale de Lausanne | Bidaud P., CNRS Institute of Robotics and Intelligent Systems
Robotics and Autonomous Systems | Year: 2012

This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands. Robotic grasping has been an active research subject for decades, and a great deal of effort has been spent on grasp synthesis algorithms. Existing surveys focus on the mechanics of grasping and finger-object contact interactions (Bicchi and Kumar, 2000 [12]) or on robot hand design and control (Al-Gallaf et al., 1993 [70]). Robot grasp synthesis algorithms were reviewed in Shimoga (1996) [71], but since then important progress has been made in applying learning techniques to the grasping problem. This overview covers analytical as well as empirical grasp synthesis approaches. © 2011 Elsevier B.V. All rights reserved.
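
As a concrete instance of the analytical side of grasp synthesis, the sketch below implements a standard planar force-closure test: a grasp is force-closure if the origin of wrench space lies strictly inside the convex hull of the primitive contact wrenches. This is a generic textbook criterion, not one of the specific algorithms surveyed; the contact geometry and friction coefficient are illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

def contact_wrenches(p, n, mu):
    """Primitive wrenches at contact point p with inward unit normal n:
    the two edges of the linearized friction cone, each expressed as
    (fx, fy, torque about the object origin)."""
    t = np.array([-n[1], n[0]])                    # tangent direction
    edges = (n + mu * t, n - mu * t)               # friction-cone edges
    return [[f[0], f[1], p[0] * f[1] - p[1] * f[0]] for f in edges]

def force_closure(contacts, mu=0.5):
    W = np.array([w for p, n in contacts
                  for w in contact_wrenches(np.asarray(p, float),
                                            np.asarray(n, float), mu)])
    hull = ConvexHull(W)
    # Facet equations satisfy A @ x + b <= 0 for interior points, so the
    # origin is strictly interior iff every offset b is strictly negative.
    return bool(np.all(hull.equations[:, -1] < -1e-9))

# Two antipodal frictional contacts on opposite sides of an object.
print(force_closure([((1.0, 0.0), (-1.0, 0.0)),
                     ((-1.0, 0.0), (1.0, 0.0))]))  # -> True
```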


Terekhov A.V., CNRS Institute of Robotics and Intelligent Systems | Hayward V., CNRS Institute of Robotics and Intelligent Systems
Journal of Biomechanics | Year: 2011

The stick-to-slip transition of a fingertip in contact with a planar surface does not occur instantaneously. As the tangential load increases, portions of the skin adhere while others slip, giving rise to an evolution of the contact state, termed partial slip. We develop a quasi-static model that predicts that if the coefficient of kinetic friction is larger than the coefficient of static friction, then the stuck surface area diminishes as the tangential load increases until reaching a 'minimal adhesion surface area' where it vanishes abruptly. This phenomenon was observed in recently measured finger-slip image data (André et al., 2011) that were processed by an optic flow detection algorithm. We examined the results of 10 trials. Four of them exhibited the minimal adhesion surface area phenomenon, four of them did not, and two were inconclusive. © 2011 Elsevier Ltd.
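
For reference, the classical single-coefficient baseline that this model generalizes takes only a few lines to compute. In the Cattaneo-Mindlin solution for a Hertzian circular contact, the stick zone shrinks continuously to zero as the tangential load approaches full slip; the paper's point is that with a kinetic coefficient larger than the static one, the stuck area instead vanishes abruptly from a finite value. The load, radius, and friction values below are illustrative assumptions.

```python
import numpy as np

mu = 0.7          # single friction coefficient of the classical model
P = 1.0           # normal load, N
a = 5e-3          # Hertzian contact radius, m

# Cattaneo-Mindlin: under tangential load Q, slip starts at the rim and
# the circular stick zone of radius c shrinks as
#   c = a * (1 - Q / (mu * P))**(1/3),
# reaching zero continuously at full slip, Q = mu * P.
Q = np.linspace(0.0, mu * P, 200)
c = a * (1.0 - Q / (mu * P)) ** (1.0 / 3.0)
stuck_area = np.pi * c ** 2

# With mu_kinetic > mu_static, the model described above predicts this
# curve is cut short: the stuck area drops to zero abruptly from a
# finite 'minimal adhesion surface area' before the full-slip load.
```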


Guigon E., CNRS Institute of Robotics and Intelligent Systems
Journal of Neurophysiology | Year: 2010

Posture and movement are fundamental, intermixed components of motor coordination. Current approaches consider either that 1) movement is an active, anticipatory process and posture is a passive feedback process or 2) movement and posture result from a common passive process. In both cases, the presence of a passive component leaves control scarcely robust or stable in the face of transmission delays and low feedback gains. Here we show in a model that posture and movement could result from the same active process: an optimal feedback control that drives the body from its estimated state to its goal in a given (planning) time by acting through muscles on the insertion position (bias) of compliant linkages (tendons). Computer simulations show that iterating this process in the presence of noise indifferently produces realistic postural sway, fast goal-directed movements, and natural transitions between posture and movement. Copyright © 2010 The American Physiological Society.
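
The flavour of this unified view can be illustrated with a finite-horizon linear-quadratic regulator on a noisy point mass: a single feedback law generates the goal-directed movement and then maintains posture at the goal. This is a minimal sketch under standard LQR assumptions, not the paper's muscle-tendon model; the dynamics, costs, and noise level are invented for illustration.

```python
import numpy as np

dt, N = 0.01, 300
A = np.array([[1.0, dt], [0.0, 1.0]])  # point-mass position/velocity dynamics
B = np.array([[0.0], [dt]])            # force input
Qc = np.diag([100.0, 1.0])             # state cost: be at the goal, at rest
R = np.array([[0.001]])                # effort cost

# Backward Riccati recursion for the time-varying feedback gains.
S, gains = Qc.copy(), []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = Qc + A.T @ S @ (A - B @ K)
    gains.append(K)
gains.reverse()

rng = np.random.default_rng(1)
x = np.array([0.0, 0.0])               # start at rest at the origin
goal = np.array([0.2, 0.0])            # reach 20 cm away, end at rest
traj = []
for K in gains:
    u = -K @ (x - goal)                # one law for movement and posture
    x = A @ x + B @ u + rng.normal(0.0, 1e-4, 2)   # motor noise
    traj.append(x[0])
# traj shows a fast reach followed by a noisy but stable postural hold.
```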
