CNRS Institute of Robotics and Intelligent Systems

Paris, France


Dapogny A.,CNRS Institute of Robotics and Intelligent Systems | Bailly K.,CNRS Institute of Robotics and Intelligent Systems | Dubuisson S.,CNRS Institute of Robotics and Intelligent Systems
International Journal of Computer Vision | Year: 2017

Fully-automatic facial expression recognition (FER) is a key component of human behavior analysis. Performing FER from still images is a challenging task, as it involves handling large interpersonal morphological differences and as partial occlusions can occasionally happen. Furthermore, labelling expressions is a time-consuming process that is prone to subjectivity, so the full variability may not be covered by the training data. In this work, we propose to train random forests upon spatially-constrained random local subspaces of the face. The output local predictions form a categorical expression-driven high-level representation that we call local expression predictions (LEPs). LEPs can be combined to describe categorical facial expressions as well as action units (AUs). Furthermore, LEPs can be weighted by confidence scores provided by an autoencoder network. Such a network is trained to locally capture the manifold of the non-occluded training data in a hierarchical way. Extensive experiments show that the proposed LEP representation yields high descriptive power for categorical expressions and AU occurrence prediction, and leads to interesting perspectives towards the design of occlusion-robust and confidence-aware FER systems. © 2017 Springer Science+Business Media New York
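The confidence-weighted fusion of local predictions can be sketched in a few lines. The subspace layout, the reconstruction errors, and the exponential confidence mapping below are all invented for illustration; they stand in for the paper's random forests and hierarchical autoencoder, not reproduce them:

```python
import numpy as np

# Hypothetical setup: 4 local face subspaces, 3 expression classes.
# Each row is one local random forest's class-probability output (an LEP).
local_preds = np.array([
    [0.70, 0.20, 0.10],   # subspace around the mouth
    [0.60, 0.30, 0.10],   # subspace around the eyes
    [0.10, 0.10, 0.80],   # occluded subspace -> unreliable
    [0.65, 0.25, 0.10],
])

# Confidence scores, e.g. derived from an autoencoder's reconstruction
# error: low error -> the patch lies on the non-occluded training manifold.
recon_error = np.array([0.05, 0.08, 0.90, 0.06])
confidence = np.exp(-recon_error)          # map error to (0, 1]
weights = confidence / confidence.sum()

# Confidence-weighted fusion of the local expression predictions:
# the occluded subspace contributes little to the final decision.
fused = weights @ local_preds
predicted_class = int(np.argmax(fused))
print(fused, predicted_class)
```

With these numbers the occluded third subspace is down-weighted enough that the fused prediction follows the three reliable patches.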


Cully A.,University Pierre and Marie Curie | Cully A.,CNRS Institute of Robotics and Intelligent Systems | Clune J.,University of Wyoming | Tarapore D.,University Pierre and Marie Curie | And 6 more authors.
Nature | Year: 2015

Robots have transformed many industries, most notably manufacturing, and have the power to deliver tremendous benefits to society, such as in search and rescue, disaster response, health care and transportation. They are also invaluable tools for scientific exploration in environments inaccessible to humans, from distant planets to deep oceans. A major obstacle to their widespread adoption in more complex environments outside factories is their fragility. Whereas animals can quickly adapt to injuries, current robots cannot think outside the box to find a compensatory behaviour when they are damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. A promising approach to reducing robot fragility involves having robots learn appropriate behaviours in response to damage, but current techniques are slow even with small, constrained search spaces. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes in large search spaces without requiring self-diagnosis or pre-specified contingency plans. Before the robot is deployed, it uses a novel technique to create a detailed map of the space of high-performing behaviours. This map represents the robot's prior knowledge about what behaviours it can perform and their value. When the robot is damaged, it uses this prior knowledge to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a behaviour that compensates for the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways.
This new algorithm will enable more robust, effective, autonomous robots, and may shed light on the principles that animals use to adapt to injury. © 2015 Macmillan Publishers Limited. All rights reserved.
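A heavily simplified, hypothetical version of map-guided adaptation can convey the idea: a prior behaviour-performance map built before deployment decides which behaviour to try next on the damaged robot, and each trial corrects the map around the tested behaviour. The 1-D behaviour space, the Gaussian correction, and every constant below are stand-ins for the paper's high-dimensional map and probabilistic update, chosen only to make the loop runnable:

```python
import numpy as np

# Hypothetical prior "behaviour-performance map" built in simulation:
# 100 behaviours described by a 1-D feature, with predicted performance.
features = np.linspace(0.0, 1.0, 100)
prior = 1.0 - (features - 0.3) ** 2        # predicted performance

# True performance after (unmodelled) damage: the optimum has shifted.
def damaged_performance(i):
    return 1.0 - (features[i] - 0.7) ** 2 - 0.1

estimate = prior.copy()
best_perf, best_i = -np.inf, None

# Trial-and-error guided by the prior: always test the behaviour the
# current estimate ranks highest, then correct the map around it.
for trial in range(20):
    i = int(np.argmax(estimate))
    perf = damaged_performance(i)
    if perf > best_perf:
        best_perf, best_i = perf, i
    # Lower the estimates of the tested behaviour and its neighbours:
    # a crude stand-in for the probabilistic update used in the paper.
    dist = np.abs(features - features[i])
    estimate -= (estimate[i] - perf) * np.exp(-(dist / 0.05) ** 2)
    if best_perf > 0.85:            # stopping criterion
        break

print(best_i, round(best_perf, 3))
```

The loop needs only a handful of trials because each failed test discredits a whole neighbourhood of similar behaviours, which is the essence of why the paper's robots adapt in minutes rather than hours.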


Sahbani A.,CNRS Institute of Robotics and Intelligent Systems | El-Khoury S.,Ecole Polytechnique Federale de Lausanne | Bidaud P.,CNRS Institute of Robotics and Intelligent Systems
Robotics and Autonomous Systems | Year: 2012

This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands. Robotic grasping has been an active research subject for decades, and a great deal of effort has been spent on grasp synthesis algorithms. Existing surveys focus on the mechanics of grasping and finger-object contact interactions (Bicchi and Kumar, 2000) [12] or on robot hand design and control (Al-Gallaf et al., 1993) [70]. Robot grasp synthesis algorithms were reviewed in Shimoga (1996) [71], but substantial progress has since been made in applying learning techniques to the grasping problem. This overview covers analytical as well as empirical grasp synthesis approaches. © 2011 Elsevier B.V. All rights reserved.
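One classical analytical criterion treated by such surveys is force closure. In a deliberately reduced planar form (pure forces, ignoring torques and friction cones), it amounts to asking whether the origin lies in the convex hull of the contact force directions, which is a linear-programming feasibility question. The example below is that reduced illustration, not a full grasp-synthesis algorithm:

```python
import numpy as np
from scipy.optimize import linprog

def origin_in_hull(points):
    """Check whether the origin lies in the convex hull of `points`
    (rows): does some convex combination of them sum to zero?"""
    n, d = points.shape
    # Find lambda >= 0 with sum(lambda) = 1 and lambda @ points = 0.
    A_eq = np.vstack([points.T, np.ones(n)])
    b_eq = np.append(np.zeros(d), 1.0)
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.status == 0          # 0 = feasible solution found

# Four contact force directions surrounding the origin can balance any
# in-plane disturbance (force closure in this reduced sense) ...
closed = origin_in_hull(np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float))
# ... while directions confined to a half-plane cannot.
open_ = origin_in_hull(np.array([[1, 0], [0, 1], [1, 1]], float))
print(closed, open_)
```

A complete force-closure test would run the same feasibility check on full contact wrenches with friction-cone edges; the hull-membership core stays the same.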


Lesaint F.,CNRS Institute of Robotics and Intelligent Systems | Sigaud O.,CNRS Institute of Robotics and Intelligent Systems | Flagel S.B.,University of Michigan | Robinson T.E.,University of Michigan | Khamassi M.,CNRS Institute of Robotics and Intelligent Systems
PLoS Computational Biology | Year: 2014

Reinforcement Learning has greatly influenced models of conditioning, providing powerful explanations of acquired behaviour and underlying physiological observations. However, in recent autoshaping experiments in rats, variations in the form of Pavlovian conditioned responses (CRs) and in the associated dopamine activity have called into question the classical hypothesis that phasic dopamine activity corresponds to a reward prediction error-like signal arising from a classical Model-Free system, necessary for Pavlovian conditioning. Over the course of Pavlovian conditioning using food as the unconditioned stimulus (US), some rats (sign-trackers) come to approach and engage the conditioned stimulus (CS) itself - a lever - more and more avidly, whereas other rats (goal-trackers) learn to approach the location of food delivery upon CS presentation. Importantly, although both sign-trackers and goal-trackers learn the CS-US association equally well, only in sign-trackers does phasic dopamine activity show classical reward prediction error-like bursts. Furthermore, neither the acquisition nor the expression of a goal-tracking CR is dopamine-dependent. Here we present a computational model that can account for such individual variations. We show that a combination of a Model-Based system and a revised Model-Free system can account for the development of distinct CRs in rats. Moreover, we show that revising a classical Model-Free system to individually process stimuli by using factored representations can explain why classical dopaminergic patterns may be observed for some rats and not for others depending on the CR they develop. In addition, the model can account for other behavioural and pharmacological results obtained using the same, or similar, autoshaping procedures. Finally, the model makes it possible to draw a set of experimental predictions that may be verified in a modified experimental protocol.
We suggest that further investigation of factored representations in computational neuroscience studies may be useful. © 2014 Lesaint et al.
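The core mechanism can be sketched with a toy mixture of the two systems. The mixing weight, learning rate, and stimulus values below are illustrative, and the "factored" aspect is reduced to keeping one cached Model-Free value per stimulus rather than per state:

```python
import numpy as np

# Illustrative parameters, not fitted values from the paper.
alpha, omega = 0.2, 0.5     # MF learning rate, MB/MF mixing weight

mb_value = {"lever": 0.5, "magazine": 0.5}   # from a learned task model
mf_value = {"lever": 0.0, "magazine": 0.0}   # cached, stimulus-factored

def td_update(stimulus, reward):
    """Model-Free update of the approached stimulus only; returns the
    reward prediction error (the dopamine-like signal)."""
    delta = reward - mf_value[stimulus]
    mf_value[stimulus] += alpha * delta
    return delta

# A "sign-tracker" repeatedly approaches the lever, which is followed by
# food: the factored MF value of the lever itself grows, producing
# prediction-error bursts at the CS that decay as learning proceeds.
deltas = [td_update("lever", 1.0) for _ in range(20)]

def integrated_value(stimulus):
    """Weighted combination of the Model-Based and Model-Free systems."""
    return (1 - omega) * mb_value[stimulus] + omega * mf_value[stimulus]

print(round(integrated_value("lever"), 3), round(deltas[0], 3))
```

Because only the approached stimulus accrues Model-Free value, the same update rule yields lever-directed value for sign-trackers and magazine-directed value for goal-trackers, which is the intuition behind the factored representation.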


Wiertlewski M.,CEA Fontenay-aux-roses | Wiertlewski M.,CNRS Institute of Robotics and Intelligent Systems | Hayward V.,CNRS Institute of Robotics and Intelligent Systems
Journal of Biomechanics | Year: 2012

It was previously suggested that the mechanical properties of the fingertip could be characterized by elasticity from DC to about 100 Hz and by viscosity above this frequency. Using a specifically designed high-mobility probe, we accurately measured the impedance of the fingertips of seven participants under a variety of conditions relevant to purposeful touch. Direct measurements vindicated previous indirect observations. We also characterized the dependency of the fingertip impedance upon normal load, orientation, and time. © 2012 Elsevier Ltd.
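The elastic-below/viscous-above behaviour can be illustrated with a Kelvin-Voigt element (spring and damper in parallel), whose impedance magnitude falls as 1/f at low frequency and flattens at high frequency. The stiffness and the placement of the crossover at ~100 Hz below are assumed round numbers, not the paper's measured values:

```python
import numpy as np

k = 1000.0          # stiffness, N/m (assumed)
f_c = 100.0         # crossover frequency, Hz (the paper's ~100 Hz regime change)
b = k / (2 * np.pi * f_c)   # damping chosen so the crossover sits at f_c

def impedance_magnitude(f):
    """|Z(f)| = |b + k/(j*2*pi*f)| for a spring and damper in parallel,
    with impedance defined as force over velocity."""
    w = 2 * np.pi * f
    return np.abs(b + k / (1j * w))

# Elastic regime (spring term dominates) vs. viscous regime (damper term).
low, high = impedance_magnitude(10.0), impedance_magnitude(1000.0)
print(round(low, 2), round(high, 2))
```

On a log-log plot this model gives the characteristic -20 dB/decade slope below the crossover and a plateau above it, which is the signature the measurements were testing for.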


Wiertlewski M.,French Atomic Energy Commission | Lozada J.,French Atomic Energy Commission | Hayward V.,CNRS Institute of Robotics and Intelligent Systems
IEEE Transactions on Robotics | Year: 2011

The tactual scanning of five naturalistic textures was recorded with an apparatus that is capable of measuring the tangential interaction force with a high degree of temporal and spatial resolution. The resulting signal showed that the transformation from the geometry of a surface to the force of traction and, hence, to the skin deformation experienced by a finger is a highly nonlinear process. Participants were asked to identify simulated textures reproduced by stimulating their fingers with rapid, imposed lateral skin displacements as a function of net position. They performed the identification task with a high degree of success, yet not perfectly. The fact that the experimental conditions eliminated many aspects of the interaction, including low-frequency finger deformation, distributed information, as well as normal skin movements, shows that the nervous system is able to rely on only two cues: amplitude and spectral information. The examination of the spatial spectrograms of the imposed lateral skin displacement revealed that texture could be represented spatially, despite being sensed through time, and that these spectrograms were distinctively organized into what could be called spatial formants. This finding led us to speculate that the mechanical properties of the finger enable spatial information to be used for perceptual purposes in humans with no distributed sensing, which is a principle that could be applied to robots. © 2011 IEEE.
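A spatial spectrogram differs from the familiar kind only in that the short-window FFTs run over position rather than time. A minimal sketch, using a synthetic traction trace with an assumed 2 mm spatial period in place of the recorded data:

```python
import numpy as np

# Hypothetical traction trace sampled uniformly in *position*:
# a texture with a dominant 2 mm spatial period scanned over 100 mm.
dx = 0.1                                  # spatial step, mm
x = np.arange(0, 100, dx)                 # finger position, mm
traction = np.sin(2 * np.pi * x / 2.0)    # tangential force, arbitrary units

# Spatial spectrogram: windowed FFTs over position instead of time --
# the representation in which the "spatial formants" appear.
win, hop = 256, 64
window = np.hanning(win)
frames = [traction[i:i + win] * window
          for i in range(0, len(traction) - win + 1, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))     # |FFT| per position window
spatial_freqs = np.fft.rfftfreq(win, d=dx)     # cycles per mm

peak = spatial_freqs[np.argmax(spec.mean(axis=0))]
print(peak)   # dominant spatial frequency, close to 0.5 cycles/mm
```

For a real recording the finger position is not sampled uniformly, so the traction signal would first be resampled onto a uniform position grid before windowing.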


Chan-Hon-Tong A.,CEA Saclay Nuclear Research Center | Achard C.,CNRS Institute of Robotics and Intelligent Systems | Lucat L.,CEA Saclay Nuclear Research Center
Pattern Recognition | Year: 2014

Most research on human activity recognition does not take into account the temporal localization of actions. In this paper, a new method is designed to model both actions and their temporal domains. This method is based on a new Hough method which outperforms previously published ones on the honeybee dataset thanks to a deeper optimization of the Hough variables. Experiments are performed to select skeleton features adapted to this method and relevant to capture human actions. With these features, our pipeline improves state-of-the-art performance on the TUM dataset and outperforms baselines on several public datasets. © 2014 Elsevier Ltd.
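The general Hough-voting idea behind such methods can be sketched in a toy form: each frame's feature casts votes for the action centre at learned temporal offsets, and the accumulator's peak localizes the action. The per-frame classifier, the offset codebook, and all the numbers below are hand-made stand-ins, not the paper's learned model:

```python
import numpy as np

T = 100
labels = np.zeros(T, int)
labels[40:60] = 1                 # ground truth: action spans frames 40-59

# Codebook: frames of the action class "know" they sit up to +-9 frames
# from the centre, so each votes for a range of candidate centres.
offsets = np.arange(-9, 10)

accumulator = np.zeros(T)
for t in range(T):
    if labels[t] == 1:            # stand-in for a per-frame classifier
        for o in offsets:
            c = t + o
            if 0 <= c < T:
                accumulator[c] += 1.0   # in practice, a learned vote weight

predicted_centre = int(np.argmax(accumulator))
print(predicted_centre)
```

Real variants weight each vote by the classifier's confidence and learn the offset distribution from data; optimizing those vote weights is the kind of "Hough variable" the paper tunes.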


Terekhov A.V.,CNRS Institute of Robotics and Intelligent Systems | Hayward V.,CNRS Institute of Robotics and Intelligent Systems
Journal of Biomechanics | Year: 2011

The stick-to-slip transition of a fingertip in contact with a planar surface does not occur instantaneously. As the tangential load increases, portions of the skin adhere while others slip, giving rise to an evolution of the contact state, termed partial slip. We develop a quasi-static model that predicts that if the coefficient of kinetic friction is larger than the coefficient of static friction, then the stuck surface area diminishes as the tangential load increases until reaching a 'minimal adhesion surface area' where it vanishes abruptly. This phenomenon was observed in recently measured finger-slip image data (André et al., 2011) that were processed by an optic flow detection algorithm. We examined the results of 10 trials. Four of them exhibited the minimal adhesion surface area phenomenon, four of them did not, and two were inconclusive. © 2011 Elsevier Ltd.
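For background, the classical single-coefficient Cattaneo-Mindlin solution makes the stuck radius shrink smoothly to zero as the tangential load approaches the friction limit; the paper's contribution is that with distinct static and kinetic coefficients the stuck area instead vanishes abruptly at a minimal adhesion area. The sketch below computes only the smooth single-coefficient baseline, with invented constants:

```python
import numpy as np

mu = 0.7          # single friction coefficient (illustrative)
P = 1.0           # normal load, N
a = 5.0           # contact radius, mm

def stuck_radius(Q):
    """Cattaneo-Mindlin stuck radius c = a * (1 - Q/(mu*P))**(1/3)
    for tangential load Q below the full-slip limit mu*P."""
    return a * (1.0 - Q / (mu * P)) ** (1.0 / 3.0)

loads = np.array([0.0, 0.35, 0.6, 0.69])
radii = stuck_radius(loads)
print(np.round(radii, 3))   # monotonically shrinking stuck region
```

In the paper's two-coefficient model the corresponding curve terminates before reaching zero: at the minimal adhesion surface area the remaining stuck region disappears in a jump rather than continuously.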


Guigon E.,CNRS Institute of Robotics and Intelligent Systems
Journal of Neurophysiology | Year: 2010

Posture and movement are fundamental, intermixed components of motor coordination. Current approaches consider either that 1) movement is an active, anticipatory process and posture is a passive feedback process or 2) movement and posture result from a common passive process. In both cases, the presence of a passive component renders control scarcely robust and stable in the face of transmission delays and low feedback gains. Here we show in a model that posture and movement could result from the same active process: an optimal feedback control that drives the body from its estimated state to its goal in a given (planning) time by acting through muscles on the insertion position (bias) of compliant linkages (tendons). Computer simulations show that iteration of this process in the presence of noise indifferently produces realistic postural sway, fast goal-directed movements, and natural transitions between posture and movement. Copyright © 2010 The American Physiological Society.
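The flavour of "posture and movement from one active process" can be illustrated with a generic finite-horizon optimal feedback controller: a point mass is driven to a goal within a fixed planning time, and the same noisy feedback loop produces both the reach and the residual sway. This is a standard LQR sketch with invented costs, not the paper's muscle/tendon implementation:

```python
import numpy as np

dt, T = 0.01, 100                       # step, horizon (1 s "planning time")
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [dt]])             # control: acceleration
Q = np.zeros((2, 2))                    # no running state cost
R = np.array([[1e-4]])                  # effort cost
Qf = np.diag([1e4, 1e2])                # reach the goal, then be still

# Backward Riccati recursion for the time-varying feedback gains.
S = Qf
gains = []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = Q + A.T @ S @ A - A.T @ S @ B @ K
    gains.append(K)
gains.reverse()                         # put gains in forward-time order

# Forward pass with motor noise: one law yields both the fast
# goal-directed movement and noise-driven fluctuation around the goal.
rng = np.random.default_rng(1)
x, goal = np.array([0.0, 0.0]), np.array([0.1, 0.0])   # 10 cm reach
for K in gains:
    u = -K @ (x - goal)
    x = A @ x + (B @ u).ravel() + rng.normal(0.0, 1e-4, 2)
print(np.round(x, 3))
```

The point of the sketch is that no separate postural controller is switched in at the end of the reach: the same time-varying feedback law handles both phases, which mirrors the paper's claim.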


Rigoux L.,CNRS Institute of Robotics and Intelligent Systems | Guigon E.,CNRS Institute of Robotics and Intelligent Systems
PLoS Computational Biology | Year: 2012

Costs (e.g. energetic expenditure) and benefits (e.g. food) are central determinants of behavior. In ecology and economics, they are combined to form a utility function which is maximized to guide choices. This principle is widely used in neuroscience as a normative model of decision and action, but current versions of this model fail to consider how decisions are actually converted into actions (i.e. the formation of trajectories). Here, we describe an approach where decision making and motor control are optimal, iterative processes derived from the maximization of the discounted, weighted difference between expected rewards and foreseeable motor efforts. The model accounts for decision making in cost/benefit situations, and detailed characteristics of control and goal tracking in realistic motor tasks. As a normative construction, the model is relevant to address the neural bases and pathological aspects of decision making and motor control. © 2012 Rigoux, Guigon.
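The discounted reward-minus-effort trade-off can be made concrete with a toy duration-selection problem: moving faster reaches the reward sooner (less discounting) but costs more effort. The reward value, discount rate, and effort law below are illustrative choices, not the paper's fitted model:

```python
import numpy as np

reward = 10.0        # value of the goal
gamma = 0.5          # discount rate, 1/s: later reward is worth less
effort_scale = 0.8   # weight of the motor cost

durations = np.linspace(0.2, 3.0, 281)    # candidate movement times, s
utility = (reward * np.exp(-gamma * durations)     # discounted reward
           - effort_scale / durations ** 2)        # effort of fast moves

best_T = durations[np.argmax(utility)]
print(round(best_T, 2))
```

The maximizer sits strictly inside the candidate range: the optimal movement is neither as fast as possible (effort explodes) nor as slow as possible (the discounted reward withers), which is the qualitative prediction the model shares with observed movement durations.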
