Matsubara T., Nara Institute of Science and Technology | Hyon S.-H., Ritsumeikan University | Morimoto J., ATR CNS
IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings | Year: 2010

In this paper, we propose a novel concept of movement primitives called Stylistic Dynamic Movement Primitives (SDMPs) for motor learning and control in humanoid robotics. In the SDMPs, the diversity of styles in human motion observed through multiple demonstrations can be compactly encoded in a single movement primitive, which allows the style of motion sequences generated from the primitive to be manipulated by a control variable called a style parameter. Focusing on discrete movements, a model of the SDMPs is presented as an extension of the Dynamic Movement Primitives (DMPs) proposed by Ijspeert et al. [1]. A novel procedure for learning SDMPs from multiple demonstrations that include a diversity of motion styles is also described. We present two practical applications of the SDMPs: stylistic table tennis swings and obstacle avoidance with an anthropomorphic manipulator. ©2010 IEEE.
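The mechanism the abstract describes can be illustrated with a minimal sketch: a one-dimensional discrete DMP whose forcing-term weights are a linear function of a style parameter, so that changing the style vector changes the shape of the generated motion while the attractor dynamics still converge to the goal. This is an illustrative reconstruction under assumed gains and basis construction, not the authors' implementation; all names here are hypothetical.

```python
import numpy as np

def dmp_rollout(g, y0, style, W, tau=1.0, alpha=25.0, beta=6.25,
                alpha_x=8.0, dt=0.001, T=1.0):
    """Integrate a 1-D discrete DMP. W has shape (n_styles, n_basis);
    `style` are linear coefficients selecting a point in style space."""
    n_basis = W.shape[1]
    # Gaussian basis centers spaced along the decaying canonical variable x
    centers = np.exp(-alpha_x * np.linspace(0.0, T, n_basis))
    widths = 1.0 / (np.diff(centers, append=centers[-1]) ** 2 + 1e-6)
    w = style @ W                       # style-dependent forcing weights
    y, dy, x = y0, 0.0, 1.0
    traj = []
    for _ in range(round(T / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)
        # forcing term vanishes as x -> 0, so the goal attractor dominates
        f = (psi @ w) / (psi.sum() + 1e-10) * x * (g - y0)
        ddy = (alpha * (beta * (g - y) - tau * dy) + f) / tau ** 2
        dy += ddy * dt
        y += dy * dt
        x += -alpha_x * x / tau * dt    # canonical system
        traj.append(y)
    return np.array(traj)
```

With zero forcing weights the rollout reduces to a critically damped point attractor reaching the goal `g`; nonzero style vectors shape the transient without affecting convergence.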


Hyon S.-H., Ritsumeikan University | Morimoto J., ATR CNS
Neural Networks | Year: 2011

Learning from demonstration has been shown to be a suitable approach for learning control policies (CPs). However, most previous studies learn CPs from a single demonstration, which results in limited scalability and insufficient generalization toward a wide range of applications in real environments. This paper proposes a novel approach to learning highly scalable CPs of basic movement skills from multiple demonstrations. In contrast to conventional studies with a single demonstration, i.e., dynamic movement primitives (DMPs), our approach efficiently encodes multiple demonstrations by shaping a parametric-attractor landscape in a set of differential equations. Assuming a certain similarity among the multiple demonstrations, our approach learns the parametric-attractor landscape by extracting a small number of common factors from the demonstrations. The learned CPs allow the synthesis of novel movements with novel motion styles by specifying the linear coefficients of the bases as parameter vectors, without losing useful properties of the DMPs such as stability and robustness against perturbations. For both discrete and rhythmic movement skills, we present a unified procedure for learning a parametric-attractor landscape from multiple demonstrations. The feasibility and greatly extended scalability of the DMPs are demonstrated on an actual dual-arm robot. © 2011 Elsevier Ltd.
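The learning step described above — extracting a small number of common factors from per-demonstration forcing weights and synthesizing new weights from a style parameter vector — can be sketched as follows. PCA via SVD stands in here for whatever factorization the paper actually uses; `learn_style_bases`, `synthesize_weights`, and the two-factor assumption are all illustrative.

```python
import numpy as np

def learn_style_bases(demo_weights, n_factors=2):
    """demo_weights: (n_demos, n_basis) array, one forcing-weight vector per
    demonstration (e.g. fitted by ordinary DMP regression). Returns the mean
    weights, the common factors, and each demo's coordinates in style space."""
    mean_w = demo_weights.mean(axis=0)
    centered = demo_weights - mean_w
    # principal directions of variation across demonstrations
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    bases = vt[:n_factors]             # (n_factors, n_basis)
    styles = centered @ bases.T        # (n_demos, n_factors)
    return mean_w, bases, styles

def synthesize_weights(mean_w, bases, style):
    """A novel style parameter vector yields novel forcing weights."""
    return mean_w + style @ bases
```

Interpolating or extrapolating the style vectors then produces forcing weights (and hence movements) not present in any single demonstration.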


Matsubara T., Nara Institute of Science and Technology | Hyon S.-H., ATR CNS | Hyon S.-H., Ritsumeikan University | Morimoto J., ATR CNS
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

This paper proposes a novel approach to learning highly scalable Control Policies (CPs) of basic movement skills from multiple demonstrations. In contrast to conventional studies with a single demonstration, i.e., Dynamic Movement Primitives (DMPs) [1], our approach efficiently encodes multiple demonstrations by shaping a parametric-attractor landscape in a set of differential equations. This allows the learned CPs to synthesize novel movements with novel motion styles by specifying the linear coefficients of the bases as parameter vectors, without losing useful properties of DMPs such as stability and robustness against perturbations. For both discrete and rhythmic movement skills, we present a unified procedure for learning a parametric-attractor landscape from multiple demonstrations. The feasibility and greatly extended scalability of DMPs are demonstrated on an actual dual-arm robot. © 2010 Springer-Verlag.


Hyon S.-H., ATR CNS | Hyon S.-H., Ritsumeikan University | Morimoto J., ATR CNS
Neural Networks | Year: 2012

The ability to predict human motion is crucial in several contexts, such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion, including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm is derived to estimate both the state variables and the style parameter of the model from non-stationary, unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, both adaptation and prediction take less than 15 ms per observation. Our real-time stylistic prediction was evaluated on human walking, running, and jumping behaviors. © 2011 Elsevier Ltd.
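A toy version of this idea can be sketched as follows: a low-dimensional phase drives a style-weighted periodic observation model, phase and style are nudged online to reduce the one-step prediction error on each streaming observation, and rolling the phase forward yields a predicted future sequence. The model form, gradient-style update rule, and all function names are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def basis(phase, n_basis=8):
    """Periodic observation bases for a rhythmic motion (e.g. walking)."""
    k = np.arange(1, n_basis // 2 + 1)
    return np.concatenate([np.sin(k * phase), np.cos(k * phase)])

def adapt_step(phase, omega, style, W, y_obs, lr=0.5, dt=0.01):
    """One online update: advance the phase at nominal rate omega, then
    nudge phase and style to reduce the error on the new observation."""
    phase = (phase + omega * dt) % (2 * np.pi)
    err = y_obs - style @ W @ basis(phase)
    eps = 1e-4
    # numerical sensitivity of the prediction to the phase
    dpred = style @ W @ (basis(phase + eps) - basis(phase - eps)) / (2 * eps)
    phase = (phase + lr * dt * err * dpred) % (2 * np.pi)
    style = style + lr * dt * err * (W @ basis(phase))
    return phase, style

def predict(phase, omega, style, W, horizon, dt=0.01):
    """Roll the phase forward to predict a future motion sequence."""
    return np.array([style @ W @ basis((phase + omega * dt * t) % (2 * np.pi))
                     for t in range(1, horizon + 1)])
```

Handling incomplete observations, as the abstract mentions, would amount to computing the error only over the observed dimensions.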


Matsubara T., ATR CNS | Matsubara T., Nara Institute of Science and Technology | Uchikata A., ATR CNS | Uchikata A., Nara Institute of Science and Technology | Morimoto J., ATR CNS
IEEE International Conference on Intelligent Robots and Systems | Year: 2012

We propose an adaptive walking assistance strategy to control an exoskeleton robot. In our proposed framework, we explicitly consider the following: 1) the diversity of user motions (style) and 2) the interactions among a user, a robot, and an environment. To spatially coordinate a wide variety of user motions and robot behaviors, we estimated style parameters from observed user movements. To temporally coordinate the interactions among the user, the robot, and the environment, we synchronized the phases of these three systems with a coupled oscillator model. The estimated style parameters and the phase of the user motion can be used to predict future user movements. We investigated how movement prediction and phase synchronization can be beneficial for controlling an exoskeleton robot. To evaluate our adaptive walking assistance strategy, we developed simulated user and exoskeleton models. Physical interactions among the user, exoskeleton, and ground models were introduced into the simulated system. We show that the torque required for the user's walking movement was reduced by around 40% when our proposed method was used to control the exoskeleton model. © 2012 IEEE.
