ATR Computational Neuroscience Labs

Kyoto, Japan

Morimoto J.,ATR Computational Neuroscience Labs. | Noda T.,ATR Computational Neuroscience Labs. | Hyon S.-H.,ATR Computational Neuroscience Labs. | Hyon S.-H.,Ritsumeikan University
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2012

In this study, we propose a control method for movement assistive robots that uses measured signals from human users. Some wearable assistive robots have mechanisms that can be adjusted to human kinematics (e.g., adjustable link lengths). However, since the human body has a complicated joint structure, it is generally difficult to design an assistive robot that mechanically fits human users well. We focus on developing a control algorithm that generates movements of a wearable assistive robot corresponding to those of a human user even when the kinematic structures of the robot and the user differ. We first extract the latent kinematic relationship between a human user and the assistive robot. The extracted relationship is then used to control the assistive robot by converting human behavior into corresponding joint-angle trajectories of the robot. The proposed approach is evaluated on a simulated robot model and on our newly developed exoskeleton robot. © 2012 IEEE.
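
The abstract does not spell out how the latent kinematic relationship is represented. As an illustration only, the sketch below fits a simple linear map from human joint angles to robot joint angles using paired demonstration data and then converts a new human posture into robot joint-angle commands; the dimensions, synthetic data, and ridge-regression choice are assumptions, not the authors' actual model.

```python
import numpy as np

# Hypothetical paired data: human postures (7 joints) recorded together with
# robot postures (5 joints) that realize the same task-level motion.
rng = np.random.default_rng(0)
human_demo = rng.standard_normal((200, 7))          # 200 samples x 7 human joints
true_map = rng.standard_normal((7, 5))
robot_demo = human_demo @ true_map + 0.01 * rng.standard_normal((200, 5))

# Extract a (here: linear) kinematic relationship by ridge regression.
lam = 1e-3
A = np.linalg.solve(human_demo.T @ human_demo + lam * np.eye(7),
                    human_demo.T @ robot_demo)      # 7 x 5 mapping matrix

def human_to_robot(q_human):
    """Convert a measured human posture into robot joint-angle commands."""
    return q_human @ A

q_human_now = rng.standard_normal(7)                 # new measurement
q_robot_cmd = human_to_robot(q_human_now)
print(q_robot_cmd)
```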


Gams A.,Jozef Stefan Institute | Ude A.,Jozef Stefan Institute | Morimoto J.,ATR Computational Neuroscience Labs
IEEE International Conference on Intelligent Robots and Systems | Year: 2015

Human-demonstrated motion transferred to a robotic platform often needs to be adapted to the current state of the environment or to modified task requirements. Adaptation, i.e., learning of a modified behavior, needs to be fast to enable quick deployment of the robot, whether in industry or in future household-assistant tasks. In this paper we show how to accelerate trajectory adaptation based on learning of coupling terms in the framework of dynamic movement primitives (DMPs). Our method applies ideas from feedback error learning to iterative learning control (ILC). By taking into account the actual physical constraints of the synchronous motion - through synchronization of both positions (or forces) and velocities - it is not only a more faithful representation of real-world processes, but it also accelerates convergence. To show the applicability of the approach in the DMP framework, we tested it on a formulation that encodes an initial discrete motion followed by a periodic behavior, all in a single system. Modifications of the original discrete-periodic formulation now also allow the use of the DMP temporal scaling property. We also show how the DMP coupling can be implemented in joint space, whereas the measured forces and previous approaches remained in task space. We applied our approach to an example dual-arm synchronization task on the Sarcos humanoid robot CB-i. © 2015 IEEE.
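
A minimal sketch of an iterative-learning-control style update of a DMP coupling term, under the assumption that the coupling term is stored as a time-indexed array and reshaped from the position and velocity tracking errors of the previous repetition. The gains, error model, and update law here are illustrative; the paper's actual feedback-error-learning formulation is not reproduced.

```python
import numpy as np

T = 500                      # samples per repetition (hypothetical)
kp = 0.5                     # learning gain on position error
kv = 0.1                     # learning gain on velocity error (couples velocities too)

coupling = np.zeros(T)       # coupling term added to the DMP transformation system

def run_repetition(coupling):
    """Placeholder for executing one repetition on the robot and returning
    position and velocity tracking errors (synthetic response here)."""
    t = np.linspace(0, 1, T)
    e_pos = 0.2 * np.sin(2 * np.pi * t) - 0.4 * coupling
    e_vel = np.gradient(e_pos, t)
    return e_pos, e_vel

for rep in range(20):
    e_pos, e_vel = run_repetition(coupling)
    # ILC-style update: reuse the previous repetition's error to shape the
    # coupling term applied during the next repetition.
    coupling = coupling + kp * e_pos + kv * e_vel

print("final RMS position error:", np.sqrt(np.mean(e_pos ** 2)))
```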


Katori Y.,University of Tokyo | Lang E.J.,New York University | Onizuka M.,Nara Institute of Science and Technology | Kawato M.,University of Tokyo | And 2 more authors.
International Journal of Bifurcation and Chaos | Year: 2010

Inferior olive (IO) neurons project to the cerebellum and contribute to motor control. They can show intriguing spatio-temporal dynamics with rhythmic and synchronized spiking. IO neurons are connected to their neighbors via gap junctions to form an electrically coupled network, and this coupling is considered to contribute to the characteristic dynamics of this nucleus. Here, we demonstrate that a gap junction-coupled network composed of simple conductance-based model neurons (a simplified version of a Hodgkin-Huxley type neuron) reproduces important aspects of IO activity. The simplified phenomenological model neuron facilitated the analysis of the single-cell and network properties of the IO while still quantitatively reproducing the spiking patterns of complex spike activity observed by simultaneous recording in anesthetized rats. The results imply that both the intrinsic bistability of each neuron and the gap junction coupling among neurons play key roles in the generation of the spatio-temporal dynamics of IO neurons. © World Scientific Publishing Company.
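
The authors' simplified conductance-based model is not reproduced here. As a generic illustration of the idea, the sketch below couples two FitzHugh-Nagumo-style two-variable neurons through a gap-junction current proportional to the voltage difference and integrates them with the Euler method; all parameter values are hypothetical.

```python
import numpy as np

dt, steps = 0.05, 4000
g_gap = 0.05                      # gap-junction conductance (hypothetical)
I_ext = np.array([0.5, 0.3])      # different bias currents for the two cells

v = np.array([-1.0, -0.5])        # membrane-potential-like variables
w = np.zeros(2)                   # slow recovery variables
v_trace = np.empty((steps, 2))

for k in range(steps):
    # Gap-junction current: each cell is pulled toward its neighbour's voltage.
    i_gap = g_gap * (v[::-1] - v)
    dv = v - v ** 3 / 3 - w + I_ext + i_gap
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    v += dt * dv
    w += dt * dw
    v_trace[k] = v

# Crude synchrony measure: correlation of the two voltage traces.
print("voltage correlation:", np.corrcoef(v_trace.T)[0, 1])
```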


Noda T.,ATR Computational Neuroscience Labs | Teramae T.,ATR Computational Neuroscience Labs | Ugurlu B.,ATR Computational Neuroscience Labs | Morimoto J.,ATR Computational Neuroscience Labs
IEEE International Conference on Intelligent Robots and Systems | Year: 2014

In this paper, we introduce our ongoing work on the development of an upper-body exoskeleton robot driven by a pneumatic-electric hybrid actuation system. Since the limb of an exoskeleton robot needs small inertia to achieve agility and safety, using a heavy actuator is not preferable. At the same time, we need backdrivable actuators that can generate sufficiently large torques to support user movements. These two requirements may seem contradictory. To cope with this design problem, we use a hybrid actuation system composed of Pneumatic Artificial Muscles (PAMs) and small electromagnetic motors. Although we and other research groups have already demonstrated the advantages of hybrid actuation, we newly propose the use of Bowden cables in a hybrid actuator to transmit the force generated by the PAMs to the joints of our exoskeleton robot, so that we can design a compact upper limb with small inertia. In addition, small electric motors are mechanically connected to the joints to compensate for the uncertainty introduced by the PAM dynamics and the Bowden cables. We demonstrate that the proposed joint is backdrivable while generating large torques in a gravity compensation task, both in a one-DOF system with a dummy weight and in the right arm of the upper-body exoskeleton with a mannequin arm. We also show that the right-arm exoskeleton can be moved using a torque input extracted from goniometer measurements. © 2014 IEEE.
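
As a rough illustration of how a pneumatic-electric hybrid joint might share the gravity-compensation load, the sketch below assigns the slow feedforward gravity torque to the PAM channel and a small feedback correction to the electric motor. The link parameters, gains, and the linear pressure-to-torque model are assumptions for illustration, not the paper's actual controller.

```python
import numpy as np

m, l, g = 2.0, 0.3, 9.81        # hypothetical link mass [kg] and CoM distance [m]
k_pam = 5.0                      # assumed Nm of torque per unit pressure command
kp, kd = 8.0, 0.5                # motor feedback gains

def hybrid_command(theta, theta_dot, theta_ref):
    """Split the joint torque between PAM feedforward and motor feedback."""
    tau_gravity = m * g * l * np.cos(theta)                  # feedforward gravity torque
    pressure_cmd = tau_gravity / k_pam                       # slow, high-torque PAM channel
    tau_motor = kp * (theta_ref - theta) - kd * theta_dot    # fast, low-torque motor channel
    return pressure_cmd, tau_motor

p, tau = hybrid_command(theta=0.4, theta_dot=0.0, theta_ref=0.5)
print(f"PAM pressure command: {p:.2f}, motor torque: {tau:.2f} Nm")
```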


Morimoto J.,ATR Computational Neuroscience Labs | Kawato M.,ATR Brain Information Communication Research Laboratory Group
Journal of the Royal Society Interface | Year: 2015

In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the 'understanding the brain by creating the brain' approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain-machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. © 2015 The Author(s) Published by the Royal Society. All rights reserved.


Tangkaratt V.,Tokyo Institute of Technology | Mori S.,Tokyo Institute of Technology | Zhao T.,Tokyo Institute of Technology | Morimoto J.,ATR Computational Neuroscience Labs | Sugiyama M.,Tokyo Institute of Technology
Neural Networks | Year: 2014

The goal of reinforcement learning (RL) is to let an agent learn an optimal control policy in an unknown environment so that future expected rewards are maximized. The model-free RL approach directly learns the policy based on data samples. Although using many samples tends to improve the accuracy of policy learning, collecting a large number of samples is often expensive in practice. On the other hand, the model-based RL approach first estimates the transition model of the environment and then learns the policy based on the estimated transition model. Thus, if the transition model is accurately learned from a small amount of data, the model-based approach is a promising alternative to the model-free approach. In this paper, we propose a novel model-based RL method by combining a recently proposed model-free policy search method called policy gradients with parameter-based exploration and the state-of-the-art transition model estimator called least-squares conditional density estimation. Through experiments, we demonstrate the practical usefulness of the proposed method. © 2014 Elsevier Ltd.
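
A minimal sketch of the model-based idea on a toy one-dimensional problem: policy parameters are sampled from a Gaussian hyper-distribution (as in PGPE), returns are evaluated by rolling out a learned transition model rather than the real environment, and the hyper-parameters are updated with the PGPE gradient. The transition model here is plain least squares, standing in for the least-squares conditional density estimator used in the paper; all dynamics and rewards are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- learn a crude linear transition model s' = a*s + b*u from a few real samples ---
s = rng.uniform(-1, 1, 100)
u = rng.uniform(-1, 1, 100)
s_next = 0.9 * s + 0.5 * u + 0.01 * rng.standard_normal(100)    # unknown true dynamics
X = np.column_stack([s, u])
a_hat, b_hat = np.linalg.lstsq(X, s_next, rcond=None)[0]

def rollout_return(theta, horizon=20):
    """Roll out a linear policy u = theta*s inside the learned model."""
    state, ret = 1.0, 0.0
    for _ in range(horizon):
        action = theta * state
        state = a_hat * state + b_hat * action
        ret += -(state ** 2)                       # reward: keep the state near zero
    return ret

# --- PGPE: Gaussian hyper-distribution over the policy parameter ---
mu, sigma, lr = 0.0, 0.5, 0.05
for it in range(200):
    thetas = mu + sigma * rng.standard_normal(10)  # sample policy parameters
    returns = np.array([rollout_return(t) for t in thetas])
    b = returns.mean()                             # baseline
    mu += lr * np.mean((returns - b) * (thetas - mu)) / sigma ** 2
    sigma += lr * np.mean((returns - b) * ((thetas - mu) ** 2 - sigma ** 2)) / sigma ** 3
    sigma = max(sigma, 1e-3)

print("learned policy gain:", mu)
```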


Noda T.,ATR Computational Neuroscience Labs | Furukawa J.-I.,ATR Computational Neuroscience Labs | Furukawa J.-I.,Osaka University | Teramae T.,ATR Computational Neuroscience Labs | And 3 more authors.
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2013

This study proposes the design of an electromyography (EMG)-based force feedback controller that explicitly considers human-robot interaction for an exoskeletal assistive robot. Conventional approaches have only considered a one-directional mapping from EMG to the control input of the assistive robot. However, EMG and the force generated by the assistive robot interfere with each other; for example, the amplitude of the EMG decreases when limb movements are assisted by the robot. In our proposed method, we first derive a nonlinear mapping from EMG signals to muscle force to estimate the human joint torque, and convert it into an assistive force using a human musculoskeletal model and the robot kinematic model. Additionally, the feedforward interaction torque is fed back into the torque controller to acquire the necessary loads. To validate the feasibility of the proposed method, an assistive one-DOF system was developed both as real equipment and as a simulator. We compared the proposed method with conventional approaches on both the simulated and the real one-DOF systems. As a result, we found that the proposed model was able to estimate the necessary torque adequately and achieve stable human-robot interaction. © 2013 IEEE.
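
The paper's musculoskeletal mapping is not reproduced here. The sketch below only illustrates the common EMG-to-torque pipeline: rectify and low-pass filter raw EMG to get an activation envelope, pass the activation through an assumed exponential nonlinearity to get muscle force, and multiply by a moment arm to obtain an estimated joint torque from which an assist torque is scaled. All constants and the synthetic EMG signal are hypothetical.

```python
import numpy as np

fs = 1000.0                       # sampling rate [Hz]
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(2)
raw_emg = 0.3 * np.sin(2 * np.pi * 1.0 * t).clip(0) * rng.standard_normal(t.size)

# 1) Rectify and low-pass filter (first-order IIR) to get an activation envelope.
alpha = 0.01
activation = np.zeros_like(raw_emg)
for i in range(1, raw_emg.size):
    activation[i] = (1 - alpha) * activation[i - 1] + alpha * abs(raw_emg[i])

# 2) Nonlinear activation-to-force mapping (assumed exponential shape).
A, F_max = -2.0, 300.0            # shape parameter and maximum muscle force [N]
force = F_max * (np.exp(A * activation) - 1) / (np.exp(A) - 1)

# 3) Joint torque via an assumed constant moment arm, scaled to an assist torque.
moment_arm = 0.03                 # [m]
tau_human = force * moment_arm
assist_ratio = 0.5
tau_assist = assist_ratio * tau_human

print("peak estimated human torque [Nm]:", tau_human.max())
```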


Noda T.,ATR Computational Neuroscience Labs | Sugimoto N.,Center for Information and Neural Networks | Furukawa J.,Osaka University | Sato M.-A.,Ritsumeikan University | And 2 more authors.
IEEE-RAS International Conference on Humanoid Robots | Year: 2012

In this paper, we introduce our attempt to develop an assistive robot system that can contribute to brain-machine interface (BMI) rehabilitation. For BMI rehabilitation, we construct an electroencephalogram (EEG)-exoskeleton robot system, in which the exoskeleton robot is connected to the EEG system so that users can control the exoskeleton robot with their brain activities. We use a classification method that takes covariance matrices of measured EEG signals as inputs to decode brain activities. The decoded brain activities are used to control exoskeleton movements. In this study, we consider assisting the stand-up movement, which is one of the most frequently performed movements in daily life and also a standard movement in rehabilitation training. To assist the stand-up movement, we develop a force control model that takes the dynamics of the tendon string into account for the pneumatic-electric hybrid actuation system used in our exoskeleton robot. The results show that the exoskeleton robot successfully assisted user stand-up movements, where the assist system was activated by the decoded brain activities. © 2012 IEEE.
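
The decoding details are not given in the abstract. As a generic illustration of classifying EEG trials through their covariance matrices, the sketch below computes a per-trial spatial covariance and assigns the label of the nearest class-mean covariance under the log-Euclidean metric; the synthetic data, channel count, and metric choice are assumptions, not the authors' classifier.

```python
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_samples = 8, 256

def trial_cov(x):
    """Spatial covariance of one EEG trial (channels x samples)."""
    x = x - x.mean(axis=1, keepdims=True)
    return x @ x.T / x.shape[1]

def sym_log(c):
    """Matrix logarithm of a symmetric positive-definite covariance."""
    w, v = np.linalg.eigh(c)
    return v @ np.diag(np.log(w)) @ v.T

# Synthetic training trials for two mental states ("rest" vs "motor imagery").
def make_trials(scale, n=20):
    mix = np.eye(n_channels) + scale * np.diag(rng.uniform(0, 1, n_channels))
    return [trial_cov(mix @ rng.standard_normal((n_channels, n_samples))) for _ in range(n)]

covs = {"rest": make_trials(0.1), "imagery": make_trials(1.0)}
# Log-Euclidean class means.
means = {k: np.mean([sym_log(c) for c in cs], axis=0) for k, cs in covs.items()}

def classify(trial):
    """Assign the label of the nearest class-mean covariance (log-Euclidean metric)."""
    c = sym_log(trial_cov(trial))
    return min(means, key=lambda k: np.linalg.norm(c - means[k], "fro"))

test_trial = (np.eye(n_channels) * 1.8) @ rng.standard_normal((n_channels, n_samples))
print("decoded state:", classify(test_trial))
```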


Furukawa J.-I.,ATR Computational Neuroscience Labs | Furukawa J.-I.,Osaka University | Noda T.,ATR Computational Neuroscience Labs | Teramae T.,ATR Computational Neuroscience Labs | Morimoto J.,ATR Computational Neuroscience Labs
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2015

In this paper, we propose a method for estimating human joint movements from measured EMG signals for assistive robot control. We focus on how to estimate joint movements using multiple EMG electrodes even under sensor failure. In real-world applications, EMG electrodes might become disconnected or detached from the skin surface. If we consider EMG-based control of assistive robots, such sensor failures lead to significant errors in the estimation of user joint movements. To cope with these failures, we propose a state estimation model that takes uncertain observations into account. Sensor channel anomalies are detected by checking the covariance of the EMG signals measured by the multiple electrodes. To validate the proposed control framework, we artificially disconnect an EMG electrode or detach one side of an EMG probe from the skin surface during elbow joint movement estimation. We show proper control of a one-DOF exoskeleton robot based on the joint torque estimated with our proposed method even when one EMG electrode has a sensor problem, whereas a standard method with no tolerance to uncertain observations was unable to deal with these fault situations. Furthermore, the errors of the joint torque estimated with our proposed method were smaller than those of the standard method or of a method with a conventional sensor fault detection algorithm. © 2015 IEEE.
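
One way to realise "uncertain observations" (not necessarily the authors' formulation) is a Kalman-style filter in which a channel flagged as anomalous, e.g. because it stops co-varying with the other channels and its own variance collapses, has its measurement-noise variance inflated so that it is effectively ignored. The sketch below illustrates this on two synthetic EMG channels observing one underlying activation, with all signals and thresholds assumed.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 400
true_act = 0.5 + 0.4 * np.sin(np.linspace(0, 16 * np.pi, T))       # latent muscle activation

# Two EMG channels observe the same activation; channel 1 detaches halfway through.
obs = np.column_stack([true_act + 0.05 * rng.standard_normal(T),
                       true_act + 0.05 * rng.standard_normal(T)])
obs[T // 2:, 1] = 0.02 * rng.standard_normal(T - T // 2)           # detached: pure sensor noise

calib_cov = true_act.var()              # expected cross-channel covariance when both are healthy
R_nominal, R_inflated = 0.05 ** 2, 1e3  # measurement noise: normal vs "ignore this channel"
q = 1e-3                                # random-walk process noise
window = 50

x, P = 0.5, 1.0                         # state estimate and its variance
est = np.empty(T)
for t in range(T):
    P += q                              # predict (random-walk model of the activation)

    # Anomaly check: a healthy channel should co-vary with the other channel.
    lo = max(0, t - window)
    if t > window:
        c = np.cov(obs[lo:t + 1].T)
        anomalous = np.array([abs(c[0, 1]) < 0.5 * calib_cov] * 2) & \
                    (np.diag(c) < 0.5 * calib_cov)
    else:
        anomalous = np.array([False, False])
    R = np.where(anomalous, R_inflated, R_nominal)

    # Sequential scalar Kalman updates, one per channel.
    for ch in range(2):
        K = P / (P + R[ch])
        x += K * (obs[t, ch] - x)
        P *= (1 - K)
    est[t] = x

print("RMS error after failure:", np.sqrt(np.mean((est[T // 2:] - true_act[T // 2:]) ** 2)))
```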


PubMed | Jozef Stefan Institute and ATR Computational Neuroscience Labs
Type: Journal Article | Journal: PloS one | Year: 2016

In this paper we propose an exoskeleton control method for adaptive learning of assistive joint torque profiles in periodic tasks. We use human muscle activity as feedback to adapt the assistive joint torque behaviour such that the muscle activity is minimised. The user can then relax while the exoskeleton takes over the task execution. If the task is altered and the existing assistive behaviour becomes inadequate, the exoskeleton gradually adapts to the new task execution so that the increased muscle activity caused by the new desired task can be reduced. The advantage of the proposed method is that it does not require biomechanical or dynamical models. Our learning system uses Dynamical Movement Primitives (DMPs) as a trajectory generator, and the parameters of the DMPs are modulated using Locally Weighted Regression. The learning system is combined with adaptive oscillators that determine the phase and frequency of the motion from measured electromyography (EMG) signals. We tested the method in real robot experiments in which subjects wearing an elbow exoskeleton had to move an object of unknown mass according to a predefined reference motion. We further evaluated the proposed approach on a whole-arm exoskeleton to show that it can adaptively derive assistive torques even for multiple-joint motion.
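
A minimal sketch of the two ingredients named in the abstract, with all signals synthetic: an adaptive frequency oscillator (in the style of Righetti and Ijspeert) locks its phase and frequency to a periodic input, and the weights of a periodic DMP-like torque profile are nudged in the direction that reduces a measured muscle-activity signal. The update rule, gains, and synthetic "EMG" are illustrative assumptions, not the published learning law.

```python
import numpy as np

dt, T = 0.002, 30.0
steps = int(T / dt)
omega_true = 2 * np.pi * 0.8          # true task frequency [rad/s] (unknown to the system)

# Adaptive frequency oscillator state.
phi, omega = 0.0, 2 * np.pi * 0.5     # initial guess of phase and frequency
k_phase, k_freq = 20.0, 10.0

# Periodic "DMP-like" torque profile: weighted sum of von Mises basis functions.
n_basis = 10
centers = np.linspace(0, 2 * np.pi, n_basis, endpoint=False)
weights = np.zeros(n_basis)
eta = 5.0                             # weight adaptation rate

def basis(phase):
    b = np.exp(2.0 * (np.cos(phase - centers) - 1.0))
    return b / b.sum()

for k in range(steps):
    t = k * dt
    ref_signal = np.cos(omega_true * t)                 # periodic input (e.g. filtered EMG)
    # Adaptive oscillator: synchronise phase and frequency to the input.
    e = ref_signal - np.cos(phi)
    phi += dt * (omega - k_phase * e * np.sin(phi))
    omega += dt * (-k_freq * e * np.sin(phi))
    phi %= 2 * np.pi

    # Assistive torque from the current weights at the current phase.
    tau_assist = weights @ basis(phi)
    # Synthetic muscle activity: what the user must still produce themselves.
    needed_torque = 2.0 * np.cos(omega_true * t)
    emg = needed_torque - tau_assist
    # Nudge the active basis weights so that the measured activity shrinks.
    weights += eta * dt * emg * basis(phi)

print("estimated frequency [Hz]:", omega / (2 * np.pi))
print("final residual muscle activity:", abs(emg))
```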
