Ugurlu B.,Ozyegin University | Ugurlu B.,Advanced Telecommunications Research Institute | Nishimura M.,Toyota Technological Institute | Nishimura M.,Suzuki Motor Corporation | And 3 more authors.
IEEE Transactions on Human-Machine Systems | Year: 2015

This paper presents a wearable upper body exoskeleton system with a model-based compensation control framework to support robot-aided shoulder-elbow rehabilitation and power assistance tasks. To eliminate the need for EMG and force sensors, we exploit off-the-shelf compensation techniques developed for robot manipulators. Thus, target rehabilitation tasks are addressed by using only encoder readings. A proof-of-concept evaluation was conducted with five able-bodied participants. The patient-active rehabilitation task was realized via observer-based user torque estimation, in which resistive forces were adjusted using virtual impedance. In the patient-passive rehabilitation task, the proposed controller enabled precise joint tracking with a maximum positioning error of 0.25°. In the power assistance task, the users' muscular activities were reduced by up to 85% while exercising with a 5 kg dumbbell. Therefore, the exoskeleton system was regarded as being useful for the target tasks, indicating that it has the potential to promote robot-aided therapy protocols. © 2013 IEEE.
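The abstract does not give implementation details for the observer-based torque estimation or the virtual impedance, so the Python sketch below only illustrates the general idea for a single joint. All parameter names and values (INERTIA, VISCOUS, GRAVITY_COEF, k_virtual, d_virtual) are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical single-joint (elbow) parameters -- not values from the paper.
INERTIA = 0.05       # kg*m^2, identified link inertia
VISCOUS = 0.02       # N*m*s/rad, identified viscous friction
GRAVITY_COEF = 1.2   # N*m, lumped m*g*l gravity term

def estimate_user_torque(q, dq, ddq, tau_motor):
    """Attribute to the user whatever torque the rigid-body model cannot
    explain; q, dq, ddq come from encoder readings and their numerical
    derivatives, so no force or EMG sensing is required."""
    tau_model = INERTIA * ddq + VISCOUS * dq + GRAVITY_COEF * np.sin(q)
    return tau_motor - tau_model

def virtual_impedance_torque(q, dq, q_ref, k_virtual=2.0, d_virtual=0.1):
    """Resistive torque of a virtual spring-damper anchored at q_ref;
    raising k_virtual and d_virtual makes a patient-active exercise harder."""
    return -k_virtual * (q - q_ref) - d_virtual * dq
```

In such a scheme, the commanded joint torque in a patient-active exercise would combine model-based compensation with the virtual-impedance term, while the estimated user torque serves as the effort signal derived from encoders alone.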


Callan D.E.,Osaka University | Callan D.E.,Universal Communication Research Institute | Terzibas C.,Universal Communication Research Institute | Cassel D.B.,Locomobi Inc | And 2 more authors.
Frontiers in Human Neuroscience | Year: 2016

The goal of this research is to test the potential for neuroadaptive automation to improve response speed to a hazardous event by using a brain-computer interface (BCI) to decode perceptual-motor intention. Seven participants underwent four experimental sessions while brain activity was measured with magnetoencephalography (MEG). The first three sessions were of a simple constrained task in which the participant was to pull back on the control stick to recover from a perturbation in attitude in one condition and to passively observe the perturbation in the other condition. The fourth session consisted of having to recover from a perturbation in attitude while piloting the plane through the Grand Canyon, constantly maneuvering to track over the river below. Independent component analysis was used on the first two sessions to extract artifacts and find an event-related component associated with the onset of the perturbation. These two sessions were used to train a decoder to classify trials in which the participant recovered from the perturbation (motor intention) vs. just passively viewing the perturbation. The BCI-decoder was tested on the third session of the same simple task and found to be able to significantly distinguish motor intention trials from passive viewing trials (mean = 69.8%). The same BCI-decoder was then applied to the fourth session, the complex task. The BCI-decoder significantly distinguished perturbation from no-perturbation trials (73.3%) with a significant time savings of 72.3 ms (original manual response time of 425.0 ms vs. 352.7 ms with the BCI-decoder). The BCI-decoder model of the best subject was shown to generalize for both performance and time savings to the other subjects. The results of our off-line, open-loop simulation demonstrate that BCI-based neuroadaptive automation has the potential to decode motor intention faster than manual control in response to a hazardous perturbation in flight attitude while ignoring ongoing motor- and visual-induced activity related to piloting the airplane. © 2016 Callan, Terzibas, Cassel, Sato and Parasuraman.
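The abstract does not spell out the decoding pipeline, so the sketch below shows one conventional way to train a cross-session MEG intention decoder with scikit-learn. The epoch array layout, the label coding, and the choice of shrinkage LDA are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Assumed inputs: artifact-cleaned MEG epochs of shape
# (n_trials, n_channels, n_times); labels 1 = motor intention
# (pull back on the stick), 0 = passive viewing of the perturbation.

def train_intention_decoder(epochs_train, labels_train):
    """Fit a linear decoder on flattened channel-by-time features."""
    X = epochs_train.reshape(len(epochs_train), -1)
    clf = make_pipeline(
        StandardScaler(),
        LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"),
    )
    clf.fit(X, labels_train)
    return clf

def decode_session(clf, epochs_test):
    """Predict motor intention vs. passive viewing for a held-out session."""
    X = epochs_test.reshape(len(epochs_test), -1)
    return clf.predict(X)
```

Training on the first two (simple-task) sessions and calling decode_session on the third and fourth sessions would mirror the cross-session, cross-task evaluation described above.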


Ugur E.,University of Innsbruck | Ugur E.,Advanced Telecommunications Research Institute | Nagai Y.,Osaka University | Celikkanat H.,Middle East Technical University | And 2 more authors.
Robotica | Year: 2015

Parental scaffolding is an important mechanism that speeds up infant sensorimotor development. Infants pay stronger attention to the features of the objects highlighted by parents, and their manipulation skills develop earlier than they would in isolation due to caregivers' support. Parents are known to make modifications in infant-directed actions, which are often called motionese. The features that might be associated with motionese are amplification, repetition and simplification in caregivers' movements, which are often accompanied by increased social signalling. In this paper, we extend our previously developed affordance learning framework to enable our hand-arm robot equipped with a range camera to benefit from parental scaffolding and motionese. We first present our results on how parental scaffolding can be used to guide the robot's learning and to modify its crude action execution to speed up the learning of complex skills. For this purpose, an interactive human caregiver-infant scenario was realized with our robotic setup. This setup allowed the caregiver's modification of the ongoing reach and grasp movement of the robot via physical interaction. This enabled the caregiver to make the robot grasp the target object, which in turn could be used by the robot to learn the grasping skill. We also show how parental scaffolding can be used to speed up imitation learning. We present the details of our work that takes the robot beyond simple goal-level imitation, making it a better imitator with the help of motionese. © 2014 Cambridge University Press.
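As a rough illustration of what an affordance learner of this kind might look like, the toy class below maps object features from a range camera plus action parameters to a predicted effect category. The feature layout, the effect labels, and the use of an RBF SVM are assumptions for the sketch, not the authors' framework.

```python
import numpy as np
from sklearn.svm import SVC

class AffordanceModel:
    """Toy affordance learner: predicts an effect category
    (e.g. 'grasped', 'pushed', 'no-change') from object features
    extracted from a range camera plus action parameters."""

    def __init__(self):
        self.clf = SVC(kernel="rbf", probability=True)
        self.X, self.y = [], []

    def add_experience(self, object_features, action_params, effect_label):
        # Caregiver-corrected executions enter here exactly like
        # self-explored ones, so scaffolding simply supplies successful
        # grasp samples earlier than random exploration would.
        self.X.append(np.concatenate([object_features, action_params]))
        self.y.append(effect_label)
        if len(set(self.y)) >= 2:  # SVC needs at least two classes to fit
            self.clf.fit(np.array(self.X), np.array(self.y))

    def predict_effect(self, object_features, action_params):
        x = np.concatenate([object_features, action_params]).reshape(1, -1)
        return self.clf.predict(x)[0]
```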


Sakata M.,Advanced Telecommunications Research Institute | Yucel Z.,Advanced Telecommunications Research Institute | Shinozawa K.,Advanced Telecommunications Research Institute | Hagita N.,Advanced Telecommunications Research Institute | And 3 more authors.
ACM Transactions on Management Information Systems | Year: 2013

Common periodic health check-ups include several clinical test items at affordable cost. However, these standard tests do not directly indicate signs of most lifestyle diseases. In order to detect such diseases, a number of additional specific clinical tests are required, which increase the cost of the health check-up. This study aims to enrich our understanding of the common health check-ups and proposes a way to estimate the signs of several lifestyle diseases based on the standard tests in common examinations without performing any additional specific tests. In this manner, we enable a diagnostic process in which the physician may choose to perform or skip a costly test according to the estimate obtained from a set of common, affordable tests. To that end, the relation between standard and specific test results is modeled with a multivariate kernel density estimate. The condition of the patient regarding a specific test is assessed following a Bayesian framework. Our results indicate that the proposed method achieves an overall estimation accuracy of 84%. In addition, an outstanding estimation accuracy is achieved for a subset of high-cost tests. Moreover, comparison with standard artificial intelligence methods suggests that our algorithm outperforms the conventional methods. Our contributions are as follows: (i) promotion of affordable health check-ups, (ii) high estimation accuracy in certain tests, (iii) generalization capability due to ease of implementation on different platforms and institutions, (iv) flexibility to apply to various tests and potential to improve early detection rates. © 2013 ACM.
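The estimation step described above (class-conditional multivariate kernel densities combined through Bayes' rule) can be sketched as follows. The array layout and the normal/abnormal coding of the specific test are assumptions, and SciPy's Gaussian KDE stands in for whatever kernel the authors used.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_class_densities(standard_tests, specific_flag):
    """Fit one multivariate kernel density estimate per outcome of the
    costly specific test (0 = normal, 1 = abnormal), using only the
    standard check-up values; also record the class priors."""
    densities, priors = {}, {}
    for label in (0, 1):
        subset = standard_tests[specific_flag == label]
        densities[label] = gaussian_kde(subset.T)  # expects (n_dims, n_samples)
        priors[label] = len(subset) / len(standard_tests)
    return densities, priors

def posterior_abnormal(densities, priors, new_patient):
    """Bayes' rule: P(specific test abnormal | standard test values)."""
    joint = {k: densities[k](new_patient)[0] * priors[k] for k in densities}
    return joint[1] / (joint[0] + joint[1])
```

The posterior then gives the physician a probability that the costly specific test would come back abnormal, computed from the standard check-up values alone.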


News Article | September 9, 2016
Site: http://www.biosciencetechnology.com/rss-feeds/all/rss.xml/all

Volunteers who started an experiment feeling neutral about certain faces they saw ended up unknowingly adopting the feelings that scientists induced via an MRI feedback technique, according to newly published research. The study in PLOS Biology therefore suggests that there is a single region of the brain where both positive and negative feelings for faces take shape and provides the second demonstration this year that the MRI technique can be used to train a mental process in an unknowing subject.

This spring, the team used the same method to associate the perception of color with the context of a pattern so strongly that volunteers saw the color when cued by the pattern, even if the color wasn't really there. In the new study, the researchers sought to determine whether they could direct feelings about faces -- a more sophisticated brain function that is closer to their eventual goal, which is to develop the technique to the point where it could become a tool for psychological therapy, for instance for anxiety.

"Face recognition is a very important social function for people," said co-author Takeo Watanabe, the Fred M. Seed Professor of Cognitive and Linguistic Sciences at Brown University. "Facial recognition is associated with people's emotions."

The technique, which the researchers call "DecNef," for decoded neurofeedback, starts with detecting and analyzing the specific activity patterns in a brain region that correspond to a mental state. For example, at the beginning of the new study, while 24 volunteers saw hundreds of faces and rated their sentiments about each of them (on a scale of 1 for dislike to 10 for like, with 5 for neutral), the researchers used MRI to record the patterns of activity in a brain region called the cingulate cortex.

That step alone was fairly conventional neuroscience except that many scientists believe that positive or negative feelings about faces are formulated in separate brain regions. But this team of four researchers at Brown University and the Advanced Telecommunications Research Institute International in Kyoto, Japan, wanted to test whether the cingulate cortex handles both sides of the emotion. Sure enough, the researchers' software, called a decoder, was able to analyze the recordings to identify reliable and distinct patterns in each volunteer's cingulate cortex associated with positive and negative feelings about faces.

"We found that the cingulate cortex seems to handle both opposing directions with different activity patterns," said co-author Yuka Sasaki, associate professor (research) of cognitive, linguistic and psychological sciences at Brown.

With these signature patterns established for each volunteer, the participants were then unknowingly divided into two groups of 12 -- either positive or negative -- and were called back in for a few days of additional research in the MRI machine. In this phase the subjects were shown a subset of the faces they rated as neutral and were then asked to perform a seemingly unrelated task: After seeing each face on the screen, they were then shown a disk and asked to somehow use their minds to try to make it appear as big as possible. The bigger they could make the disk, they were told, the more of a small monetary reward they could receive. In reality, the tasks weren't unrelated. Participants didn't know this at the time, but the only way the disk would grow was when the MRI readings showed that they happened (for whatever reason) to produce their signature patterns of positive or negative feelings about faces in their cingulate cortex.

In other words, the experiment rewarded volunteers in the positive group with a larger disk when they produced the pattern associated with liking the faces after seeing a previously neutral one. Similarly, the experiment rewarded volunteers in the negative group with a growing disk the more they happened to produce the pattern associated with dislike after seeing a neutral one. In essence, DecNef aims to train people to produce specific feelings or perceptions in specific contexts by rewarding those moments when they unknowingly do so.

A third group of six other participants was used as a control group. They saw faces and rated them, but were not given the DecNef step of having to enlarge a disk in association with the activation patterns that represent positive or negative feelings. Finally, all the participants were then queried anew about their feelings regarding the initially neutral faces.

When the researchers analyzed the results, they were able to make several key findings. On average, the positive group's ratings of the neutral faces moved up mildly but significantly (by about 0.6 on the 1 to 10 scale), while the negative group's ratings of the faces moved down a bit less but still significantly. Meanwhile the control group's ratings didn't change significantly at all.

"From all these results we conclude that association of originally neutrally rated faces with covert induction of activity patterns in the single brain region, the cingulate cortex, led to changes in facial preference specifically for those faces, and in a specific preference -- positive or negative -- direction," the authors wrote in the study.

To be as certain as possible about the findings, they did a few more analyses. In post-experiment interviews, they asked the subjects whether they knew what was really going on -- none did. Then the researchers explained what the experiment was really about and asked people to say whether they thought they were in the positive or negative group. People were no better than chance at saying which they were in. Together these results suggest that none of the experimental volunteers changed their preferences about neutral faces based on their own will or intention.

In another analysis, the researchers crunched the numbers to see if the degree of activity in the cingulate cortex during the disk-enlargement phase correlated with the degree of change in preferences. The results revealed a high correlation (0.78 out of 1). In other words, the amount of brain activity was proportional to the amount of induced feeling.

While the induced changes in feeling were mild, the training took place over only a few days, the researchers noted. Training that occurs on the scale of weeks, as is often required for clinical therapies, might have induced stronger feelings. But even a small effect could be beneficial for people if it blunts a persistently painful feeling associated with a certain trigger, Watanabe said. "If someone develops a traumatic memory that makes him or her suffer, even a small reduction of the suffering would be helpful," Watanabe said.
The researchers also said they are aware that there could be potential abuse or misuse of the technique -- a kind of brainwashing -- so it might be good if it proves at least somewhat limited in its effect. In addition to Sasaki and Watanabe, the paper's other authors are lead author Kazuhisa Shibata and corresponding author Mitsuo Kawato.
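For readers curious how the two DecNef stages described above fit together computationally, here is a minimal sketch: a decoder is first trained on cingulate-cortex voxel patterns labeled by the participant's own like/dislike ratings, and during neurofeedback the disk size is made proportional to the decoded probability of the participant's (undisclosed) target class. The use of plain L2 logistic regression and all variable names are assumptions; the published work uses its own decoding pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_preference_decoder(voxel_patterns, liking_labels):
    """Decoder stage: learn to tell 'like' (1) from 'dislike' (0)
    activity patterns in the cingulate ROI, using the participant's
    own ratings from the first session as labels."""
    clf = LogisticRegression(penalty="l2", max_iter=1000)
    clf.fit(voxel_patterns, liking_labels)  # shape (n_trials, n_voxels)
    return clf

def feedback_disk_size(clf, current_pattern, target_label, max_radius=100.0):
    """Neurofeedback stage: the disk shown to the participant grows with
    the decoded probability that the current pattern matches the
    (undisclosed) target class, and the monetary reward tracks it."""
    prob = clf.predict_proba(current_pattern.reshape(1, -1))[0, target_label]
    return prob * max_radius
```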
