Intelligent Robotics and Communication Laboratories

Kyoto, Japan


Okumura Y.,Kyoto University | Kanakogi Y.,Kyoto University | Kanda T.,Intelligent Robotics and Communication Laboratories | Ishiguro H.,Intelligent Robotics and Communication Laboratories | And 2 more authors.
Cognition | Year: 2013

Social learning enables infants to acquire information, especially through communication. However, it is unknown whether humans are the prime source of information for infant learning. Here we report that humans have a powerful influence on infants' object learning compared with nonhuman agents (robots). Twelve-month-old infants were shown videos in which a human or a robot gazed at an object. The results demonstrated that the infants followed the gaze direction of both agents, but only human gaze facilitated their object learning: Infants showed enhanced processing of, and preferences for, the target object gazed at by the human but not by the robot. Importantly, an extended fixation on a target object without the orientation of human gaze did not produce these effects. Together, these findings show the importance of humanness in the gazer, suggesting that infants may be predisposed to treat humans as privileged sources of information for learning. © 2013 Elsevier B.V.

Saygin A.P.,University of California at San Diego | Saygin A.P.,University College London | Chaminade T.,University College London | Chaminade T.,Aix - Marseille University | And 5 more authors.
Social Cognitive and Affective Neuroscience | Year: 2012

Using functional magnetic resonance imaging (fMRI) repetition suppression, we explored the selectivity of the human action perception system (APS), which consists of temporal, parietal and frontal areas, for the appearance and/or motion of the perceived agent. Participants watched body movements of a human (biological appearance and movement), a robot (mechanical appearance and movement) or an android (biological appearance, mechanical movement). With the exception of the extrastriate body area, which showed more suppression for human-like appearance, the APS was not selective for appearance or motion per se. Instead, distinctive responses were found to the mismatch between appearance and motion: whereas suppression effects for the human and robot were similar to each other, they were stronger for the android, notably in the bilateral anterior intraparietal sulcus, a key node in the APS. These results could reflect increased prediction error as the brain negotiates an agent that appears human but does not move biologically, and help explain the 'uncanny valley' phenomenon. © The Author(s) 2011. Published by Oxford University Press.

Kim Y.-J.,KAIST | Lee J.-Y.,Intelligent Robotics and Communication Laboratories | Lee J.-J.,KAIST
2012 IEEE International Conference on Information and Automation, ICIA 2012 | Year: 2012

In this paper, we present a balance control strategy for a walking biped robot subjected to an unexpected external force. We assume that a sudden lateral or longitudinal force may be applied at the center of the robot's pelvis while it walks along a trajectory predesigned for energy efficiency. For this situation, a balance control strategy is investigated and proposed. The strategy is divided into four successive actions. First, the robot is controlled so that the ZMP (Zero Moment Point) moves to the center of its sole. Second, the robot moves its swing leg to lower the sole of that leg. Third, the swing leg is stretched. Fourth, the stance leg is contracted to bring the robot into the double support phase. Computer simulations are performed to verify the proposed strategy. © 2012 IEEE.
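
The four successive actions described above can be sketched as a simple phase machine; the phase names and the boolean completion conditions below are illustrative assumptions, not the paper's actual controller:

```python
from enum import Enum, auto

class BalancePhase(Enum):
    SHIFT_ZMP = auto()        # move the ZMP to the center of the stance sole
    LOWER_SWING = auto()      # lower the sole of the swing leg
    STRETCH_SWING = auto()    # stretch the swing leg toward the ground
    CONTRACT_STANCE = auto()  # contract the stance leg into double support

def next_phase(phase, zmp_centered, sole_lowered, swing_stretched):
    """Advance through the four successive balance actions, moving on
    only when the current action's completion condition is met."""
    if phase is BalancePhase.SHIFT_ZMP and zmp_centered:
        return BalancePhase.LOWER_SWING
    if phase is BalancePhase.LOWER_SWING and sole_lowered:
        return BalancePhase.STRETCH_SWING
    if phase is BalancePhase.STRETCH_SWING and swing_stretched:
        return BalancePhase.CONTRACT_STANCE
    return phase  # condition not yet met: stay in the current phase
```

In a real controller each phase would command joint trajectories; here only the sequencing logic of the four actions is shown.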

Heracleous P.,CNRS GIPSA Laboratory | Heracleous P.,Intelligent Robotics and Communication Laboratories | Tran V.-A.,CNRS GIPSA Laboratory | Nagai T.,Nara Institute of Science and Technology | Shikano K.,Nara Institute of Science and Technology
IEEE Transactions on Audio, Speech and Language Processing | Year: 2010

Non-audible murmur (NAM) is an unvoiced speech signal that can be received through the body tissue with the use of special acoustic sensors (i.e., NAM microphones) attached behind the talker's ear. The authors had previously reported experimental results for NAM recognition using a stethoscopic and a silicon NAM microphone. Using a small amount of training data from a single speaker and adaptation approaches, 93.9% word accuracy was achieved for a 20k-word Japanese dictation task. In this paper, further analysis of NAM speech is made using distance measures between hidden Markov models (HMMs). It is shown that, owing to the reduced spectral space of NAM speech, the HMM distances are also reduced compared with those of normal speech. In the case of Japanese vowels and fricatives, the distance measures in NAM speech follow the same relative inter-phoneme relationship as in normal speech, without significant differences. However, significant differences have been found for Japanese plosives. More specifically, in NAM speech the distances between voiced/unvoiced consonant pairs articulated at the same place drastically decrease. As a result, the inter-phoneme relationship changes significantly compared with normal speech, causing a substantial decrease in recognition accuracy. A speaker-dependent phoneme recognition experiment was conducted, obtaining 81.5% phoneme correctness for NAM and showing a relationship between HMM distance measures and phoneme accuracy. In a NAM microphone, body transmission and the loss of lip radiation act as a low-pass filter; as a result, higher-frequency components are attenuated in a NAM signal. Because of this spectral reduction, NAM's unvoiced nature, and the type of articulation, NAM sounds become similar to one another, causing a larger number of confusions than in normal speech.
Yet many of those sounds are visually distinct on the face, mouth, and lips, and integrating visual information increases their discriminability; as a result, recognition accuracy increases as well. In this article, visual information extracted from the talkers' facial movements is fused with NAM speech. The experimental results reveal a relative improvement of 10.5% on average when NAM speech was fused with facial information, compared with using NAM speech alone. © 2010 IEEE.
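
One common way to fuse an acoustic and a visual stream in HMM-based recognition is a weighted combination of per-stream log-likelihoods. This is only an illustrative sketch of that general idea; the function name and the stream weight are assumptions, not the paper's actual fusion scheme:

```python
def fused_score(nam_loglik: float, visual_loglik: float, w: float = 0.7) -> float:
    """Late fusion of NAM-acoustic and visual log-likelihoods.

    w weights the NAM stream, (1 - w) the visual stream; the value 0.7
    is illustrative and would be tuned on held-out data in practice.
    """
    return w * nam_loglik + (1.0 - w) * visual_loglik
```

The recognizer would then pick the phoneme or word hypothesis maximizing this fused score instead of the NAM-only score.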

Zanlungo F.,Intelligent Robotics and Communication Laboratories | Zanlungo F.,Japan Science and Technology Agency | Ikeda T.,Intelligent Robotics and Communication Laboratories | Ikeda T.,Japan Science and Technology Agency | And 2 more authors.
EPL | Year: 2011

We introduce a new specification of the social force model in which pedestrians explicitly predict the place and time of the next collision in order to avoid it. This and other specifications of the social force model are calibrated, using genetic algorithms, on a set of pedestrian trajectories obtained by tracking pedestrians with laser range finders in controlled experiments, and their performance is compared. The results show that the proposed method describes the trajectory set better. © 2011 Europhysics Letters Association.
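
The core idea, predicting the point of closest approach and steering away from it, can be sketched as follows. The functional form, the repulsion strength A, and the time horizon tau0 are illustrative assumptions, not the calibrated model from the paper:

```python
import numpy as np

def collision_prediction_force(xi, vi, xj, vj, A=1.0, tau0=3.0):
    """Repulsive force on pedestrian i that anticipates the point of
    closest approach with pedestrian j, assuming both keep their
    current velocities (A and tau0 are hypothetical parameters)."""
    dx = xj - xi              # relative position
    dv = vj - vi              # relative velocity
    v2 = dv @ dv
    if v2 < 1e-9:
        return np.zeros(2)    # no relative motion: no predicted collision
    t_min = max(-(dx @ dv) / v2, 0.0)   # time of minimal distance (future only)
    d_min = dx + dv * t_min             # relative position at closest approach
    dist = np.linalg.norm(d_min)
    if dist < 1e-9:
        return np.zeros(2)    # head-on: direction undefined in this sketch
    # push i away from the predicted collision, fading with the horizon tau0
    return -A * np.exp(-t_min / tau0) * d_min / dist
```

In a simulation loop this force would be summed over nearby pedestrians and added to the driving force toward the desired velocity, as in the standard social force model.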

Heracleous P.,Intelligent Robotics and Communication Laboratories | Hagita N.,Intelligent Robotics and Communication Laboratories
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings | Year: 2011

This article introduces automatic recognition of speech without any audio information. Movements of the tongue, lips, and jaw are tracked by an Electro-Magnetic Articulography (EMA) device and are used as features to create hidden Markov models (HMMs) and conduct automatic speech recognition in a conventional way. The results obtained are promising, which confirm that phonetic features characterizing articulation are as discriminating as those characterizing acoustics (except for voicing). The results also show that using tongue parameters result in a higher accuracy compared with the lip parameters. © 2011 IEEE.

Zanlungo F.,Intelligent Robotics and Communication Laboratories | Zanlungo F.,Japan Science and Technology Agency | Ikeda T.,Intelligent Robotics and Communication Laboratories | Ikeda T.,Japan Science and Technology Agency | And 2 more authors.
PLoS ONE | Year: 2012

We propose a way to introduce in microscopic pedestrian models a "social norm" in collision avoiding and overtaking, i.e. the tendency, shared by pedestrians belonging to the same culture, to avoid collisions and perform overtaking in a preferred direction. The "social norm" is implemented, regardless of the specific collision avoiding model, as a rotation in the perceived velocity vector of the opponent at the moment of computation of the collision avoiding strategy, and justified as an expectation that the opponent will follow the same "social norm" (for example a tendency to avoid on the left and overtake on the right, as proposed in this work for Japanese pedestrians). By comparing with real world data, we show that the introduction of this norm allows for a better reproduction of macroscopic pedestrian density and velocity patterns. © 2012 Zanlungo et al.
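
Implemented as described, the norm is just a fixed rotation applied to the opponent's velocity before the avoidance computation. The sketch below assumes a hypothetical rotation angle theta; the actual angle and sign convention (e.g. for the Japanese avoid-left/overtake-right norm) would come from calibration against data:

```python
import numpy as np

def perceived_velocity(v_opponent, theta=np.radians(10)):
    """Rotate the opponent's observed velocity by a fixed angle theta
    before computing the collision-avoidance strategy; theta encodes
    the culture-specific 'social norm' (10 degrees is illustrative)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])   # standard 2D counterclockwise rotation
    return R @ v_opponent
```

Because the rotation is applied to the perceived velocity rather than to the force itself, the same modification plugs into any underlying collision-avoidance model, as the abstract notes.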

Ikeda T.,Intelligent Robotics and Communication Laboratories | Chigodo Y.,Intelligent Robotics and Communication Laboratories | Rea D.,Intelligent Robotics and Communication Laboratories | Zanlungo F.,Intelligent Robotics and Communication Laboratories | And 2 more authors.
Robotics: Science and Systems | Year: 2013

This study addresses a method to predict pedestrians' long-term behavior in order to enable a robot to provide them with services. To do so, we want to predict both their final goal and the trajectory they will follow to reach it. We approach this task by borrowing from human science studies the concept of sub-goals: points and landmarks of the environment towards which pedestrians walk, or where they make directional choices, before reaching their final destination. We retrieve the positions of these sub-goals from the analysis of a large set of pedestrian trajectories in a shopping mall, and model pedestrians' global behavior through transition probabilities between sub-goals. The method allows us to predict the future position of pedestrians based on the observation of their trajectory up to the current moment. Keywords: pedestrian models; sub-goal retrieval; behavior anticipation. © 2013 Massachusetts Institute of Technology.
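
Once sub-goals are identified, modeling global behavior through transition probabilities amounts to estimating a Markov chain over sub-goal labels. The sketch below shows that estimation step only; the sub-goal names are hypothetical and the paper's full method (sub-goal retrieval and trajectory prediction) is not reproduced:

```python
from collections import Counter

def learn_transitions(trajectories):
    """Estimate sub-goal transition probabilities from observed
    sub-goal sequences (each trajectory is a list of sub-goal labels)."""
    counts = {}
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            counts.setdefault(a, Counter())[b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def most_likely_next(transitions, current):
    """Predict the most probable next sub-goal given the current one."""
    return max(transitions[current], key=transitions[current].get)
```

For example, with illustrative labels, `learn_transitions([["entrance", "atrium", "shop"], ["entrance", "atrium", "exit"]])` yields a probability of 1.0 for entrance-to-atrium and 0.5 for each of atrium's successors.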

Lee J.-W.,Korea Advanced Institute of Science and Technology | Lee J.-Y.,Intelligent Robotics and Communication Laboratories | Lee J.-J.,Korea Advanced Institute of Science and Technology
IEEE Wireless Communications Letters | Year: 2013

The Energy-Efficient Coverage (EEC) problem in unstructured Wireless Sensor Networks (WSNs) is an important issue because WSNs have limited energy. In this letter, we propose a novel stochastic optimization algorithm, called the Jenga-Inspired Optimization Algorithm (JOA), which overcomes some of the weaknesses of other optimization algorithms for solving the EEC problem. The JOA was inspired by Jenga, a well-known board game. We also introduce a probabilistic sensor detection model, which leads to a more realistic approach to solving the EEC problem. Simulations are conducted to verify the effectiveness of the JOA in comparison with existing algorithms. © 2013 IEEE.
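
Probabilistic sensor detection models of the kind the letter mentions typically replace the binary sensing disk with a certain region, an impossible region, and an exponentially decaying band in between. The sketch below is one common form of such a model with illustrative parameters; it is not necessarily the exact model used in the letter:

```python
import math

def detection_probability(distance, r, r_e, alpha=0.5, beta=1.0):
    """Probabilistic detection of a target at a given distance from a
    sensor with nominal range r and range uncertainty r_e: certain
    inside r - r_e, impossible beyond r + r_e, exponentially decaying
    in between (alpha and beta are illustrative decay parameters)."""
    if distance <= r - r_e:
        return 1.0
    if distance >= r + r_e:
        return 0.0
    a = distance - (r - r_e)          # penetration into the uncertain band
    return math.exp(-alpha * a ** beta)
```

A coverage objective would then score a candidate set of active sensors by the joint detection probability over grid points, which is what a stochastic optimizer such as the JOA would maximize per unit of energy.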

Glas D.F.,Hiroshi Ishiguro Laboratories | Kanda T.,Intelligent Robotics and Communication Laboratories | Ishiguro H.,Osaka University
ACM/IEEE International Conference on Human-Robot Interaction | Year: 2016

Interaction Composer, a visual programming environment designed to enable programmers and non-programmers to collaboratively design social human-robot interactions in the form of state-based flows, has been in use at our laboratory for eight years. The system architecture and the design principles behind the framework have been presented in other work, but in this paper we take a case-study approach, examining several actual examples of the use of this toolkit over an eight-year period. We examine the structure and content of interaction flows, identify common design patterns, and discuss elements of the framework that have proven valuable, features that did not serve their intended purposes, and ways that future systems might better address these issues. It is hoped that the insights gained from this study will contribute to the development of more effective and more usable tools and frameworks for interaction design. © 2016 IEEE.
