Universal Communication Research Institute

Japan

Callan D., Osaka University | Callan D., Universal Communication Research Institute | Callan A., Osaka University | Callan A., Universal Communication Research Institute | Jones J., Wilfrid Laurier University
Frontiers in Neuroscience | Year: 2014

Brain imaging studies indicate that speech motor areas are recruited for auditory speech perception, especially when intelligibility is low due to environmental noise or when speech is accented. The purpose of the present study was to determine the relative contribution of brain regions to the processing of speech containing phonetic categories from one's own language, speech with accented samples of one's native phonetic categories, and speech with unfamiliar phonetic categories. To that end, native English and Japanese speakers identified the speech sounds /r/ and /l/ that were produced by native English speakers (unaccented) and Japanese speakers (foreign-accented) while functional magnetic resonance imaging measured their brain activity. For native English speakers, the Japanese-accented speech was more difficult to categorize than the unaccented English speech. In contrast, Japanese speakers have difficulty distinguishing between /r/ and /l/, so both the Japanese-accented and English unaccented speech were difficult to categorize. Brain regions involved with listening to foreign-accented productions of a first language included primarily the right cerebellum, the left ventral inferior premotor cortex (PMvi), and Broca's area. Brain regions most involved with listening to a second-language phonetic contrast (foreign-accented and unaccented productions) also included the left PMvi and the right cerebellum. Additionally, increased activity was observed in the right PMvi, the left and right ventral superior premotor cortex (PMvs), and the left cerebellum. These results support a role for speech motor regions during the perception of foreign-accented native speech and during the perception of difficult second-language phonetic contrasts. © 2014 Callan, Callan and Jones.


Sasaki H., Universal Communication Research Institute | Yamamoto K., Universal Communication Research Institute | Wakunami K., Universal Communication Research Institute | Ichihashi Y., Universal Communication Research Institute | And 2 more authors.
Scientific Reports | Year: 2014

In this paper, we propose a new method of using multiple spatial light modulators (SLMs) to increase the size of three-dimensional (3D) images that are displayed using electronic holography. The size of images produced by the previous method had an upper limit determined by the path length of the image-readout part. We were able to produce larger colour electronic holographic images with a newly devised space-saving image-readout optical system for multiple reflection-type SLMs. This optical system is designed so that the path length of the image-readout part is half that of the previous method. It consists of polarization beam splitters (PBSs), half-wave plates (HWPs), and polarizers. We used 16 (4 × 4) 4K×2K-pixel SLMs for displaying holograms. The experimental device we constructed was able to perform 20 fps video reproduction in colour of full-parallax holographic 3D images with a diagonal image size of 85 mm and a horizontal viewing-zone angle of 5.6 degrees.
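
The quoted viewing-zone angle follows from the diffraction limit set by the SLM pixel pitch. As a rough check, the Python sketch below reproduces the ~5.6-degree figure under an assumed 5.4-µm pitch and 532-nm (green) wavelength; neither value is given in the abstract, so both are illustrative.

```python
import math

# Back-of-the-envelope check of the figures quoted above (a sketch;
# the pixel pitch and wavelength are assumptions, not paper values).
wavelength = 532e-9   # green laser, metres (assumed)
pixel_pitch = 5.4e-6  # SLM pixel pitch, metres (assumed)

# Maximum diffraction angle of a pixelated hologram sets the
# viewing-zone angle: theta = 2 * asin(lambda / (2 * p)).
theta = 2 * math.degrees(math.asin(wavelength / (2 * pixel_pitch)))
print(f"viewing-zone angle: {theta:.1f} degrees")  # ~5.6

# Total pixel count of the 4 x 4 tiling of 4K x 2K panels.
pixels = 16 * 3840 * 2160
print(f"total pixels: {pixels / 1e6:.0f} million")  # ~133
```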


Sasaki H., Universal Communication Research Institute | Yamamoto K., Universal Communication Research Institute | Ichihashi Y., Universal Communication Research Institute | Senoh T., Universal Communication Research Institute
Scientific Reports | Year: 2014

In electronic holography, various methods have been considered for using multiple spatial light modulators (SLMs) to increase the image size. In a previous work, we used a monochrome light source for a method that placed an optical system containing lens arrays and other components in front of multiple SLMs. This paper proposes a colourization technique for that system based on time-division multiplexing using laser light sources of three colours (red, green, and blue). The experimental device we constructed was able to perform video playback (20 fps) in colour of full-parallax holographic three-dimensional (3D) images with an image size of 63 mm and a viewing-zone angle of 5.6 degrees without losing any part of the 3D image.
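
Time-division colour multiplexing implies that each video frame is split into three sequential colour fields, so the SLMs must refresh at three times the video rate. A minimal sketch of this timing budget (the 20 fps figure is from the abstract; the rest is illustrative arithmetic):

```python
video_fps = 20                    # colour video rate from the abstract
colours = ("red", "green", "blue")

slm_refresh_hz = video_fps * len(colours)  # each frame = 3 colour fields
field_ms = 1000.0 / slm_refresh_hz

print(f"SLM refresh rate: {slm_refresh_hz} Hz")          # 60 Hz
print(f"time slot per colour field: {field_ms:.1f} ms")  # 16.7 ms
```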


Callan D.E., Osaka University | Callan D.E., Universal Communication Research Institute | Durantin G., Institut Universitaire de France | Terzibas C., Universal Communication Research Institute
Frontiers in Systems Neuroscience | Year: 2015

Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study, we evaluated the classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open-cockpit biplane, to determine whether the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation. The advantage of this auditory task is that it does not interfere with the perceptual-motor processes involved in piloting the plane. Classification was based on identifying the presentation of a chirp sound vs. silent periods. We assessed whether independent component analysis (ICA) and Kalman filtering enhance classification performance by separating brain activity related to the auditory event from other, non-task-related brain activity and artifacts. The results of permutation testing revealed that single-trial classification of the presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance was achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs. 78.3%), Platform On (73.1% vs. 71.6%), Biplane Engine Off (81.1% vs. 77.4%), and Biplane Engine On (79.2% vs. 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts, and can achieve the good single-trial classification performance that is necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces. © 2015 Callan, Durantin and Terzibas.
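
As a rough illustration of this kind of single-trial pipeline, the Python sketch below unmixes multichannel epochs with ICA and classifies chirp vs. silence from per-component power. It is not the authors' implementation: the channel count, epoch shape, feature choice, and linear-SVM classifier are all assumptions, and random data stands in for real recordings (in practice, artifact components would be identified and removed before classification).

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical epochs: 400 trials x 16 channels x 128 samples.
# With random data, accuracy should hover around the 50% chance level.
n_trials, n_channels, n_samples = 400, 16, 128
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 2, n_trials)  # 1 = chirp present, 0 = silence

# Unmix the concatenated data into independent components, then
# project each epoch into component space.
flat = X.transpose(0, 2, 1).reshape(-1, n_channels)
ica = FastICA(n_components=n_channels, max_iter=500, random_state=0)
comps = ica.fit_transform(flat).reshape(n_trials, n_samples, -1)

# Simple single-trial features: mean power of each component.
features = (comps ** 2).mean(axis=1)

# Cross-validated linear classification, chirp vs. silence.
scores = cross_val_score(SVC(kernel="linear"), features, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```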


Sakano Y., Universal Communication Research Institute | Allison R.S., York University | Howard I.P., York University
Journal of Vision | Year: 2012

We examined whether a negative motion aftereffect occurs in the depth direction following adaptation to motion in depth based on changing disparity and/or interocular velocity differences. To dissociate these cues, we used three types of adapters: random-element stereograms that were correlated (1) temporally and binocularly, (2) temporally but not binocularly, and (3) binocularly but not temporally. Only the temporally correlated adapters contained coherent interocular velocity differences while only the binocularly correlated adapters contained coherent changing disparity. A motion aftereffect in depth occurred after adaptation to the temporally correlated stereograms while little or no aftereffect occurred following adaptation to the temporally uncorrelated stereograms. Interestingly, a monocular test pattern also showed a comparable motion aftereffect in a diagonal direction in depth after adaptation to the temporally correlated stereograms. The lack of the aftereffect following adaptation to pure changing disparity was also confirmed using spatially separated random-dot patterns. These results are consistent with the existence of a mechanism sensitive to interocular velocity differences, which is adaptable (at least in part) at binocular stages of motion-in-depth processing. We did not find any evidence for the existence of an "adaptable" mechanism specialized to see motion in depth based on changing disparity. © ARVO.
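
The logic of the three adapters can be made concrete in code. The sketch below generates toy random-element stereogram movies in which temporal correlation (elements persist across frames, carrying coherent monocular motion) and binocular correlation (the two eyes see disparity-shifted copies of the same pattern) are toggled independently; element size, frame count, and the disparity ramp are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, h, w = 60, 64, 64

def make_adapter(temporal, binocular):
    """Random-element stereogram movie; returns (left, right) stacks.

    temporal  -- elements persist across frames (coherent monocular motion)
    binocular -- right eye is a disparity-shifted copy of the left eye
    """
    left, right = [], []
    pat_l = rng.integers(0, 2, (h, w))
    pat_r = pat_l if binocular else rng.integers(0, 2, (h, w))
    for t in range(n_frames):
        if not temporal:  # refresh the random elements every frame
            pat_l = rng.integers(0, 2, (h, w))
            pat_r = pat_l if binocular else rng.integers(0, 2, (h, w))
        d = t // 10  # interocular shift grows over the movie
        left.append(np.roll(pat_l, -d, axis=1))   # opposite horizontal
        right.append(np.roll(pat_r, +d, axis=1))  # shifts: motion in depth
    return np.stack(left), np.stack(right)

# The study's three adapter types:
both_cues = make_adapter(temporal=True, binocular=True)   # CD + IOVD
iovd_only = make_adapter(temporal=True, binocular=False)  # velocity cue only
cd_only   = make_adapter(temporal=False, binocular=True)  # disparity cue only
```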


Ichihashi Y., Universal Communication Research Institute | Oi R., Universal Communication Research Institute | Senoh T., Universal Communication Research Institute | Yamamoto K., Universal Communication Research Institute | Kurita T., Universal Communication Research Institute
Optics Express | Year: 2012

We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images captured by integral photography to 8K images. One method increases the number of pixels of each elemental image; the other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using the fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs, with the FFTs performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from capture to reconstruction of 3D images using these components, and successfully used this system to reconstruct a 3D live scene at 12 frames per second. © 2012 Optical Society of America.
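
The difference between the two enlargement methods is easiest to see on the elemental-image grid itself. The numpy sketch below doubles either the pixels per elemental image (method 1) or the number of elemental images (method 2); the grid layout is hypothetical, and nearest-neighbour duplication stands in for the interpolation a real system would use.

```python
import numpy as np

# Toy IP layout: 270 x 480 elemental images of 8 x 8 pixels each,
# i.e. a 2160 x 3840 (4K) captured image.
ny, nx, eh, ew = 270, 480, 8, 8
ip = np.random.rand(ny * eh, nx * ew)

# Split the flat image into its grid of elemental images.
elems = ip.reshape(ny, eh, nx, ew).transpose(0, 2, 1, 3)  # (ny, nx, eh, ew)

# Method 1: double the pixel count of each elemental image.
m1 = elems.repeat(2, axis=2).repeat(2, axis=3)            # (ny, nx, 16, 16)
m1 = m1.transpose(0, 2, 1, 3).reshape(ny * 2 * eh, nx * 2 * ew)

# Method 2: double the number of elemental images.
m2 = elems.repeat(2, axis=0).repeat(2, axis=1)            # (2ny, 2nx, 8, 8)
m2 = m2.transpose(0, 2, 1, 3).reshape(2 * ny * eh, 2 * nx * ew)

print(m1.shape, m2.shape)  # both (4320, 7680), i.e. 8K
```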


Yamamoto K., Universal Communication Research Institute | Ichihashi Y., Universal Communication Research Institute | Senoh T., Universal Communication Research Institute | Oi R., Universal Communication Research Institute | Kurita T., Universal Communication Research Institute
Optics Express | Year: 2012

One problem in electronic holography, caused by the limited display performance of spatial light modulators (SLMs), is that reconstructed 3D objects are small. Although methods for increasing the size using multiple SLMs have been considered, they typically suffered either from parts of the 3D objects being missing as a result of the gaps between adjacent SLMs, or from loss of the vertical parallax. This paper proposes a method of resolving this problem by placing an optical system containing a lens array and other components in front of multiple SLMs. We used this optical system and 9 SLMs to construct a device equivalent to a single SLM with approximately 74,600,000 pixels, and used it to reconstruct 3D objects with both horizontal and vertical parallax and an image size of 63 mm, without losing any part of the objects. © 2012 Optical Society of America.
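
A quick consistency check of the quoted pixel count, assuming the nine tiled panels are 4K × 2K SLMs as in the companion papers above (the panel resolution is an assumption, not stated in this abstract):

```python
# 9 tiled SLMs at an assumed 3840 x 2160 pixels each.
print(9 * 3840 * 2160)  # 74,649,600, i.e. approximately 74.6 million
```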


Yoshida S., Universal Communication Research Institute
Optics Express | Year: 2016

A novel glasses-free tabletop 3D display that floats virtual objects on a flat tabletop surface is proposed. The method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. A practical implementation installs these components beneath a round table and produces horizontal parallax in the circumferential direction without any high-speed or moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any direction around the full 360 degrees, each with the appropriate perspective, as if the animated figures were actually present. © 2016 Optical Society of America.
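
A toy model of the circular projector arrangement shows how the number of projectors sets the angular pitch between adjacent perspective views; the projector count and ring radius below are illustrative, as the abstract gives neither.

```python
import math

n_projectors = 96  # hypothetical
radius = 0.4       # metres, ring mounted beneath the round table

# Evenly spaced projector positions around the ring; each one throws
# a single perspective view onto the conical screen.
positions = [
    (radius * math.cos(2 * math.pi * k / n_projectors),
     radius * math.sin(2 * math.pi * k / n_projectors))
    for k in range(n_projectors)
]

# The angular pitch between adjacent views bounds parallax smoothness.
print(f"angular pitch: {360 / n_projectors:.2f} degrees")  # 3.75
```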


Nishimura R., Universal Communication Research Institute
IEEE Transactions on Audio, Speech and Language Processing | Year: 2012

Based on the spatial masking phenomenon and Ambisonics, a watermarking technique for audio signals is proposed. Ambisonics is a well-known technique for encoding and reproducing spatial information related to sound, and the proposed method exploits this feature. A watermark is represented as a slightly rotated version of the original sound scene. In another scenario, in which Ambisonic signals are synthesized from mono or stereo signals, watermarks are embedded near the signal by adding a small copy of the host signal. This scenario presents the important advantage that reversible watermarking is possible if loudspeakers are arranged properly for playback. This advantage is attributed to the fact that first-order Ambisonics represents audio signals as mutually orthogonal four-channel signals, thereby carrying a larger amount of data than the original version. The formulation of the proposed method is explained. Listening tests demonstrate the relation between the primary parameter values and the imperceptibility of the watermark. Computer simulations conducted to assess the robustness of the proposed method against common signal-processing attacks demonstrate that the method is robust against most of the tested attacks. © 2012 IEEE.
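
The scene-rotation idea can be sketched directly on first-order (B-format) signals, where a rotation about the vertical axis mixes only the X and Y channels. The block length, rotation angle, bit mapping, and informed (original-available) detector below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def rotate_z(b, theta):
    """Rotate a B-format signal (rows W, X, Y, Z) about the vertical
    axis by theta radians; W and Z are unaffected."""
    w, x, y, z = b
    c, s = np.cos(theta), np.sin(theta)
    return np.stack([w, c * x - s * y, s * x + c * y, z])

def embed(b, bits, block=4096, theta=np.radians(0.5)):
    """Encode each bit as a +theta / -theta rotation of one block."""
    out = b.copy()
    for i, bit in enumerate(bits):
        sl = slice(i * block, (i + 1) * block)
        out[:, sl] = rotate_z(b[:, sl], theta if bit else -theta)
    return out

rng = np.random.default_rng(0)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
host = rng.standard_normal((4, 4096 * len(bits)))  # stand-in B-format
marked = embed(host, bits)

# Informed detection: for small theta the Y-channel residual is
# approximately sin(theta) * X, so its correlation with X reveals
# the rotation sign and hence the bit.
resid = (marked[2] - host[2]).reshape(len(bits), -1)
x_blk = host[1].reshape(len(bits), -1)
decoded = ((resid * x_blk).sum(axis=1) > 0).astype(int)
print(decoded.tolist())  # [1, 0, 1, 1, 0, 0, 1, 0]
```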


Yoshida S., Universal Communication Research Institute
Proceedings - IEEE Virtual Reality | Year: 2012

This paper proposes an approach to creating appropriate virtual 3-D scenes to be shared by many people on a novel glasses-free tabletop 3-D display. To maintain appropriate omnidirectional observation, the virtual objects must be situated within a certain volume. We first analyze and parameterize this volume, which is derived from the geometrical configuration of the 3-D display. We then simplify the volume to the primitive shape of a sphere. The volume visualizes the restrictions on 3-D scene creation and helps content creators concentrate on their creative work using their favorite tools. © 2012 IEEE.
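
A toy version of the simplified spherical volume, used as a containment test for virtual objects; the centre and radius values are illustrative, not the display's actual parameters.

```python
import math
from dataclasses import dataclass

@dataclass
class ViewVolume:
    centre: tuple  # (x, y, z) over the tabletop, metres
    radius: float  # radius of the simplified spherical volume

    def contains(self, point, margin=0.0):
        """True if the point (plus a safety margin) stays displayable
        for observers from every direction around the table."""
        return math.dist(point, self.centre) + margin <= self.radius

vol = ViewVolume(centre=(0.0, 0.0, 0.025), radius=0.05)
print(vol.contains((0.0, 0.0, 0.05)))    # True: inside the volume
print(vol.contains((0.06, 0.0, 0.025)))  # False: clipped for some viewers
```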
