Universal Communication Research Institute

Japan


Sakano Y.,Universal Communication Research Institute | Allison R.S.,York University | Howard I.P.,York University
Journal of Vision | Year: 2012

We examined whether a negative motion aftereffect occurs in the depth direction following adaptation to motion in depth based on changing disparity and/or interocular velocity differences. To dissociate these cues, we used three types of adapters: random-element stereograms that were correlated (1) temporally and binocularly, (2) temporally but not binocularly, and (3) binocularly but not temporally. Only the temporally correlated adapters contained coherent interocular velocity differences while only the binocularly correlated adapters contained coherent changing disparity. A motion aftereffect in depth occurred after adaptation to the temporally correlated stereograms while little or no aftereffect occurred following adaptation to the temporally uncorrelated stereograms. Interestingly, a monocular test pattern also showed a comparable motion aftereffect in a diagonal direction in depth after adaptation to the temporally correlated stereograms. The lack of the aftereffect following adaptation to pure changing disparity was also confirmed using spatially separated random-dot patterns. These results are consistent with the existence of a mechanism sensitive to interocular velocity differences, which is adaptable (at least in part) at binocular stages of motion-in-depth processing. We did not find any evidence for the existence of an "adaptable" mechanism specialized to see motion in depth based on changing disparity. © ARVO.


Ichihashi Y.,Universal Communication Research Institute | Oi R.,Universal Communication Research Institute | Senoh T.,Universal Communication Research Institute | Yamamoto K.,Universal Communication Research Institute | Kurita T.,Universal Communication Research Institute
Optics Express | Year: 2012

We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels to reconstruct the holograms. We investigate two methods for enlarging the 4K IP images to 8K: one increases the number of pixels in each elemental image, and the other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for enlarging the IP images and generating holograms from them using the fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs; the FFT is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system that performs all processing from capture to reconstruction of 3D images using these components, and successfully used it to reconstruct a 3D live scene at 12 frames per second. © 2012 Optical Society of America.
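The first enlargement method the abstract mentions, increasing the pixel count of each elemental image, amounts to upsampling every elemental image in place so the 4K frame becomes 8K. A minimal sketch with NumPy, assuming nearest-neighbour upsampling (the abstract does not specify the interpolation kernel):

```python
import numpy as np

def enlarge_by_pixel_upsampling(ip_image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Double the pixel count of every elemental image by repeating each
    pixel into a factor x factor block (nearest-neighbour upsampling).
    Illustrative only; the paper's GPU implementation is not shown."""
    return np.kron(ip_image, np.ones((factor, factor), dtype=ip_image.dtype))

# A mock 4K-sized IP frame becomes 8K-sized.
frame_4k = np.zeros((2160, 3840), dtype=np.uint8)
frame_8k = enlarge_by_pixel_upsampling(frame_4k)
print(frame_8k.shape)  # (4320, 7680)
```

The second method, increasing the number of elemental images, requires synthesizing new viewpoints and cannot be reduced to simple resampling, which is one reason the two methods are compared.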


Yamamoto K.,Universal Communication Research Institute | Ichihashi Y.,Universal Communication Research Institute | Senoh T.,Universal Communication Research Institute | Oi R.,Universal Communication Research Institute | Kurita T.,Universal Communication Research Institute
Optics Express | Year: 2012

One problem in electronic holography, caused by the display performance of spatial light modulators (SLMs), is that the reconstructed 3D objects are small. Although methods for increasing the size using multiple SLMs have been considered, they typically suffered either from parts of the 3D objects being lost in the gaps between adjacent SLMs or from the loss of vertical parallax. This paper proposes a method of resolving this problem by placing an optical system containing a lens array and other components in front of multiple SLMs. Using this optical system and 9 SLMs, we constructed a device equivalent to an SLM with approximately 74,600,000 pixels and used it to reconstruct 3D objects with both horizontal and vertical parallax at an image size of 63 mm, without losing any part of the objects. © 2012 Optical Society of America.
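The quoted pixel count is consistent with tiling nine 4K panels. A quick arithmetic check, assuming each SLM has a 3,840 × 2,160 resolution (an assumption; the abstract only gives the total):

```python
# Per-panel pixel count, assuming 4K (3,840 x 2,160) SLMs.
per_slm = 3840 * 2160   # 8,294,400 pixels per panel
total = 9 * per_slm     # 74,649,600 pixels for the 9-SLM tiled device
print(total)            # 74649600, i.e. "approximately 74,600,000"
```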


Yoshida S.,Universal Communication Research Institute
Optics Express | Year: 2016

A novel glasses-free tabletop 3D display that floats virtual objects on a flat tabletop surface is proposed. The method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. In a practical implementation, these are installed beneath a round table, producing horizontal parallax in the circumferential direction without high-speed or moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle over a full 360 degrees, each with the appropriate perspective, as if the animated figures were actually present. © 2016 Optical Society of America.


Nishimura R.,Universal Communication Research Institute
IEEE Transactions on Audio, Speech and Language Processing | Year: 2012

Based on the spatial masking phenomenon and Ambisonics, a watermarking technique for audio signals is proposed. Ambisonics is a well-known technique for encoding and reproducing spatial information related to sound, and the proposed method exploits this feature. A watermark is represented as a slightly rotated version of the original sound scene. In another scenario, in which Ambisonic signals are synthesized from mono or stereo signals, watermarks are embedded near the signal by adding a small copy of the host signal. This scenario has the important advantage that reversible watermarking is possible if loudspeakers are arranged properly for playback. The advantage stems from the fact that first-order Ambisonics represents audio signals as four mutually orthogonal channels, thereby carrying more data than the original version. The formulation of the proposed method is explained. Listening tests demonstrate the relations between primary parameter values and the imperceptibility of the watermark. Computer simulations conducted to assess the robustness of the proposed method against common signal-processing attacks demonstrate that the method is robust against most of the tested attacks. © 2012 IEEE.
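Rotating a first-order Ambisonics (B-format) scene about the vertical axis only mixes the X and Y channels; W (omnidirectional) and Z (vertical) are unchanged. A minimal sketch of the "slightly rotated scene" watermark idea, with a hypothetical 1-degree angle (the paper's actual embedding parameters are not given here):

```python
import numpy as np

def rotate_bformat(w, x, y, z, theta_rad):
    """Rotate a first-order Ambisonics (B-format) sound scene about the
    vertical axis by theta_rad. A very small theta yields a nearly
    imperceptible rotation that can carry the watermark."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    x_r = c * x - s * y
    y_r = s * x + c * y
    return w, x_r, y_r, z  # W and Z are unaffected by a yaw rotation

# Example: a source encoded on the +X axis, rotated by 1 degree.
n = 4
w = np.full(n, 1.0 / np.sqrt(2.0))
x, y, z = np.ones(n), np.zeros(n), np.zeros(n)
w2, x2, y2, z2 = rotate_bformat(w, x, y, z, np.deg2rad(1.0))
```

Because the rotation is orthogonal, it preserves signal energy and is exactly invertible by rotating back, which is the property that makes reversible watermarking possible.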


Yoshida S.,Universal Communication Research Institute
Proceedings - IEEE Virtual Reality | Year: 2012

This paper proposes an approach to creating appropriate virtual 3-D scenes to be shared by many people on a novel glasses-free tabletop 3-D display. To maintain appropriate omnidirectional observation, the virtual objects must be situated within a certain volume. We first analyze and parameterize this volume, derived from the geometrical configuration of the 3-D display, and then simplify it to the primitive shape of a sphere. The volume visualizes the restrictions on 3-D scene creation and helps content creators concentrate on their creative work using their favorite tools. © 2012 IEEE.
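Once the viewing volume is simplified to a sphere, validating a scene reduces to a point-in-sphere test that any authoring tool can apply. A minimal sketch, where the centre and radius are hypothetical placeholders rather than values from the paper:

```python
import math

def inside_display_volume(p, center=(0.0, 0.0, 0.025), radius=0.025):
    """Return True if point p = (x, y, z) in metres lies inside the
    simplified spherical display volume. Centre/radius here are
    illustrative assumptions, not the paper's parameters."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= radius

print(inside_display_volume((0.0, 0.0, 0.03)))  # True: near the centre
print(inside_display_volume((0.1, 0.0, 0.0)))   # False: outside the sphere
```

A content creator would run such a check over an object's bounding points to confirm it stays viewable from every direction around the table.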


Kim K.-S.,Universal Communication Research Institute | Lee R.,Universal Communication Research Institute | Zettsu K.,Universal Communication Research Institute
GIS: Proceedings of the ACM International Symposium on Advances in Geographic Information Systems | Year: 2011

Coupled with geographic location, microblogging messages have become important information resources for sharing observations and opinions about the real world via social media. As a result, there is growing interest in comprehending situations related to natural and social events using those messages. Recently, certain works have focused on showing trending topics that represent snapshots of situations with spatial and temporal context, based on geo-tagged microblogging messages. They, however, have difficulty tracking and comparing topic changes and movements in a spatiotemporal domain because the geo-spatial and temporal components are handled separately. This demonstration introduces mTrend, which constructs and visualizes spatiotemporal trends of topics, termed "topic movements", from geo-tagged Tweets. In particular, its interactive visual mining tool allows users to intuitively understand the differences between topic movements over space and time. In the demonstration, we present comparisons of topic movements using a few keywords related to the Great East Japan Earthquake. © 2011 Authors.
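The core of such a "topic movement" is a joint spatiotemporal aggregation: messages are binned by time window and grid cell together, so a topic's counts can be followed across both dimensions at once. A minimal sketch of that aggregation, with hypothetical bin sizes (the abstract does not describe mTrend's actual data model):

```python
from collections import Counter, defaultdict

def topic_movements(messages, cell_deg=1.0, window_h=6):
    """Bin geo-tagged messages into (time-window, lat-cell, lon-cell)
    keys and count keyword mentions per bin. messages is a list of
    (hours_since_start, lat, lon, keyword) tuples; bin sizes are
    illustrative assumptions."""
    bins = defaultdict(Counter)
    for ts_h, lat, lon, keyword in messages:
        key = (ts_h // window_h, int(lat // cell_deg), int(lon // cell_deg))
        bins[key][keyword] += 1
    return bins

msgs = [
    (0, 38.3, 142.4, "earthquake"),
    (1, 38.4, 142.3, "earthquake"),   # same window and cell as above
    (7, 35.7, 139.7, "tsunami"),      # later window, different cell
]
trends = topic_movements(msgs)
print(len(trends))  # 2 distinct spatiotemporal bins
```

Comparing a keyword's counts across adjacent bins is then what reveals how a topic moves over space and time.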


Yoshida S.,Universal Communication Research Institute
14th Workshop on Information Optics, WIO 2015 | Year: 2015

Tables are widely used for various activities; on their surfaces we share and exchange media such as documents and photos. fVisiOn, our proposed novel glasses-free tabletop 3D display, is designed as a communication tool that augments ordinary tabletop activities by naturally providing virtual 3D media alongside actual media. This paper introduces the basic idea of our proposed method and recent improvements. © 2015 IEEE.


Wada A.,Universal Communication Research Institute | Wada A.,Osaka University | Sakano Y.,Universal Communication Research Institute | Sakano Y.,Osaka University | And 2 more authors.
NeuroImage | Year: 2014

Glossiness is the visual appearance of an object's surface as defined by its surface reflectance properties. Despite its ecological importance, little is known about the neural substrates underlying its perception. In this study, we performed the first human neuroimaging experiments that directly investigated where the processing of glossiness resides in the visual cortex. First, we investigated the cortical regions that were more activated by observing high glossiness compared with low glossiness, where the effects of simple luminance and luminance contrast were dissociated by controlling the illumination conditions (Experiment 1). As cortical regions that may be related to the processing of glossiness, V2, V3, hV4, VO-1, VO-2, collateral sulcus (CoS), LO-1, and V3A/B were identified, which also showed significant correlation with the perceived level of glossiness. This result is consistent with the recent monkey studies that identified selective neural response to glossiness in the ventral visual pathway, except for V3A/B in the dorsal visual pathway, whose involvement in the processing of glossiness could be specific to the human visual system. Second, we investigated the cortical regions that were modulated by selective attention to glossiness (Experiment 2). The visual areas that showed higher activation to attention to glossiness than that to either form or orientation were identified as right hV4, right VO-2, and right V3A/B, which were commonly identified in Experiment 1. The results indicate that these commonly identified visual areas in the human visual cortex may play important roles in glossiness perception. © 2014.


Callan D.E.,Osaka University | Callan D.E.,Universal Communication Research Institute | Naito E.,Osaka University
Cognitive and Behavioral Neurology | Year: 2014

This commentary builds on a companion article in which Kim et al compare brain activation in elite, expert, and novice archers during a simulated target aiming task (Kim et al. 2014. Cogn Behav Neurol. 27:173-182). With the archery study as our starting point, we address 4 neural processes that may be responsible in general for elite athletes' superior performance over experts and novices: neural efficiency, cortical expansion, specialized processes, and internal models. In Kim et al's study, the elite archers' brains showed more activity in the supplementary motor area and the cerebellum than those of the novices and experts, and showed minimal widespread activity, especially in frontal areas involved with executive control. Kim et al's results are consistent with the idea of specialized neural processes that help coordinate motor planning and control. As athletes become more skilled, these processes may mediate the reduction in widespread activity in regions mapping executive control, and may produce a shift toward more automated processing. Kim et al's finding that activity in the cerebellum rose with increasing skill is consistent both with expansion of the finger representational area in the cerebellum and with internal models that simulate how archers manipulate the bow and arrow when aiming. Kim et al prepare the way for testing of neuromodulation techniques to improve athletic performance, refine highly technical job skills, and rehabilitate patients. © 2014 by Lippincott Williams and Wilkins.
