Sakano Y., Universal Communication Research Institute | Allison R.S., York University | Howard I.P., York University
Journal of Vision | Year: 2012

We examined whether a negative motion aftereffect occurs in the depth direction following adaptation to motion in depth based on changing disparity and/or interocular velocity differences. To dissociate these cues, we used three types of adapters: random-element stereograms that were correlated (1) temporally and binocularly, (2) temporally but not binocularly, and (3) binocularly but not temporally. Only the temporally correlated adapters contained coherent interocular velocity differences while only the binocularly correlated adapters contained coherent changing disparity. A motion aftereffect in depth occurred after adaptation to the temporally correlated stereograms while little or no aftereffect occurred following adaptation to the temporally uncorrelated stereograms. Interestingly, a monocular test pattern also showed a comparable motion aftereffect in a diagonal direction in depth after adaptation to the temporally correlated stereograms. The lack of the aftereffect following adaptation to pure changing disparity was also confirmed using spatially separated random-dot patterns. These results are consistent with the existence of a mechanism sensitive to interocular velocity differences, which is adaptable (at least in part) at binocular stages of motion-in-depth processing. We did not find any evidence for the existence of an "adaptable" mechanism specialized to see motion in depth based on changing disparity. © ARVO.

Yoshida S.,Universal Communication Research Institute
Optics Express | Year: 2016

A novel glasses-free tabletop 3D display that floats virtual objects on a flat tabletop surface is proposed. The method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. In a practical implementation, these components are installed beneath a round table and produce horizontal parallax in the circumferential direction without requiring high-speed or moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle over 360 degrees with appropriate perspectives, as if the animated figures were actually present. © 2016 Optical Society of America.
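The circular projector arrangement described above can be sketched as placing projectors at equal angles around a ring. This is a minimal illustrative sketch only: the projector count and ring radius below are invented placeholders, since the abstract does not state the prototype's actual figures.

```python
import math

def projector_positions(n=24, radius=0.4):
    """Place n projectors evenly around a circle of the given radius
    (meters). Both values are illustrative; the abstract does not state
    the prototype's actual projector count or ring radius."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]

# Adjacent projectors are 360/n degrees apart, which sets the angular
# sampling of the horizontal parallax around the table.
positions = projector_positions()
print(len(positions), "projectors,", 360 / len(positions), "deg apart")
```

Denser projector spacing (larger n) would give finer circumferential parallax at the cost of more hardware, which is the basic trade-off such an arrangement implies.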

Yoshida S.,Universal Communication Research Institute
Proceedings - IEEE Virtual Reality | Year: 2012

This paper proposes an approach to creating appropriate virtual 3-D scenes to be shared by many people on a novel glasses-free table-top 3-D display. To maintain appropriate omni-directional observation, the virtual objects must be situated within a certain volume. We first analyze and parameterize this volume, derived from the geometrical configurations of the 3-D display. We then simplify the volume to the primitive shape of a sphere. The volume makes the restrictions on 3-D scene creation visible and helps content creators concentrate on their creative work using their favorite tools. © 2012 IEEE.
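Once the viewable volume is simplified to a sphere, checking whether a virtual object stays inside it reduces to a point-in-sphere test. The sketch below assumes hypothetical center and radius values; the paper derives the actual parameters from the display's geometry.

```python
import math

def inside_display_volume(point, center=(0.0, 0.0, 2.5), radius=2.5):
    """Containment test against the simplified spherical display volume.
    The center and radius are hypothetical placeholders (in cm) for the
    values derived from the display's geometrical configuration."""
    dx, dy, dz = (p - c for p, c in zip(point, center))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= radius

# An object at the sphere's center is viewable from all directions;
# one far from the table's center falls outside the valid volume.
print(inside_display_volume((0.0, 0.0, 2.5)))   # True
print(inside_display_volume((10.0, 0.0, 2.5)))  # False
```

A content tool could run such a test on every vertex of a scene to warn creators when geometry leaves the omni-directionally observable region.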

Murata M., Tottori University | Utiyama M., Universal Communication Research Institute
ICIC Express Letters, Part B: Applications | Year: 2012

We propose a new method that uses definitions for segmenting compound words and extracting word constituents and the relationships between them. Using this method, we extracted approximately 8,000 pairs of word constituents and their probable relationships, with an accuracy between 97% and 99%. These data are useful for examining word constituents; to demonstrate their usefulness, we conducted a study using semantic features. Because the relationships we obtained were described by linguistic expressions, similar relationships were treated as distinct. To solve this problem, we grouped similar relationships using clustering algorithms. © 2012 ICIC International.
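The final clustering step could, in spirit, look like grouping relationship expressions by surface similarity. This is only an illustrative sketch: the example expressions, the greedy grouping strategy, and the similarity threshold are all invented here, and the paper's actual clustering algorithms may differ.

```python
from difflib import SequenceMatcher

def cluster_relations(relations, threshold=0.6):
    """Greedily group similar relationship expressions.
    Each expression joins the first cluster whose representative
    (its first member) it resembles above the threshold;
    otherwise it starts a new cluster."""
    clusters = []
    for rel in relations:
        for cluster in clusters:
            if SequenceMatcher(None, rel, cluster[0]).ratio() >= threshold:
                cluster.append(rel)
                break
        else:
            clusters.append([rel])
    return clusters

# Hypothetical relationship labels between word constituents
rels = ["is a part of", "is part of", "is a kind of", "is a type of"]
print(cluster_relations(rels))
```

With such grouping, relationships that differ only in wording would fall into the same cluster and could then be treated as a single relation type.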

Callan D.E., Osaka University | Callan D.E., Universal Communication Research Institute | Naito E., Osaka University
Cognitive and Behavioral Neurology | Year: 2014

This commentary builds on a companion article in which Kim et al. compare brain activation in elite, expert, and novice archers during a simulated target aiming task (Kim et al. 2014. Cogn Behav Neurol. 27:173-182). With the archery study as our starting point, we address 4 neural processes that may be responsible in general for elite athletes' superior performance over experts and novices: neural efficiency, cortical expansion, specialized processes, and internal models. In Kim et al.'s study, the elite archers' brains showed more activity in the supplementary motor area and the cerebellum than those of the novices and experts, and showed minimal widespread activity, especially in frontal areas involved with executive control. Kim et al.'s results are consistent with the idea of specialized neural processes that help coordinate motor planning and control. As athletes become more skilled, these processes may mediate the reduction in widespread activity in regions mapping executive control, and may produce a shift toward more automated processing. Kim et al.'s finding that activity in the cerebellum rose with increasing skill is consistent both with expansion of the finger representational area in the cerebellum and with internal models that simulate how archers manipulate the bow and arrow when aiming. Kim et al. prepare the way for testing of neuromodulation techniques to improve athletic performance, refine highly technical job skills, and rehabilitate patients. © 2014 by Lippincott Williams and Wilkins.
