de Lussanet M.H.E., University of Münster | de Lussanet M.H.E., Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience | Behrendt F., University of Münster | Puta C., Friedrich Schiller University of Jena | And 6 more authors.
Human Movement Science | Year: 2013

Visually presented biological motion stimuli activate brain regions that are also related to musculo-skeletal pain. We therefore hypothesized that chronic pain impairs the perception of visually presented actions that involve body parts that hurt. In the first experiment, chronic low back pain (CLBP) patients and healthy controls judged the lifted weight from point-light biological motion displays. An actor either lifted an invisible container (5, 10, or 15 kg) from the floor, or lifted it and moved it from the right to the left. The latter involved twisting of the lower back and would be very painful for CLBP patients. All participants recognized the displayed actions, but CLBP patients were impaired in judging the differences between the handled weights, especially for the trunk rotation. The second experiment involved discrimination between forward and backward walking. Here the patients performed just as well as the controls, showing that the main result of the first experiment was indeed specific to the sensory aspects of the task, and not due to general impairments or attentional deficits. The results thus indicate that the judgment of sensorimotor aspects of a visually displayed movement is specifically affected by chronic low back pain. © 2013 Elsevier B.V.

Hamker F.H., University of Münster | Hamker F.H., TU Chemnitz | Hamker F.H., Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience | Zirnsak M., University of Münster | And 5 more authors.
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2011

Perceptual phenomena that occur around the time of a saccade, such as peri-saccadic mislocalization or saccadic suppression of displacement, have often been linked to mechanisms of spatial stability. These phenomena are usually regarded as errors in processes of trans-saccadic spatial transformations and they provide important tools to study these processes. However, a true understanding of the underlying brain processes that participate in the preparation for a saccade and in the transfer of information across it requires a closer, more quantitative approach that links different perceptual phenomena with each other and with the functional requirements of ensuring spatial stability. We review a number of computational models of peri-saccadic spatial perception that provide steps in that direction. Although most models are concerned with only specific phenomena, some generalization and interconnection between them can be obtained from a comparison. Our analysis shows how different perceptual effects can coherently be brought together and linked back to neuronal mechanisms on the way to explaining vision across saccades. This journal is © 2011 The Royal Society.

Zirnsak M., University of Münster | Lappe M., University of Münster | Lappe M., Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience | Hamker F.H., University of Münster | And 2 more authors.
Vision Research | Year: 2010

At the time of an impending saccade, receptive fields (RFs) undergo dynamic changes, that is, their spatial profile is altered. This phenomenon has been observed in several monkey visual areas. Although the link to eye movements is obvious, neither the exact pattern of these changes nor their function is fully clear. Several RF shifts have been interpreted in terms of predictive remapping mediating visual stability. In particular, even prior to saccade onset some cells become responsive to stimuli presented in their future, post-saccadic RF. In visual area V4, however, the overall effect of RF dynamics consists of a shrinkage and shift of RFs towards the saccade target. These observations have been linked to pre-saccadically enhanced processing of the future fixation. In order to better understand these seemingly different outcomes, we analyzed the RF shifts predicted by a recently proposed computational model of peri-saccadic perception (Hamker, Zirnsak, Calow, & Lappe, 2008). This model unifies peri-saccadic compression, pre-saccadic attention shifts, and peri-saccadic receptive field dynamics in a common framework of oculomotor reentry signals in extrastriate visual cortical maps. According to the simulations that we present in the current paper, a spatially selective oculomotor feedback signal leads to RF dynamics that are consistent both with the observations made in studies of predictive remapping and with the shifts towards the saccade target. Thus, the seemingly distinct experimental observations could be grounded in the same neural mechanism, producing different RF dynamics depending on the location of the RF in visual space. © 2010 Elsevier Ltd.
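The core idea, a spatially selective gain signal centered on the saccade target that reweights visual responses and thereby shifts the measured RF center, can be illustrated with a toy one-dimensional sketch. This is not the actual Hamker et al. (2008) model; the Gaussian profiles and all parameter values below are invented purely for illustration:

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian profile."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def measured_rf_center(rf_center, target, gain_strength=1.5,
                       rf_sigma=5.0, gain_sigma=8.0):
    """Multiply a Gaussian RF profile by an oculomotor gain field
    centered on the saccade target, and return the center of mass
    of the modulated response to probes across visual space."""
    probes = np.linspace(-40, 40, 801)  # probe positions (deg)
    response = gaussian(probes, rf_center, rf_sigma)
    gain = 1.0 + gain_strength * gaussian(probes, target, gain_sigma)
    modulated = response * gain
    return np.sum(probes * modulated) / np.sum(modulated)

# RFs on either side of the target shift towards it, as in V4
target = 10.0
for rf in (-10.0, 0.0, 20.0):
    print(f"RF at {rf:+.0f} deg -> measured center "
          f"{measured_rf_center(rf, target):+.2f} deg")
```

Regardless of whether the RF lies before or beyond the target, the gain field pulls its measured center toward the target, matching the qualitative V4 pattern described above.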

Bostrom K.J., University of Münster | Bostrom K.J., Center for Nonlinear Science | Wagner H., University of Münster | Wagner H., Center for Nonlinear Science | And 5 more authors.
Human Movement Science | Year: 2013

Using a recent recurrent network architecture based on the reservoir computing approach, we propose and numerically simulate a model of a flexible motor memory that stores elementary movement patterns in the synaptic weights of a neural network, so that the patterns can be retrieved at any time by simple static commands. The resulting motor memory is flexible in that it is capable of continuously modulating the stored patterns. The modulation consists of an approximately linear inter- and extrapolation, generating a large space of possible movements that have not been learned before. A recurrent network of a thousand neurons is trained in a manner that corresponds to a realistic exercising scenario, with experimentally measured muscular activations and with kinetic data representing proprioceptive feedback. The network is "self-active" in that it maintains a recurrent flow of activation even in the absence of input, a feature that resembles the "resting-state activity" found in the human and animal brain. The model involves the concept of "neural outsourcing", which amounts to the permanent shifting of computational load from higher to lower-level neural structures, and which might help to explain why humans are able to execute learned skills in a fluent and flexible manner without attending to the details of the movement. © 2013 Elsevier B.V.

Hirschfeld G., University of Münster | Hirschfeld G., Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience | Zwitserlood P., University of Münster | Zwitserlood P., Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience
Brain Research | Year: 2011

Effects of language comprehension on visual processing have been extensively studied within the embodied-language framework. However, it is unknown whether these effects are caused by passive repetition suppression in visual processing areas, or depend on active feedback, based on partial input, from prefrontal regions. Based on a model of top-down feedback during visual recognition, we predicted diminished effects when low spatial frequencies were removed from targets. We compared low-pass and high-pass filtered pictures in a sentence-picture verification task. Target pictures matched or mismatched the implied shape of an object mentioned in a preceding sentence, or were unrelated to the sentences. As predicted, there was a large match advantage when the targets contained low spatial frequencies, but no effect of linguistic context when these frequencies were filtered out. The proposed top-down feedback model is superior to repetition suppression in explaining the current results, as well as earlier results about the lateralization of this effect and peculiar color match effects. We discuss these findings in the context of recent general proposals of prediction and top-down feedback. © 2010 Elsevier B.V. All rights reserved.
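The low-pass/high-pass manipulation of target pictures can be sketched with a standard Fourier-domain filter. The toy image and the cutoff value below are illustrative assumptions, not the study's actual stimuli or filter parameters:

```python
import numpy as np

def spatial_filter(img, cutoff, keep="low"):
    """Keep only spatial frequencies below (low-pass) or above
    (high-pass) `cutoff`, given in cycles per image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * h    # cycles per image, vertical
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * w    # cycles per image, horizontal
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff if keep == "low" else radius > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Toy "picture": a coarse 2-cycle gradient plus fine 40-cycle stripes
y, x = np.mgrid[0:128, 0:128]
img = np.sin(2 * np.pi * 2 * x / 128) + np.sin(2 * np.pi * 40 * x / 128)

low = spatial_filter(img, cutoff=8, keep="low")    # keeps the coarse structure
high = spatial_filter(img, cutoff=8, keep="high")  # keeps the fine stripes
```

Because the two masks are complementary, the low-pass and high-pass versions sum back to the original image; the low-pass picture preserves the coarse shape information that, per the study's prediction, carries the top-down match effect.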
