CNRS Perceptual Systems Laboratory

Paris, France


Otto T.U., University of St. Andrews | Mamassian P., CNRS Perceptual Systems Laboratory
Multisensory Research | Year: 2017

The use of separate multisensory signals is often beneficial. A prominent example is the speed-up of responses to two redundant signals relative to the components, which is known as the redundant signals effect (RSE). A convenient explanation for the effect is statistical facilitation, which is inherent in the basic architecture of race models (Raab, 1962, Trans. N. Y. Acad. Sci. 24, 574-590). However, this class of models has been largely rejected in multisensory research, which we think results from an ambiguity in definitions and misinterpretations of the influential race model test (Miller, 1982, Cogn. Psychol. 14, 247-279). To resolve these issues, we here discuss four main items. First, we clarify definitions and ask how successful models of perceptual decision making can be extended from uni- to multisensory decisions. Second, we review the race model test and emphasize elements leading to confusion with its interpretation. Third, we introduce a new approach to study the RSE. As a major change of direction, our working hypothesis is that the basic race model architecture is correct even if the race model test seems to suggest otherwise. Based on this approach, we argue that understanding the variability of responses is the key to understand the RSE. Finally, we highlight the critical role of model testability to advance research on multisensory decisions. Despite being largely rejected, it should be recognized that race models, as part of a broader class of parallel decision models, demonstrate, in fact, a convincing explanatory power in a range of experimental paradigms. To improve research consistency in the future, we conclude with a short checklist for RSE studies. © 2017 T. U. Otto and P. Mamassian.
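The statistical facilitation at the heart of the race-model account is easy to simulate: if each signal triggers its own detection process and the response is issued by whichever finishes first, redundant-signal responses speed up with no cross-modal interaction at all. A minimal sketch, assuming hypothetical Gaussian RT distributions (not the authors' fitted parameters):

```python
import random

random.seed(1)

def simulate_rts(mu, sigma, n):
    # Hypothetical unisensory RT distribution (seconds), truncated at 50 ms
    return [max(0.05, random.gauss(mu, sigma)) for _ in range(n)]

n = 10000
rt_a = simulate_rts(0.45, 0.08, n)  # e.g. auditory-alone trials
rt_v = simulate_rts(0.50, 0.09, n)  # e.g. visual-alone trials

# Race architecture: on redundant trials, the faster channel wins
rt_av = [min(a, v) for a, v in zip(rt_a, rt_v)]

mean = lambda xs: sum(xs) / len(xs)
# Redundancy gain produced by statistical facilitation alone
rse = mean(rt_a) - mean(rt_av)
```

The `min` operation is the entire mechanism: the speed-up falls out of the variability of the component distributions, which is why the abstract argues that understanding response variability is the key to understanding the RSE.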


Neri P., University of Aberdeen | Neri P., CNRS Perceptual Systems Laboratory
Journal of Neuroscience | Year: 2014

In the early stages of image analysis, visual cortex represents scenes as spatially organized maps of locally defined features (e.g., edge orientation). As image reconstruction unfolds and features are assembled into larger constructs, cortex attempts to recover semantic content for object recognition. It is conceivable that higher level representations may feed back onto early processes and retune their properties to align with the semantic structure projected by the scene; however, there is no clear evidence to either support or discard the applicability of this notion to the human visual system. Obtaining such evidence is challenging because low and higher level processes must be probed simultaneously within the same experimental paradigm. We developed a methodology that targets both levels of analysis by embedding low-level probes within natural scenes. Human observers were required to discriminate probe orientation while semantic interpretation of the scene was selectively disrupted via stimulus inversion or reversed playback. We characterized the orientation tuning properties of the perceptual process supporting probe discrimination; tuning was substantially reshaped by semantic manipulation, demonstrating that low-level feature detectors operate under partial control from higher level modules. The manner in which such control was exerted may be interpreted as a top-down predictive strategy whereby global semantic content guides and refines local image reconstruction. We exploit the novel information gained from data to develop mechanistic accounts of unexplained phenomena such as the classic face inversion effect. © 2014 the authors.


Helbling S., Goethe University Frankfurt | Teki S., University College London | Teki S., CNRS Perceptual Systems Laboratory | Callaghan M.F., University College London | And 6 more authors.
NeuroImage | Year: 2015

We present an approach for combining high resolution MRI-based myelin mapping with functional information from electroencephalography (EEG) or magnetoencephalography (MEG). The main contribution to the primary currents detectable with EEG and MEG comes from ionic currents in the apical dendrites of cortical pyramidal cells, aligned perpendicularly to the local cortical surface. We provide evidence from an in-vivo experiment that the variation in MRI-based myeloarchitecture measures across the cortex predicts the variation of the current density over individuals and thus is of functional relevance. Equivalent current dipole locations and moments due to pitch onset evoked response fields (ERFs) were estimated by means of a variational Bayesian algorithm. The myeloarchitecture was estimated indirectly from individual high resolution quantitative multi-parameter maps (MPMs) acquired at 800 μm isotropic resolution. Myelin estimates across cortical areas correlated positively with dipole magnitude. This correlation was spatially specific: regions of interest in the auditory cortex provided significantly better models than those covering whole hemispheres. Based on the MPM data we identified the auditory cortical area TE1.2 as the most likely origin of the pitch ERFs measured by MEG. We can now proceed to exploit the higher spatial resolution of quantitative MPMs to identify the cortical origin of M/EEG signals, inform M/EEG source reconstruction and explore structure-function relationships at a fine structural level in the living human brain. © 2014.


Paraouty N., CNRS Perceptual Systems Laboratory | Lorenzi C., CNRS Perceptual Systems Laboratory
Hearing Research | Year: 2017

This study used a correlational approach to clarify the mechanisms involved in modulation coding. Amplitude-modulation (AM) and frequency-modulation (FM) detection thresholds (AMDTs and FMDTs, respectively) were assessed for 70 normal-hearing listeners. In order to increase between-listeners variability in peripheral coding, participants with a wide range of age (20–70 years) were included. AMDTs and FMDTs were measured at a 5-Hz rate, using a 500-Hz sinusoidal carrier. FMDTs were also measured in the presence of an interfering AM to discourage the use of temporal-envelope cues. The results showed that AMDTs were significantly correlated with FMDTs, but not with FMDTs measured with interfering AM. FMDTs with and without interfering AM were significantly correlated with each other. This pattern of correlation proved to be robust, providing additional evidence that for low carrier frequencies, (i) low-rate AM and FM detection is based on a common code using temporal-envelope cues and (ii) low-rate FM detection is based on an additional code using cues distinct from temporal-envelope. The analyses also showed that age was correlated with FMDTs only. However, no significant difference was found when comparing the various correlations with age. Hence, the effects of age on modulation sensitivity remain unclear. © 2016 Elsevier B.V.
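The stimuli are simple to reconstruct: a 500-Hz sinusoidal carrier modulated at a 5-Hz rate, either in amplitude or in frequency. A minimal sketch (the modulation depth `m` and frequency excursion `df` are hypothetical values for illustration, not the levels used in the study):

```python
import math

fc, fm = 500.0, 5.0  # carrier frequency and modulation rate from the study (Hz)
m = 0.5              # AM depth (hypothetical)
df = 10.0            # FM frequency excursion in Hz (hypothetical)
sr = 44100           # sampling rate (Hz)

def am_sample(t):
    # Sinusoidal amplitude modulation: envelope (1 + m*sin) times the carrier
    return (1 + m * math.sin(2 * math.pi * fm * t)) * math.sin(2 * math.pi * fc * t)

def fm_sample(t):
    # Frequency modulation: instantaneous frequency fc + df*sin(2*pi*fm*t),
    # obtained by integrating it into the phase
    phase = 2 * math.pi * fc * t - (df / fm) * math.cos(2 * math.pi * fm * t)
    return math.sin(phase)
```

Note that FM perturbs only the phase trajectory, so the waveform keeps unit amplitude, whereas AM imposes a temporal envelope; roughly speaking, the "interfering AM" condition superimposes such an envelope on the FM stimulus so that listeners cannot rely on temporal-envelope cues.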


De Cheveigne A., CNRS Perceptual Systems Laboratory | De Cheveigne A., Mohammed V University | De Cheveigne A., University College London | Parra L.C., City College of New York
NeuroImage | Year: 2014

We review a simple yet versatile approach for the analysis of multichannel data, focusing in particular on brain signals measured with EEG, MEG, ECoG, LFP or optical imaging. Sensors are combined linearly with weights that are chosen to provide optimal signal-to-noise ratio. Signal and noise can be variably defined to match the specific need, e.g. reproducibility over trials, frequency content, or differences between stimulus conditions. We demonstrate how the method can be used to remove power line or cardiac interference, enhance stimulus-evoked or stimulus-induced activity, isolate narrow-band cortical activity, and so on. The approach involves decorrelating both the original and filtered data by joint diagonalization of their covariance matrices. We trace its origins; offer an easy-to-understand explanation; review a range of applications; and chart failure scenarios that might lead to misleading results, in particular due to overfitting. In addition to its flexibility and effectiveness, a major appeal of the method is that it is easy to understand. © 2014 Elsevier Inc.
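The core computation can be sketched in a few lines: define a "signal" covariance (here, the covariance of the trial average, i.e. the reproducible part of the data) and a total covariance, then find the sensor combination maximizing their ratio via a generalized eigenproblem. A toy two-sensor sketch on synthetic data (the actual method handles many channels and other signal definitions; for the 2 × 2 case the eigenproblem can be solved by hand):

```python
import math
import random

random.seed(0)

# Synthetic data: 2 sensors record the same evoked waveform mixed with
# sensor-specific noise, over repeated trials (a toy stand-in for MEG/EEG)
n_trials, n_samples = 200, 100
evoked = [math.sin(2 * math.pi * 5 * t / n_samples) for t in range(n_samples)]
mix = [1.0, 0.4]  # sensor gains for the evoked signal (hypothetical)

trials = [[[mix[ch] * evoked[t] + random.gauss(0, 1.0)
            for t in range(n_samples)] for ch in range(2)]
          for _ in range(n_trials)]

def cov(x, y):
    n, mx, my = len(x), sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

# "Signal" covariance: covariance of the trial average (reproducible activity)
avg = [[sum(tr[ch][t] for tr in trials) / n_trials
        for t in range(n_samples)] for ch in range(2)]
Cs = [[cov(avg[i], avg[j]) for j in range(2)] for i in range(2)]

# Total covariance: pooled over all single trials
full = [[v for tr in trials for v in tr[ch]] for ch in range(2)]
Ct = [[cov(full[i], full[j]) for j in range(2)] for i in range(2)]

# Generalized eigenproblem Cs w = lam * Ct w; det(Cs - lam*Ct) = 0 is a
# quadratic in lam for 2x2, and the largest root gives the best SNR
a = Ct[0][0] * Ct[1][1] - Ct[0][1] * Ct[1][0]
b = -(Cs[0][0] * Ct[1][1] + Cs[1][1] * Ct[0][0]
      - Cs[0][1] * Ct[1][0] - Cs[1][0] * Ct[0][1])
c = Cs[0][0] * Cs[1][1] - Cs[0][1] * Cs[1][0]
lam = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# Eigenvector from the first row of (Cs - lam*Ct) w = 0: the optimal weights
w = [-(Cs[0][1] - lam * Ct[0][1]), Cs[0][0] - lam * Ct[0][0]]
```

With equal noise on both sensors, the recovered weights favor the sensor carrying the stronger evoked contribution, as expected. The overfitting risk flagged in the abstract arises when covariance estimates are noisy relative to the number of channels.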


Lisi M., CNRS Perceptual Systems Laboratory | Cavanagh P., CNRS Perceptual Systems Laboratory | Zorzi M., University of Padua | Zorzi M., IRCCS San Camillo Neurorehabilitation Hospital
Attention, Perception, and Psychophysics | Year: 2015

Recent studies have shown that attentional facilitation lingers at the retinotopic coordinates of a previously attended position after an eye movement. These results are intriguing, because the retinotopic location becomes behaviorally irrelevant once the eyes have moved. Critically, in these studies participants were asked to maintain attention on a blank location of the screen. In the present study, we examined whether the continuing presence of a visual object at the cued location could affect the allocation of attention across eye movements. We used a trans-saccadic cueing paradigm in which the relevant positions could be defined or not by visual objects (simple square outlines). We find an attentional benefit at the spatiotopic location of the cue only when the object (the placeholder) has been continuously present at that location. We conclude that the presence of an object at the attended location is a critical factor for the maintenance of spatial constancy of attention across eye movements, a finding that helps to reconcile previous conflicting results. © 2015, The Psychonomic Society, Inc.


Lisi M., CNRS Perceptual Systems Laboratory | Cavanagh P., CNRS Perceptual Systems Laboratory
Current Biology | Year: 2015

Visual processing in the human brain provides the data both for perception and for guiding motor actions. It seems natural that our actions would be directed toward perceived locations of their targets, but it has been proposed that action and perception rely on different visual information [1-4], and this provocative claim has triggered a long-lasting debate [5-7]. Here, in support of this claim, we report a large, robust dissociation between perception and action. We take advantage of a perceptual illusion in which visual motion signals presented within the boundaries of a peripheral moving object can make the object's apparent trajectory deviate by 45° or more from its physical trajectory [8-10], a shift several times larger than the typical discrimination threshold for motion direction [11]. Despite the large perceptual distortion, we found that saccadic eye movements directed to these moving objects clearly targeted locations along their physical rather than apparent trajectories. We show that the perceived trajectory is based on the accumulation of position error determined by prior sensory history - an accumulation of error that is not found for the action toward the same target. We suggest that visual processing for perception and action might diverge in how past information is combined with new visual input, with action relying only on immediate information to track a target, whereas perception builds on previous estimates to construct a conscious representation. © 2015 Elsevier Ltd. All rights reserved.
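The proposed divergence is easy to caricature. Suppose internal motion adds a small illusory position shift on every frame: a perceptual estimate that builds on its own previous estimates accumulates the shift, while a saccade programmed from immediate information does not. A toy sketch with hypothetical numbers (our illustration, not the authors' fitted model):

```python
shift_per_frame = 0.2  # illusory position shift per frame, deg (hypothetical)
leak = 0.98            # retention of the previous perceptual estimate
frames = 120           # duration of the motion sequence (hypothetical)

perceived = 0.0
for _ in range(frames):
    # Perception: leaky integration of prior estimates plus the new shift
    perceived = leak * perceived + shift_per_frame

# Action: the saccade uses only the immediate position sample
saccade_error = shift_per_frame
```

After sufficient accumulation the perceived position deviates many times more than the instantaneous error, mirroring the large perception-action dissociation reported in the abstract.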


Mamassian P., CNRS Perceptual Systems Laboratory
Current Biology | Year: 2015

Young children use multiple cues to appreciate the three-dimensional structure of the world. A new study reveals that these cues are properly integrated only years later, thus showing that sensory development is protracted well into teenage years. © 2015 Elsevier Ltd. All rights reserved.


Neri P., CNRS Perceptual Systems Laboratory | Neri P., Ecole Normale Superieure de Paris
PLoS Computational Biology | Year: 2015

It is generally acknowledged that biological vision presents nonlinear characteristics, yet linear filtering accounts of visual processing are ubiquitous. The template-matching operation implemented by the linear-nonlinear cascade (linear filter followed by static nonlinearity) is the most widely adopted computational tool in systems neuroscience. This simple model achieves remarkable explanatory power while retaining analytical tractability, potentially extending its reach to a wide range of systems and levels in sensory processing. The extent of its applicability to human behaviour, however, remains unclear. Because sensory stimuli possess multiple attributes (e.g. position, orientation, size), the issue of applicability may be addressed by considering each attribute one at a time in relation to a family of linear-nonlinear models, or by considering all attributes collectively in relation to a specified implementation of the linear-nonlinear cascade. We demonstrate that human visual processing can operate under conditions that are indistinguishable from linear-nonlinear transduction with respect to substantially different stimulus attributes of a uniquely specified target signal with associated behavioural task. However, no specific implementation of a linear-nonlinear cascade is able to account for the entire collection of results across attributes; a satisfactory account at this level requires the introduction of a small gain-control circuit, resulting in a model that no longer belongs to the linear-nonlinear family. Our results inform and constrain efforts at obtaining and interpreting comprehensive characterizations of the human sensory process by demonstrating its inescapably nonlinear nature, even under conditions that have been painstakingly fine-tuned to facilitate template-matching behaviour and to produce results that, at some level of inspection, do conform to linear filtering predictions. They also suggest that compliance with linear transduction may be the targeted outcome of carefully crafted nonlinear circuits, rather than default behaviour exhibited by basic components. © 2015 Peter Neri.
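The linear-nonlinear cascade discussed above is compact enough to state directly: a linear template match (dot product between stimulus and filter) followed by a static nonlinearity. A minimal sketch using a generic logistic nonlinearity and made-up feature vectors (the paper does not commit to these particular choices):

```python
import math

def ln_response(stimulus, template, threshold, slope=4.0):
    # Linear stage: template matching via a dot product
    drive = sum(s * w for s, w in zip(stimulus, template))
    # Static nonlinearity: logistic mapping from drive to response probability
    return 1.0 / (1.0 + math.exp(-slope * (drive - threshold)))

template = [0.0, 1.0, 0.0]  # hypothetical detector tuned to the middle feature
target = [0.1, 0.9, 0.1]    # stimulus containing the target feature
noise = [0.5, 0.1, 0.5]     # stimulus lacking it
p_target = ln_response(target, template, threshold=0.5)
p_noise = ln_response(noise, template, threshold=0.5)
```

The abstract's point is that no single (template, nonlinearity) pair accounts for all attributes at once; a full account requires inserting a small gain-control stage, e.g. dividing `drive` by a local stimulus-energy estimate before the nonlinearity, which takes the model outside the linear-nonlinear family.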


Mamassian P., CNRS Perceptual Systems Laboratory
Current Biology | Year: 2016

When we look straight at an object, it appears the same as when it is seen from the corner of our eye; however, this stability of visual appearance can collapse if inconspicuous object changes are introduced during eye movements. © 2016 Elsevier Ltd.
