Suied C.,Institut de Recherche Biomédicale des Armées | Agus T.R.,Queen's University Belfast | Thorpe S.J.,Université Paul Sabatier | Mesgarani N.,University of California at San Francisco | Pressnitzer D.,CNRS Perceptual Systems Laboratory
Journal of the Acoustical Society of America | Year: 2014

Sounds such as the voice or musical instruments can be recognized on the basis of timbre alone. Here, sound recognition was investigated with severely reduced timbre cues. Short snippets of naturally recorded sounds were extracted from a large corpus. Listeners were asked to report a target category (e.g., sung voices) among other sounds (e.g., musical instruments). All sound categories covered the same pitch range, so the task had to be solved on timbre cues alone. The minimum duration for which performance was above chance was found to be short, on the order of a few milliseconds, with the best performance for voice targets. Performance was independent of pitch and was maintained when stimuli contained less than a full waveform cycle. Recognition was not generally better when the sound snippets were time-aligned with the sound onset compared to when they were extracted with a random starting time. Finally, performance did not depend on feedback or training, suggesting that the cues used by listeners in the artificial gating task were similar to those relevant for longer, more familiar sounds. The results show that timbre cues for sound recognition are available at a variety of time scales, including very short ones. © 2014 Acoustical Society of America.
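The gating manipulation described above, extracting milliseconds-long snippets either aligned to the sound onset or at a random starting time, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the ramp duration, and all parameter values are assumptions.

```python
import numpy as np

def gated_snippet(signal, fs, dur_ms, onset_aligned=False, ramp_ms=1.0, seed=None):
    """Extract a brief gated snippet from a recorded sound.

    The snippet starts either at the beginning of the recording or at a
    random offset, and is shaped with raised-cosine on/off ramps to avoid
    edge clicks. Function name and ramp length are illustrative.
    """
    rng = np.random.default_rng(seed)
    n = max(1, int(round(fs * dur_ms / 1000.0)))
    start = 0 if onset_aligned else int(rng.integers(0, len(signal) - n + 1))
    snippet = np.asarray(signal[start:start + n], dtype=float).copy()
    # Ramp length capped at half the snippet, so very short gates still work.
    n_ramp = min(n // 2, int(round(fs * ramp_ms / 1000.0)))
    if n_ramp > 0:
        ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
        snippet[:n_ramp] *= ramp
        snippet[-n_ramp:] *= ramp[::-1]
    return snippet

# A 4 ms snippet from one second of a 200 Hz tone sampled at 44.1 kHz.
fs = 44100
tone = np.sin(2 * np.pi * 200.0 * np.arange(fs) / fs)
snip = gated_snippet(tone, fs, dur_ms=4, seed=0)
```

At 44.1 kHz a 4 ms gate is only 176 samples, i.e. less than one full cycle of the 200 Hz tone, which is the regime the abstract refers to.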

Spilioti M.,University of Aberdeen | Vargesson N.,University of Aberdeen | Neri P.,CNRS Perceptual Systems Laboratory
Vision Research | Year: 2016

All sensory devices, whether biological or artificial, carry appreciable amounts of intrinsic noise. When these internally generated perturbations are sufficiently large, the behaviour of the system is not solely driven by the external stimulus but also by its own spontaneous variability. Behavioural internal noise can be quantified, provided it is expressed in relative units of the noise source externally applied by the stimulus. In humans performing sensory tasks at near threshold performance, the size of internal noise is roughly equivalent to the size of the response fluctuations induced by the external noise source. It is not known how the human estimate compares with other animals, because behavioural internal noise has never been measured in other species. We have adapted the methodology used with humans to the zebrafish, a small teleost that displays robust visually-guided behaviour. Our measurements demonstrate that, under some conditions, it is possible to obtain viable estimates of internal noise in this vertebrate species; the estimates generally fall within the human range, suggesting that the properties of internal noise may reflect general constraints on stimulus–response coupling that apply across animal systems with substantially different characteristics. © 2016
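The abstract does not spell out the estimation procedure; a standard way in this literature to express internal noise in units of the external noise is the double-pass paradigm, in which the same external noise samples are presented twice and response consistency is measured. A minimal simulation sketch under that assumption (all names and parameter values are mine):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_agreement(sigma_int, n_trials=20000, sigma_ext=1.0):
    """Fraction of identical binary responses across two passes of the
    same ('frozen') external noise, for a model observer whose decision
    variable adds fresh internal noise on every presentation."""
    ext = rng.normal(0.0, sigma_ext, n_trials)            # frozen external noise
    r1 = np.sign(ext + rng.normal(0.0, sigma_int, n_trials))  # pass 1
    r2 = np.sign(ext + rng.normal(0.0, sigma_int, n_trials))  # pass 2
    return float(np.mean(r1 == r2))

# With no internal noise, responses repeat perfectly (agreement = 1);
# as internal noise dominates, agreement falls toward chance (0.5).
# A measured agreement can be inverted (e.g. by grid search over
# sigma_int) to estimate the internal/external noise ratio.
```

For equal internal and external noise the simulated agreement sits near 2/3, which is the sense in which human internal noise is said to be "roughly equivalent" to the externally induced response fluctuations.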

De Cheveigne A.,CNRS Perceptual Systems Laboratory | De Cheveigne A.,Mohammed V University | De Cheveigne A.,University College London | Parra L.C.,City College of New York
NeuroImage | Year: 2014

We review a simple yet versatile approach for the analysis of multichannel data, focusing in particular on brain signals measured with EEG, MEG, ECoG, LFP or optical imaging. Sensors are combined linearly with weights that are chosen to provide optimal signal-to-noise ratio. Signal and noise can be variably defined to match the specific need, e.g. reproducibility over trials, frequency content, or differences between stimulus conditions. We demonstrate how the method can be used to remove power line or cardiac interference, enhance stimulus-evoked or stimulus-induced activity, isolate narrow-band cortical activity, and so on. The approach involves decorrelating both the original and filtered data by joint diagonalization of their covariance matrices. We trace its origins; offer an easy-to-understand explanation; review a range of applications; and chart failure scenarios that might lead to misleading results, in particular due to overfitting. In addition to its flexibility and effectiveness, a major appeal of the method is that it is easy to understand. © 2014 Elsevier Inc.
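As a rough illustration of the approach (not the authors' implementation), weights maximizing the ratio of "signal" to "noise" power can be obtained by jointly diagonalizing the two covariance matrices. Here "signal" is taken to be the trial-averaged (evoked) activity and "noise" the residual, one of the definitions mentioned above; the toy data and all names are assumptions:

```python
import numpy as np

def best_snr_weights(Cs, Cn):
    """Weights that jointly diagonalize Cs and Cn, sorted by the
    generalized eigenvalue (a signal-to-noise power ratio)."""
    d, U = np.linalg.eigh(Cn)
    Wn = U @ np.diag(1.0 / np.sqrt(d)) @ U.T      # symmetric whitener for Cn
    e, V = np.linalg.eigh(Wn @ Cs @ Wn)           # diagonalize whitened Cs
    W = Wn @ V                                    # W.T @ Cn @ W = I, W.T @ Cs @ W diagonal
    return W[:, ::-1], e[::-1]                    # highest-SNR component first

# Toy demo: 3 sensors see a mixed 10 Hz 'evoked' source plus sensor noise.
rng = np.random.default_rng(0)
t = np.arange(1000) / 1000.0
source = np.sin(2 * np.pi * 10 * t)
A = np.array([[1.0], [0.5], [-0.8]])              # mixing vector
trials = np.stack([A @ source[None] + 0.5 * rng.normal(size=(3, t.size))
                   for _ in range(20)])
evoked = trials.mean(axis=0)                      # reproducible ('signal') part
resid = (trials - evoked).transpose(1, 0, 2).reshape(3, -1)   # residual ('noise')
W, snr = best_snr_weights(np.cov(evoked), np.cov(resid))
component = W[:, 0] @ evoked                      # best-SNR linear combination
```

Swapping in a different "signal" covariance (e.g. a narrow frequency band, or the difference between stimulus conditions) retargets the same machinery, which is the flexibility the abstract emphasizes.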

Lisi M.,CNRS Perceptual Systems Laboratory | Cavanagh P.,CNRS Perceptual Systems Laboratory | Zorzi M.,University of Padua | Zorzi M.,IRCCS San Camillo Neurorehabilitation Hospital
Attention, Perception, and Psychophysics | Year: 2015

Recent studies have shown that attentional facilitation lingers at the retinotopic coordinates of a previously attended position after an eye movement. These results are intriguing, because the retinotopic location becomes behaviorally irrelevant once the eyes have moved. Critically, in these studies participants were asked to maintain attention on a blank location of the screen. In the present study, we examined whether the continuing presence of a visual object at the cued location could affect the allocation of attention across eye movements. We used a trans-saccadic cueing paradigm in which the relevant positions could be defined or not by visual objects (simple square outlines). We find an attentional benefit at the spatiotopic location of the cue only when the object (the placeholder) has been continuously present at that location. We conclude that the presence of an object at the attended location is a critical factor for the maintenance of spatial constancy of attention across eye movements, a finding that helps to reconcile previous conflicting results. © 2015, The Psychonomic Society, Inc.

Leger A.C.,Massachusetts Institute of Technology | Ives D.T.,CNRS Perceptual Systems Laboratory | Lorenzi C.,CNRS Perceptual Systems Laboratory
Hearing Research | Year: 2014

The ability to identify syllables in the presence of speech-shaped noise and a single-talker background was measured for 18 normal-hearing (NH) listeners, and for eight hearing-impaired (HI) listeners with near-normal audiometric thresholds for frequencies up to 1.5 kHz and a moderate to severe hearing loss above 2 kHz. The stimulus components were restricted to the low-frequency (≤1.5 kHz) region, where audiometric thresholds were classified clinically as normal or near normal for all listeners. Syllable identification in a speech background was measured as a function of the fundamental-frequency (F0) difference between competing voices (ranging from 1 semitone to ~1 octave). HI listeners had poorer syllable intelligibility than NH listeners in all conditions. Intelligibility decreased by about the same amount for both groups when the F0 difference between competing voices was reduced. The results suggest that the ability to identify speech against noise or an interfering talker was disrupted in frequency regions of near-normal hearing for HI listeners, but that the ability to benefit from the tested F0 differences was not disrupted. This deficit was not predicted by the elevated absolute thresholds for speech in speech, but it was for speech in noise. It may result from supra-threshold auditory deficits associated with aging. © 2014.

Lisi M.,CNRS Perceptual Systems Laboratory | Cavanagh P.,CNRS Perceptual Systems Laboratory
Current Biology | Year: 2015

Visual processing in the human brain provides the data both for perception and for guiding motor actions. It seems natural that our actions would be directed toward perceived locations of their targets, but it has been proposed that action and perception rely on different visual information [1-4], and this provocative claim has triggered a long-lasting debate [5-7]. Here, in support of this claim, we report a large, robust dissociation between perception and action. We take advantage of a perceptual illusion in which visual motion signals presented within the boundaries of a peripheral moving object can make the object's apparent trajectory deviate by 45° or more from its physical trajectory [8-10], a shift several times larger than the typical discrimination threshold for motion direction [11]. Despite the large perceptual distortion, we found that saccadic eye movements directed to these moving objects clearly targeted locations along their physical rather than apparent trajectories. We show that the perceived trajectory is based on the accumulation of position error determined by prior sensory history - an accumulation of error that is not found for the action toward the same target. We suggest that visual processing for perception and action might diverge in how past information is combined with new visual input, with action relying only on immediate information to track a target, whereas perception builds on previous estimates to construct a conscious representation. © 2015 Elsevier Ltd. All rights reserved.
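The accumulation-of-error account can be caricatured with a leaky integrator in which perception adds an accumulating motion-induced position error to the physical trajectory, while action is assumed to read the physical position directly. This is a toy sketch of the idea, not the authors' model; every parameter value below is hypothetical:

```python
import numpy as np

def perceived_positions(physical, internal_shift, leak=0.02):
    """Toy model: perception adds an accumulating (slightly leaky)
    motion-induced position error to the physical trajectory, whereas
    action reads the physical position directly."""
    err, out = 0.0, []
    for p in physical:
        err = (1.0 - leak) * err + internal_shift   # accumulate per-frame error
        out.append(p + err)
    return np.array(out)

physical = np.linspace(0.0, 10.0, 100)        # straight physical trajectory
perceived = perceived_positions(physical, internal_shift=0.05)
drift = perceived - physical                  # grows steadily over the trial
```

In this caricature the perceived trajectory drifts ever farther from the physical one as error accumulates, whereas a saccade programmed from `physical` alone would land on the true path, mirroring the reported dissociation.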

Mamassian P.,CNRS Perceptual Systems Laboratory
Current Biology | Year: 2015

Young children use multiple cues to appreciate the three-dimensional structure of the world. A new study reveals that these cues are properly integrated only years later, thus showing that sensory development is protracted well into teenage years. © 2015 Elsevier Ltd. All rights reserved.

Chambers C.,CNRS Perceptual Systems Laboratory | Chambers C.,Ecole Normale Superieure de Paris | Pressnitzer D.,CNRS Perceptual Systems Laboratory | Pressnitzer D.,Ecole Normale Superieure de Paris
Attention, Perception, and Psychophysics | Year: 2014

Perceptual hysteresis can be defined as the enduring influence of the recent past on current perception. Here, hysteresis was investigated in a basic auditory task: pitch comparisons between successive tones. On each trial, listeners were presented with pairs of tones and asked to report the direction of subjective pitch shift, as either "up" or "down." All tones were complexes known as Shepard tones (Shepard, 1964), which comprise several frequency components at octave multiples of a base frequency. The results showed that perceptual judgments were determined both by stimulus-related factors (the interval ratio between the base frequencies within a pair) and by recent context (the intervals in the two previous trials). When tones were presented in ordered sequences, for which the frequency interval between tones was varied in a progressive manner, strong hysteresis was found. In particular, ambiguous stimuli that led to equal probabilities of "up" and "down" responses within a randomized context were almost fully determined within an ordered context. Moreover, hysteresis did not act on the direction of the reported pitch shift, but rather on the perceptual representation of each tone. Thus, hysteresis could be observed within sequences in which listeners varied between "up" and "down" responses, enabling us to largely rule out confounds related to response bias. The strength of the perceptual hysteresis observed suggests that the ongoing context may have a substantial influence on fundamental aspects of auditory perception, such as how we perceive the changes in pitch between successive sounds. © 2014 Psychonomic Society, Inc.
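A Shepard tone can be synthesized as described: sinusoidal components at octave multiples of a base frequency, weighted by a fixed spectral envelope (here a Gaussian on a log-frequency axis; the envelope parameters are illustrative, not the paper's). Because the envelope is fixed, base frequencies an octave apart yield identical tones, which is what makes the direction of a pitch shift between such tones ambiguous:

```python
import numpy as np

def shepard_tone(base_hz, fs=44100, dur=0.125, fmin=20.0, fmax=16000.0,
                 env_center=960.0, env_octaves=2.0):
    """Shepard tone: sum of sinusoids at octave multiples of base_hz under
    a fixed Gaussian envelope on a log-frequency axis (parameters here are
    illustrative). Peak-normalized to +/-1."""
    t = np.arange(int(fs * dur)) / fs
    tone = np.zeros_like(t)
    f = base_hz
    while f < fmin:            # fold the base frequency up into range
        f *= 2.0
    while f >= 2 * fmin:       # ...or down, so f lies in [fmin, 2*fmin)
        f /= 2.0
    while f <= fmax:           # add components at successive octaves
        w = np.exp(-0.5 * (np.log2(f / env_center) / env_octaves) ** 2)
        tone += w * np.sin(2 * np.pi * f * t)
        f *= 2.0
    return tone / np.max(np.abs(tone))
```

For example, `shepard_tone(440.0)` and `shepard_tone(880.0)` contain exactly the same components, so a pair built from them could be heard as going either "up" or "down", the ambiguity that the ordered-context manipulation exploits.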

Neri P.,CNRS Perceptual Systems Laboratory | Neri P.,Ecole Normale Superieure de Paris
PLoS Computational Biology | Year: 2015

It is generally acknowledged that biological vision presents nonlinear characteristics, yet linear filtering accounts of visual processing are ubiquitous. The template-matching operation implemented by the linear-nonlinear cascade (linear filter followed by static nonlinearity) is the most widely adopted computational tool in systems neuroscience. This simple model achieves remarkable explanatory power while retaining analytical tractability, potentially extending its reach to a wide range of systems and levels in sensory processing. The extent of its applicability to human behaviour, however, remains unclear. Because sensory stimuli possess multiple attributes (e.g. position, orientation, size), the issue of applicability may be asked by considering each attribute one at a time in relation to a family of linear-nonlinear models, or by considering all attributes collectively in relation to a specified implementation of the linear-nonlinear cascade. We demonstrate that human visual processing can operate under conditions that are indistinguishable from linear-nonlinear transduction with respect to substantially different stimulus attributes of a uniquely specified target signal with associated behavioural task. However, no specific implementation of a linear-nonlinear cascade is able to account for the entire collection of results across attributes; a satisfactory account at this level requires the introduction of a small gain-control circuit, resulting in a model that no longer belongs to the linear-nonlinear family. Our results inform and constrain efforts at obtaining and interpreting comprehensive characterizations of the human sensory process by demonstrating its inescapably nonlinear nature, even under conditions that have been painstakingly fine-tuned to facilitate template-matching behaviour and to produce results that, at some level of inspection, do conform to linear filtering predictions. 
They also suggest that compliance with linear transduction may be the targeted outcome of carefully crafted nonlinear circuits, rather than default behaviour exhibited by basic components. © 2015 Peter Neri.
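The linear-nonlinear cascade discussed above is simple to state: project the stimulus onto a template (the linear filter), then pass the result through a static nonlinearity. A minimal sketch, with an arbitrary logistic chosen as the static stage; the specific template, nonlinearity, and parameters are assumptions for illustration:

```python
import numpy as np

def ln_response(stimulus, template, threshold=0.0, slope=1.0):
    """Linear-nonlinear cascade: inner product with a template (linear
    stage) followed by a static logistic nonlinearity."""
    drive = float(np.dot(template.ravel(), stimulus.ravel()))   # linear filter
    return 1.0 / (1.0 + np.exp(-slope * (drive - threshold)))   # static NL

# Template-matching demo: a vertical-bar template responds more strongly
# to a matching vertical bar than to an orthogonal horizontal bar.
vert = np.zeros((5, 5)); vert[:, 2] = 1.0
horiz = vert.T
p_match = ln_response(vert, template=vert)      # drive = 5
p_nonmatch = ln_response(horiz, template=vert)  # drive = 1 (single overlap)
```

The paper's point is that no single choice of `template` and static nonlinearity in this family accounts for behaviour across all stimulus attributes at once; doing so requires adding a small gain-control stage, which takes the model outside this family.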

Mamassian P.,CNRS Perceptual Systems Laboratory
Current Biology | Year: 2016

When we look straight at an object, it appears the same as when it is seen from the corner of our eye; however, this stability of visual appearance can collapse if inconspicuous object changes are introduced during eye movements. © 2016 Elsevier Ltd.
