CNRS Laboratory of Physiology of Perception

Paris, France


Waszak F.,University of Paris Descartes | Waszak F.,CNRS Laboratory of Physiology of Perception | Cardoso-Leite P.,Max Planck Institute for Human Cognitive and Brain Sciences | Hughes G.,University of Paris Descartes | Hughes G.,CNRS Laboratory of Physiology of Perception
Neuroscience and Biobehavioral Reviews | Year: 2012

Voluntary actions are thought to be selected with respect to their intended goal. Converging data suggest that medial frontal cortex plays a crucial role in linking actions to their predicted effects. Recent neuroimaging data also suggest that during action selection, the brain pre-activates the representation of the predicted action effect. We review evidence of action effect prediction, both in terms of its neurophysiological basis and its functional consequences. By assuming that action preparation includes activation of the predicted sensory consequences of the action, we provide a mechanism to understand sensory attenuation and intentional binding. In this account, sensory attenuation results from more difficult discrimination between the observed action effect and the pre-activation of the predicted effect, as compared to when no (or an incorrect) prediction is present. Similarly, a predicted action effect should also reach the threshold of awareness faster (intentional binding) if its perceptual representation is pre-activated. By comparing this potential mechanism to mental imagery and repetition suppression, we propose a possible neural basis for the processing of predicted action effects. © 2011 Elsevier Ltd.


Störmer V.S.,Harvard University | Alvarez G.A.,Harvard University | Cavanagh P.,Harvard University | Cavanagh P.,CNRS Laboratory of Physiology of Perception
Journal of Neuroscience | Year: 2014

It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured steady-state visual evoked potentials, oscillatory responses of the visual cortex elicited by flickering stimuli, to moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that at the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. © 2014 the authors.
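The frequency-tagging logic behind this study lends itself to a compact illustration. The sketch below is our own illustrative example in plain NumPy, not the authors' analysis pipeline; the sampling rate, tagging frequencies, and simulated signal are all assumptions. It reads out spectral amplitude at known flicker frequencies from a single EEG channel, which is the core of an SSVEP analysis.

import numpy as np

def ssvep_amplitudes(eeg, fs, freqs):
    # Return the amplitude-spectrum value at each tagged flicker frequency.
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg - eeg.mean())) / n
    fft_freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Pick the FFT bin closest to each tagging frequency.
    return {f: spectrum[np.argmin(np.abs(fft_freqs - f))] for f in freqs}

# Hypothetical example: a 4 s epoch at 500 Hz, target tagged at 7.5 Hz,
# distractor at 12 Hz; attention should boost the target amplitude.
fs = 500
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = (1.2 * np.sin(2 * np.pi * 7.5 * t)     # stronger (attended) target response
       + 0.8 * np.sin(2 * np.pi * 12.0 * t)  # weaker distractor response
       + rng.standard_normal(t.size))        # broadband noise
print(ssvep_amplitudes(eeg, fs, [7.5, 12.0]))

Comparing such per-frequency amplitudes between attention conditions is, in essence, the target-versus-distractor comparison the abstract describes.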


Hughes G.,CNRS Laboratory of Physiology of Perception | Waszak F.,CNRS Laboratory of Physiology of Perception
NeuroImage | Year: 2011

Sensory attenuation of voluntary action effects has been widely reported in both somatosensory and auditory domains. However, relatively little research has focused on physiological measures of sensory attenuation of visual action effects. One previous study found, perhaps surprisingly, that both auditory and visual sensory attenuation were manifested as decreased ERP amplitude over the vertex. The present study aimed to extend these findings using a novel paradigm in which voluntary actions were associated either with a visual action effect or with no effect. Crucially, this allowed us to explore both sensory attenuation (by comparing ERPs to action-triggered versus externally triggered stimuli) and action effect prediction (by comparing actions that triggered a stimulus with actions that did not). With regard to sensory attenuation, we found that attenuation of cortical responses to visual action effects was manifested in a reduced activation of a frontoparietal network, from 150 ms after stimulus onset. Differences between actions that produced an effect and those that did not were observed in lateralized motor potentials and may reflect the cortical correlates of action effect prediction. We also observed a re-activation of lateralized motor activity following onset of the action effect, suggesting a common representation of action effects in visual and motor cortices. Taken together, these findings help to elucidate the cortical mechanisms of voluntary actions and their sensory consequences, and inform how our interaction with the external world is processed and controlled. © 2011 Elsevier Inc.


Collins T.,CNRS Laboratory of Physiology of Perception
Attention, Perception, and Psychophysics | Year: 2016

Each eye movement introduces changes in the retinal location of objects. How a stable spatiotopic representation emerges from such variable input is an important question for the study of vision. Researchers have classically probed human observers' performance in a task requiring a location judgment about an object presented at different locations across a saccade. Correct performance on this task requires realigning or remapping retinal locations to compensate for the saccade. A recent study showed that performance improved with longer presaccadic viewing time, suggesting that accurate spatiotopic representations take time to build up. The first goal of the study was to replicate that finding. Two experiments, one an exact replication and the second a modified version, failed to replicate improved performance with longer presaccadic viewing time. The second goal of this study was to examine the role of attention in constructing spatiotopic representations, as theoretical and neurophysiological accounts of remapping have proposed that only attended targets are remapped. A third experiment thus manipulated attention with a spatial cueing paradigm and compared transsaccadic location performance of attended versus unattended targets. No difference in spatiotopic performance was found between attended and unattended targets. Although only negative results are reported, they might nevertheless suggest that spatiotopic representations are relatively stable over time. © 2016, The Psychonomic Society, Inc.


Højen A.,University of Southern Denmark | Nazzi T.,University of Paris Descartes | Nazzi T.,CNRS Laboratory of Physiology of Perception
Developmental Science | Year: 2016

The present study explored whether the phonological bias favoring consonants found in French-learning infants and children when learning new words (Havy & Nazzi, 2009; Nazzi, 2005) is language-general, as proposed by Nespor, Peña and Mehler (2003), or varies across languages, perhaps as a function of the phonological or lexical properties of the language in acquisition. To do so, we used the interactive word-learning task set up by Havy and Nazzi (2009), teaching Danish-learning 20-month-olds pairs of phonetically similar words that contrasted either on one of their consonants or one of their vowels, by either one or two phonological features. Danish was chosen because it has more vowels than consonants, and is characterized by extensive consonant lenition. Both phenomena could disfavor a consonant bias. Evidence of word-learning was found only for vocalic information, irrespective of whether one or two phonological features were changed. The implication of these findings is that the phonological biases found in early lexical processing are not language-general but develop during language acquisition, depending on the phonological or lexical properties of the native language. © 2016 John Wiley & Sons Ltd.


Gervain J.,CNRS Laboratory of Physiology of Perception | Gervain J.,University of Paris Descartes | Berent I.,Northeastern University | Werker J.F.,University of British Columbia
Journal of Cognitive Neuroscience | Year: 2012

Breaking the linguistic code requires the extraction of at least two types of information from the speech signal: the relations between linguistic units and their sequential position. Furthermore, these different types of information need to be integrated into a coherent representation of language structure. The brain networks responsible for these abilities are well known in adults, but not in young infants. Our results show that the neural architecture underlying these abilities is operational at birth. In three optical imaging studies, we found that the newborn brain detects identity relations, as evidenced by enhanced activation in the bilateral superior temporal and left inferior frontal regions. More importantly, the newborn brain can also determine whether such identity relations hold for the initial or final positions of speech sequences, as indicated by increased activity in the inferior frontal regions, possibly Broca's area. This implies that the neural foundations of language acquisition are in place from birth. © 2012 Massachusetts Institute of Technology.


Henson R.N.,MRC Cognition and Brain Sciences Unit | Eckstein D.,University of Bern | Waszak F.,University of Paris Descartes | Waszak F.,CNRS Laboratory of Physiology of Perception | And 2 more authors.
Trends in Cognitive Sciences | Year: 2014

People can rapidly form arbitrary associations between stimuli and the responses they make in the presence of those stimuli. Such stimulus-response (S-R) bindings, when retrieved, affect the way that people respond to the same, or related, stimuli. Only recently, however, has the flexibility and ubiquity of these S-R bindings been appreciated, particularly in the context of priming paradigms. This is important for the many cognitive theories that appeal to evidence from priming. It is also important for the control of action generally. An S-R binding is more than a gradually learned association between a specific stimulus and a specific response; instead, it captures the full, context-dependent behavioral potential of a stimulus. © 2014 The Authors.


Chopin A.,University of Paris Descartes | Chopin A.,CNRS Laboratory of Physiology of Perception | Mamassian P.,University of Paris Descartes | Mamassian P.,CNRS Laboratory of Physiology of Perception
Current Biology | Year: 2012

What humans perceive depends in part on what they have previously experienced [1, 2]. After repeated exposure to one stimulus, adaptation takes place in the form of a negative correlation between the current percept and the last displayed stimuli [3-10]. Previous work has shown that this negative dependence can extend to a few minutes in the past [5, 11, 12], but the precise extent and nature of the dependence in vision is still unknown. In two experiments based on orientation judgments, we reveal a positive dependence of a visual percept on stimuli presented remotely in the past, unexpectedly and in contrast to what is known for the recent past. Previous theories of adaptation have postulated that the visual system attempts to calibrate itself relative to an ideal norm [13, 14] or to the recent past [5, 7, 10, 15, 16]. We propose instead that the remote past is used to estimate the world's statistics and that this estimate becomes the reference. According to this new framework, adaptation is predictive: the most likely forthcoming percept is the one that helps the statistics of the most recent percepts match those of the remote past. © 2012 Elsevier Ltd.
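The proposed framework can be made concrete with a toy model. The sketch below is our illustrative reading of the idea, not the authors' model; the gain k and the window sizes are arbitrary assumptions. The reference is the average of the remote past, and the percept is shifted so that recent statistics drift back toward that reference, which yields a negative dependence on recent stimuli and a positive dependence on remote ones.

import numpy as np

def percept(stimulus, recent, remote, k=0.5):
    # Shift the percept so the statistics of recent percepts move toward
    # the remote-past reference: repulsion from the recent past,
    # attraction toward the remote past.
    return stimulus - k * (np.mean(recent) - np.mean(remote))

test = 0.0  # vertical test orientation, in degrees
# Recent history tilted +10 deg, remote history vertical: classic
# negative aftereffect (the test looks tilted about -5 deg).
print(percept(test, recent=[10.0] * 5, remote=[0.0] * 50))
# Remote history tilted +10 deg, recent history vertical: positive
# dependence on the remote past (the test looks tilted about +5 deg).
print(percept(test, recent=[0.0] * 5, remote=[10.0] * 50))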


Yeung H.H.,CNRS Laboratory of Physiology of Perception | Nazzi T.,CNRS Laboratory of Physiology of Perception
Cognition | Year: 2014

Different kinds of speech sounds are used to signify possible word forms in every language. For example, lexical stress is used in Spanish (/'be.be/, 'he/she drinks' versus /be.'be/, 'baby'), but not in French (/'be.be/ and /be.'be/ both mean 'baby'). Infants learn many such native language phonetic contrasts in their first year of life, likely using a number of cues from parental speech input. One such cue could be parents' object labeling, which can explicitly highlight relevant contrasts. Here we ask whether phonetic learning from object labeling is abstract, that is, whether learning can generalize to new phonetic contexts. We investigate this issue in the prosodic domain, as the abstraction of prosodic cues (like lexical stress) has been shown to be particularly difficult. One group of 10-month-old French-learners was given consistent word labels that contrasted on lexical stress (e.g., Object A was labeled /'ma.bu/, and Object B was labeled /ma.'bu/). Another group of 10-month-olds was given inconsistent word labels (i.e., mixed pairings), and stress discrimination in both groups was measured in a test phase with words made up of new syllables. Infants trained with consistently contrastive labels showed an earlier effect of discrimination compared to infants trained with inconsistent labels. Results indicate that phonetic learning from object labeling can indeed generalize, and suggest one way infants may learn the sound properties of their native language(s). © 2014 Elsevier B.V.


Agus T.R.,CNRS Laboratory of Physiology of Perception
The Journal of the Acoustical Society of America | Year: 2012

Human listeners seem to have an impressive ability to recognize a wide variety of natural sounds. However, there is surprisingly little quantitative evidence to characterize this fundamental ability. Here the speed and accuracy of musical-sound recognition were measured psychophysically with a rich but acoustically balanced stimulus set. The set comprised recordings of notes from musical instruments and sung vowels. In a first experiment, reaction times were collected for three target categories: voice, percussion, and strings. In a go/no-go task, listeners reacted as quickly as possible to members of a target category while withholding responses to distractors (a diverse set of musical instruments). Results showed near-perfect accuracy and fast reaction times, particularly for voices. In a second experiment, voices were recognized among strings and vice versa. Again, reaction times to voices were faster. In a third experiment, auditory chimeras were created to retain only spectral or temporal features of the voice. Chimeras were recognized accurately, but not as quickly as natural voices. Altogether, the data suggest rapid and accurate neural mechanisms for musical-sound recognition based on selectivity to complex spectro-temporal signatures of sound sources.
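For readers curious how such chimeras can be built, the sketch below shows one common recipe: phase scrambling for a spectrum-only chimera, and a Hilbert envelope imposed on noise for a temporal-only chimera. This is an assumption-laden illustration, not necessarily the construction used in the paper, and the stand-in "voice" signal is synthetic.

import numpy as np
from scipy.signal import hilbert

def spectral_chimera(x, rng):
    # Keep the magnitude spectrum of x, scramble its phases: spectral
    # features survive, temporal structure is destroyed.
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spec.shape)
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

def temporal_chimera(x, rng):
    # Impose the temporal (Hilbert) envelope of x on white noise:
    # temporal features survive, spectral detail is destroyed.
    envelope = np.abs(hilbert(x))
    noise = rng.standard_normal(len(x))
    return envelope * noise / np.max(np.abs(noise))

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(0.0, 0.5, 1.0 / fs)
voice_like = np.sin(2 * np.pi * 220 * t) * np.exp(-3.0 * t)  # synthetic stand-in for a sung vowel
spec_only = spectral_chimera(voice_like, rng)  # retains only spectral features
temp_only = temporal_chimera(voice_like, rng)  # retains only temporal features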
