Music and Sound Research BRAMS

Montréal, Canada

Zendel B.R.,Music and Sound Research BRAMS | Zendel B.R.,Institut universitaire de gériatrie de Montréal | Zendel B.R.,Center for Research on Brain | Zendel B.R.,University of Montréal | And 7 more authors.
Journal of Cognitive Neuroscience | Year: 2015

Musicians have enhanced auditory processing abilities. In some studies, these abilities are paralleled by an improved understanding of speech in noisy environments, partially due to more robust encoding of speech signals in noise at the level of the brainstem. Little is known about the impact of musicianship on attention-dependent cortical activity related to lexical access during a speech-in-noise task. To address this issue, we presented musicians and nonmusicians with single words mixed with three levels of background noise, across two conditions, while monitoring electrical brain activity. In the active condition, listeners repeated the words aloud, and in the passive condition, they ignored the words and watched a silent film. When background noise was most intense, musicians repeated more words correctly compared with nonmusicians. Auditory evoked responses were attenuated and delayed with the addition of background noise. In musicians, P1 amplitude was marginally enhanced during active listening and was related to task performance in the most difficult listening condition. By comparing ERPs from the active and passive conditions, we isolated an N400 related to lexical access. The amplitude of the N400 was not influenced by the level of background noise in musicians, whereas N400 amplitude increased with the level of background noise in nonmusicians. In nonmusicians, the increase in N400 amplitude was related to a reduction in task performance. In musicians only, there was a rightward shift of the sources contributing to the N400 as the level of background noise increased. This pattern of results supports the hypothesis that encoding of speech in noise is more robust in musicians and suggests that this facilitates lexical access. Moreover, the shift in sources suggests that musicians, to a greater extent than nonmusicians, may increasingly rely on acoustic cues to understand speech in noise. © 2015 Massachusetts Institute of Technology.

Lehmann A.,Music and Sound Research BRAMS | Lehmann A.,University of Montréal | Lehmann A.,McGill University | Skoe E.,University of Connecticut | And 5 more authors.
European Journal of Neuroscience | Year: 2015

Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

Gosselin N.,Music and Sound Research BRAMS | Gosselin N.,University of Montréal | Gosselin N.,University of Quebec at Trois-Rivières | Paquette S.,Music and Sound Research BRAMS | And 3 more authors.
Cortex | Year: 2015

The emotional experience elicited by music is largely dependent on structural characteristics such as pitch, rhythm, and dynamics. We examine here to what extent amusic adults, who have experienced pitch perception difficulties all their lives, still maintain some ability to perceive emotions from music. Amusic and control participants judged the emotions expressed by unfamiliar musical clips intended to convey happiness, sadness, fear and peacefulness (Experiment 1A). Surprisingly, most amusic individuals showed normal recognition of the four emotions tested here. This preserved ability was not due to some peculiarities of the music, since the amusic individuals showed a typical deficit in perceiving pitch violations intentionally inserted in the same clips (Experiment 1B). In Experiment 2, we tested the use of two major structural determinants of musical emotions: tempo and mode. Neutralization of tempo had the same effect on both amusics' and controls' emotional ratings. In contrast, amusics did not respond to a change of mode as markedly as controls did. Moreover, unlike the control participants, amusics' judgments were not influenced by subtle differences in pitch, such as the number of semitones changed by the mode manipulation. Instead, amusics showed normal sensitivity to fluctuations in energy, to pulse clarity, and to timbre differences, such as roughness. Amusics even showed sensitivity to key clarity and to large mean pitch differences in distinguishing happy from sad music. Thus, the pitch perception deficit experienced by amusic adults had only mild consequences on emotional judgments. In sum, emotional responses to music may be possible in this condition. © 2015 Elsevier Ltd.

Nozaradan S.,Catholic University of Louvain | Nozaradan S.,Music and Sound Research BRAMS
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2014

The ability to perceive a regular beat in music and synchronize to this beat is a widespread human skill. Fundamental to musical behaviour, beat and meter refer to the perception of periodicities while listening to musical rhythms and often involve spontaneous entrainment to move on these periodicities. Here, we present a novel experimental approach inspired by the frequency-tagging approach to understand the perception and production of rhythmic inputs. This approach is illustrated here by recording the human electroencephalogram responses at beat and meter frequencies elicited in various contexts: mental imagery of meter, spontaneous induction of a beat from rhythmic patterns, multisensory integration and sensorimotor synchronization. Collectively, our observations support the view that entrainment and resonance phenomena subtend the processing of musical rhythms in the human brain. More generally, they highlight the potential of this approach to help us understand the link between the phenomenology of musical beat and meter and the bias towards periodicities arising under certain circumstances in the nervous system. Entrainment to music provides a highly valuable framework to explore general entrainment mechanisms as embodied in the human brain. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
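The core readout in the frequency-tagging approach described above is the amplitude of the EEG spectrum at beat- and meter-related frequencies. A minimal sketch of such a readout is shown below; this is an illustration only, not the authors' analysis pipeline, and the function name, parameters, and amplitude scaling are assumptions.

```python
import numpy as np

def tagged_amplitudes(eeg, fs, freqs):
    """Single-sided amplitude spectrum of one EEG epoch, read out at
    the tagged frequencies of interest (e.g., beat and meter rates)."""
    n = len(eeg)
    # Scale |rfft| so a sinusoid of amplitude A reads out as ~A at its bin.
    spectrum = np.abs(np.fft.rfft(eeg)) / n * 2
    bins = np.fft.rfftfreq(n, d=1 / fs)
    # Pick the FFT bin nearest each frequency of interest.
    return {f: spectrum[np.argmin(np.abs(bins - f))] for f in freqs}
```

For example, a 10 s epoch sampled at 250 Hz containing a 2.4 Hz "beat" component and a weaker 1.2 Hz "meter" component would yield a larger readout at 2.4 Hz than at 1.2 Hz, mirroring how responses at tagged frequencies are compared across conditions.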

Voss P.,Montreal Neurological Institute | Voss P.,Music and Sound Research BRAMS | Tabry V.,Concordia University at Montréal | Zatorre R.J.,Montreal Neurological Institute | Zatorre R.J.,Music and Sound Research BRAMS
Journal of Neuroscience | Year: 2015

There is substantial evidence that sensory deprivation leads to important cross-modal brain reorganization that is paralleled by enhanced perceptual abilities. However, it remains unclear how widespread these enhancements are, and whether they are intercorrelated or arise at the expense of other perceptual abilities. One specific area where such a trade-off might arise is that of spatial hearing, where blind individuals have been shown to possess superior monaural localization abilities in the horizontal plane, but inferior localization abilities in the vertical plane. While both of these tasks likely involve the use of monaural cues due to the absence of any relevant binaural signal, there is currently no proper explanation for this discrepancy, nor has any study investigated both sets of abilities in the same sample of blind individuals. Here, we assess whether the enhancements observed in the horizontal plane are related to the deficits observed in the vertical plane by testing sound localization in both planes in groups of blind and sighted persons. Our results show that the blind individuals who displayed the highest accuracy at localizing sounds monaurally in the horizontal plane are also the ones who exhibited the greatest deficit when localizing in the vertical plane. These findings appear to argue against the idea of generalized perceptual enhancements in the early blind, and instead suggest the possibility of a trade-off in the localization proficiency between the two auditory spatial planes, such that learning to use monaural cues for the horizontal plane comes at the expense of using those cues to localize in the vertical plane. © 2015 the authors.

Hutchins S.,Music and Sound Research BRAMS | Hutchins S.,University of Montréal | Zarate J.M.,Music and Sound Research BRAMS | Zarate J.M.,Montreal Neurological Institute | And 4 more authors.
Journal of the Acoustical Society of America | Year: 2010

Vocal pitch matching is a foundational skill for singing and is an interesting place to study the relationship between pitch perception and production. To better understand this relationship, we assessed pitch-matching abilities in congenital amusics, who have documented disabilities in pitch perception, and in matched controls under normal, masked, and guided feedback conditions. Their vocal productions were analyzed for fundamental frequency and showed that amusics were significantly less accurate at pitch matching than the controls. However, five of the six amusics showed a significant correlation between their produced pitches and the target pitch. Feedback condition had no effect on pitch-matching accuracy. These results show impaired vocal pitch-matching abilities in amusics but also show a relationship between perceived and produced pitches. © 2010 Acoustical Society of America.

Dalla Bella S.,Montpellier University | Dalla Bella S.,Institut Universitaire de France | Dalla Bella S.,University of Warsaw | Dalla Bella S.,Music and Sound Research BRAMS | Sowinski J.,University of Warsaw
Journal of Visualized Experiments | Year: 2015

A set of behavioral tasks for assessing perceptual and sensorimotor timing abilities in the general population (i.e., non-musicians) is presented here with the goal of uncovering rhythm disorders, such as beat deafness. Beat deafness is characterized by poor performance in perceiving durations in auditory rhythmic patterns or poor synchronization of movement with auditory rhythms (e.g., with musical beats). These tasks include the synchronization of finger tapping to the beat of simple and complex auditory stimuli and the detection of rhythmic irregularities (anisochrony detection task) embedded in the same stimuli. These tests, which are easy to administer, include an assessment of both perceptual and sensorimotor timing abilities under different conditions (e.g., beat rates and types of auditory material) and are based on the same auditory stimuli, ranging from a simple metronome to a complex musical excerpt. The analysis of synchronized tapping data is performed with circular statistics, which provide reliable measures of synchronization accuracy (e.g., the difference between the timing of the taps and the timing of the pacing stimuli) and consistency. Circular statistics on tapping data are particularly well-suited for detecting individual differences in the general population. Synchronized tapping and anisochrony detection are sensitive measures for identifying profiles of rhythm disorders and have been used with success to uncover cases of poor synchronization with spared perceptual timing. This systematic assessment of perceptual and sensorimotor timing can be extended to populations of patients with brain damage, neurodegenerative diseases (e.g., Parkinson’s disease), and developmental disorders (e.g., Attention Deficit Hyperactivity Disorder). © 2015 Journal of Visualized Experiments.
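The circular-statistics measures mentioned above can be sketched as follows: each tap's asynchrony relative to the pacing stimulus is mapped to an angle on the unit circle, the angle of the mean resultant vector indexes synchronization accuracy, and its length indexes consistency. This is a minimal illustration assuming tap and stimulus onsets in seconds; the function and variable names are hypothetical, not taken from the published protocol.

```python
import math

def circular_stats(tap_times, stimulus_times, period):
    """Circular mean (accuracy) and resultant length (consistency)
    of tap asynchronies, one inter-onset interval = 2*pi radians."""
    angles = []
    for tap, stim in zip(tap_times, stimulus_times):
        asynchrony = tap - stim  # negative = tap precedes the beat
        angles.append(2 * math.pi * (asynchrony / period))
    # Mean resultant vector of the tap angles.
    x = sum(math.cos(a) for a in angles) / len(angles)
    y = sum(math.sin(a) for a in angles) / len(angles)
    mean_angle = math.atan2(y, x)  # circular mean, in radians
    consistency = math.hypot(x, y)  # R in [0, 1]; 1 = perfectly consistent
    mean_asynchrony = mean_angle / (2 * math.pi) * period  # back to seconds
    return mean_asynchrony, consistency
```

A participant who always taps 50 ms after the beat would show a mean asynchrony of +0.05 s with consistency near 1, whereas a poor synchronizer's angles scatter around the circle and drive the resultant length toward 0, which is why this measure separates individual profiles well.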

Peretz I.,Music and Sound Research BRAMS | Peretz I.,University of Montréal | Vuvan D.,Music and Sound Research BRAMS | Vuvan D.,University of Montréal | And 4 more authors.
Philosophical Transactions of the Royal Society B: Biological Sciences | Year: 2015

Neural overlap in processing music and speech, as measured by the coactivation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

Voss P.,Montreal Neurological Institute | Voss P.,University of Montréal | Voss P.,Music and Sound Research BRAMS | Zatorre R.J.,Montreal Neurological Institute | And 2 more authors.
Current Biology | Year: 2012

The scientific literature has grown rich in research illustrating the remarkable ability of the brain to reorganize itself following sensory loss. In particular, visually deafferented regions within the occipital cortex of early blind individuals have been repeatedly shown to be functionally recruited to carry out a wide variety of nonvisual tasks. While the novelty of such a finding might be wearing off, more recent research has begun to examine whether this crossmodal takeover of the occipital cortex in blindness follows some sort of organizational principle. Here we first review the most recent evidence from neuroimaging studies that illustrate how the pre-existing functional specialization of cortical sub-regions appears to be preserved following sensory deprivation. We discuss and compare work on visual and auditory deprivation, as well as research on individuals with intact sensory systems. We suggest avenues for future exploration of these issues, such as identifying the neuroanatomical markers of crossmodal plasticity and elucidating the behavioral relevance of observed changes. © 2012 Elsevier Ltd.

Lehmann A.,Music and Sound Research BRAMS | Lehmann A.,University of Montréal | Lehmann A.,Center for Research on Brain | Schönwiesner M.,Music and Sound Research BRAMS | And 3 more authors.
PLoS ONE | Year: 2014

Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore, they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues. © 2014 Lehmann et al.
