Nottingham, United Kingdom

Mill R.W., MRC Institute of Hearing Research | Bohm T.M., Institute of Cognitive Neuroscience and Psychology | Bohm T.M., Budapest University of Technology and Economics | Bendixen A., University of Leipzig | And 3 more authors.
PLoS Computational Biology | Year: 2013

Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives, a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success and their conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming. © 2013 Mill et al.
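The core mechanism described in this abstract, pattern discovery followed by competition between incompatible representations, can be sketched in a few lines of code. The sketch below is an illustrative interpretation only, not the published model: the PatternRepresentation class, the gain, decay, inhibition and noise parameters, and the period/phase encoding of a regularity are all assumptions introduced here, and the published model's perceptual switching relies on richer dynamics than the simple noise term used in this toy.

```python
import random

class PatternRepresentation:
    """One candidate regularity: predicts events at a fixed period and phase."""

    def __init__(self, name, period, phase, incompatible_with=()):
        self.name = name
        self.period = period                 # predict every `period`-th event
        self.phase = phase                   # offset of predicted events within the cycle
        self.strength = 1.0                  # running measure of predictive success
        self.incompatible_with = set(incompatible_with)

    def predicts(self, event_index):
        return event_index % self.period == self.phase


def run_competition(n_events, reps, gain=0.25, decay=0.05, inhibition=0.02, noise=0.1):
    """For each incoming event: reward representations whose prediction succeeded,
    decay the rest, apply mutual inhibition between incompatible representations,
    and record which representation currently dominates (the perceived organisation)."""
    dominant = []
    for i in range(n_events):
        for r in reps:
            r.strength += gain if r.predicts(i) else -decay
            # crude stand-in for the stochasticity that allows perceptual switching
            r.strength += random.gauss(0.0, noise)
        for r in reps:
            rivals = [s for s in reps if s.name in r.incompatible_with]
            r.strength -= inhibition * sum(max(s.strength, 0.0) for s in rivals)
            r.strength = max(r.strength, 0.0)
        dominant.append(max(reps, key=lambda r: r.strength).name)
    return dominant


# Toy example: "integrated" predicts every event, "segregated" only every other one;
# the two organisations are mutually exclusive and therefore inhibit each other.
reps = [
    PatternRepresentation("integrated", period=1, phase=0, incompatible_with={"segregated"}),
    PatternRepresentation("segregated", period=2, phase=0, incompatible_with={"integrated"}),
]
print(run_competition(40, reps))
```

Running the toy example shows how reinforcement of successful predictions plus mutual inhibition lets one organisation come to dominate the reported percept; it is only meant to make the "two-stage" idea concrete, not to reproduce the simulation results reported in the paper.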


Moore D.R., MRC Institute of Hearing Research
Developmental Psychobiology | Year: 2012

Presentations in this symposium are considered in the context of mechanisms of sensory plasticity, particularly in the auditory system. The auditory nervous system has two discrete end organs that are separately vulnerable to clinical and experimental injury, and brainstem processing mechanisms that are highly specialized for temporal, spectral, and spatial coding. These include giant axo-somatic synapses and neurons with spatially segregated bipolar dendrites, each innervated exclusively from one ear. This architecture allows exquisite control of afferent-target interactions, including known excitatory and inhibitory couplings, and consequently enhanced interpretation of data from other brain systems. For example, deafening can silence the auditory nerve, but has surprisingly little impact on normal brain function, as evidenced by the success of cochlear implants. The observed amplification from one brain region to another of degeneration following stroke or discrete brain lesions may thus be due to secondary rather than primary afferent consequences of the lesion. © 2012 Wiley Periodicals, Inc.


Stewart H.J., MRC Institute of Hearing Research | McIntosh R.D., University of Edinburgh | Williams J.H.G., University of Aberdeen
Autism Research | Year: 2013

Imitation is a potentially crucial aspect of social cognitive development. Although deficits in imitation ability have been widely demonstrated in autism spectrum disorder (ASD), the specificity and significance of the findings are unclear, due largely to methodological limitations. We developed a novel assessment of imitation ability, using objective movement parameters (path length and action duration) derived from a touch-sensitive tablet laptop during drawing actions on an identical tablet. By direct comparison of the kinematics of a model's actions with those of the participant who observed them, measures of imitation accuracy were obtained. By replaying the end-point of the movement as a spot on the screen, imitation accuracy was compared against a "ghost control" condition, with no human actor but only the end-point of the movement seen [object movement reenactment (OMR)]. Hence, demands of the control task were closely matched to the experimental task with respect to motor, memory, and attentional abilities. Adolescents with ASD showed poorer accuracy for copying object size and action duration on both the imitation and OMR tasks, but were significantly more impaired for imitation of object size. Our results provide evidence that some of the imitation deficit in ASD is specific to a self-other mapping problem, and cannot be explained by general factors such as memory, spatial reasoning, motor control, or attention, nor by the social demands of the testing situation. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.
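A minimal sketch of how the two kinematic parameters named above, path length and action duration, might be computed from timestamped touch samples and compared between the model's action and the participant's copy. The (time, x, y) sample format, the relative-error scoring, and the function names are assumptions made for illustration; they are not the study's actual analysis pipeline.

```python
import math

def path_length(samples):
    """Total distance traversed over a sequence of (time_s, x_mm, y_mm) touch samples."""
    return sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(samples, samples[1:])
    )

def action_duration(samples):
    """Elapsed time from the first to the last touch sample."""
    return samples[-1][0] - samples[0][0]

def imitation_error(model_samples, copy_samples):
    """Relative mismatch between the model's action and the participant's copy
    on the two kinematic parameters; smaller values mean more accurate imitation."""
    def rel_err(target, produced):
        return abs(produced - target) / target if target else 0.0
    return {
        "path_length_error": rel_err(path_length(model_samples), path_length(copy_samples)),
        "duration_error": rel_err(action_duration(model_samples), action_duration(copy_samples)),
    }

# Hypothetical strokes recorded from the tablet, as (time_s, x_mm, y_mm) tuples.
model_stroke = [(0.0, 0.0, 0.0), (0.5, 30.0, 0.0), (1.0, 30.0, 30.0)]
copy_stroke  = [(0.0, 0.0, 0.0), (0.7, 25.0, 0.0), (1.5, 25.0, 25.0)]
print(imitation_error(model_stroke, copy_stroke))
```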


Banai K., Haifa University | Amitay S., MRC Institute of Hearing Research
Vision Research | Year: 2012

Stimulus uncertainty, produced by variations in a target stimulus to be detected or discriminated, impedes perceptual learning under some, but not all, experimental conditions. To account for these discrepancies, it has been proposed that uncertainty is detrimental to learning when the interleaved stimuli or tasks are similar to each other but not when they are sufficiently distinct, or when it obstructs the downstream search required to gain access to fine-grained sensory information, as suggested by the Reverse Hierarchy Theory (RHT). The focus of the current review is on the effects of uncertainty on the perceptual learning of speech and non-speech auditory signals. Taken together, the findings from the auditory modality suggest that, in addition to the accounts already described, uncertainty may contribute to learning when categorization of stimuli into phonological or acoustic categories is involved. Therefore, it appears that the differences reported between the learning of non-speech and speech-related parameters are not an outcome of inherent differences between those two domains, but rather due to the nature of the tasks often associated with those different stimuli. © 2012 Elsevier Ltd.


Grant
Agency: GTR | Branch: MRC | Program: | Phase: Intramural | Award Amount: 3.78M | Year: 2009

This programme aims to improve our understanding of the function and connectivity of the auditory system. Sound waves in the air are transformed by the exquisitely sensitive structures of the ear into fluid movements that activate sensory hair cells in the inner ear (cochlea), which then communicate these signals to the brain, where they can be analysed. Information is passed upwards from the cochlear nucleus, through a series of relay points, to the auditory cortex, where sounds are consciously perceived. However, alongside this ascending path there is an equally important descending path that modulates the ascending information depending on context. We will study the interactions between these pathways, both at the relay points and in the cortex, using complex stimuli such as animal vocalizations. We will also study the consequences of hearing loss in creating an imbalance between the two pathways and in inducing tinnitus. We will use an animal model of tinnitus to study the mechanisms that lead to tinnitus and to evaluate drug therapies that may reduce its auditory aspects.
