Nottingham, United Kingdom

Stacey P.C., Nottingham Trent University | Murphy T., Nottingham Trent University | Sumner C.J., Institute of Hearing Research | Kitterick P.T., National Health Research Institute | Roberts K.L., University of Warwick
Journal of Experimental Psychology: Human Perception and Performance | Year: 2014

Previous research (e.g., McGurk & MacDonald, 1976) suggests that faces and voices are bound automatically, but recent evidence suggests that attention is involved in the task of searching for a talking face (Alsius & Soto-Faraco, 2011). We hypothesized that the processing demands of the stimuli may affect the amount of attentional resources required, and investigated what effect degrading the auditory stimulus had on the time taken to locate a talking face. Twenty participants were presented with between 2 and 4 faces articulating different sentences, and had to decide which of these faces matched the sentence that they heard. The results showed that in the least demanding auditory condition (clear speech in quiet), search times did not significantly increase when the number of faces increased. However, when speech was presented in background noise or was processed to simulate the information provided by a cochlear implant, search times increased as the number of faces increased. Thus, it seems that the amount of attentional resources required varies according to the processing demands of the auditory stimuli, and when processing load is increased, faces need to be attended to individually in order to complete the task. Based on these results, we would expect cochlear-implant users to find the task of locating a talking face more attentionally demanding than normal-hearing listeners. © 2014 American Psychological Association.
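
As a rough illustration of how the set-size effect described above can be quantified (this is not the authors' analysis code), the sketch below fits a search slope, i.e. the extra search time per added face, to hypothetical mean response times in two listening conditions; a near-flat slope is consistent with parallel processing, and a steep slope with serial attention to individual faces.

    # Sketch only: hypothetical response times, not data from the study.
    import numpy as np

    set_sizes = np.array([2, 3, 4])                      # number of faces displayed
    mean_rt_ms = {                                       # hypothetical mean search times (ms)
        "clear_speech_in_quiet": np.array([1800, 1820, 1840]),
        "vocoded_speech":        np.array([2100, 2450, 2800]),
    }

    for condition, rts in mean_rt_ms.items():
        slope, intercept = np.polyfit(set_sizes, rts, deg=1)
        print(f"{condition}: {slope:.0f} ms per additional face")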


Grimsley J.M.S., Institute of Hearing Research | Grimsley J.M.S., Northeast Ohio Medical University | Shanbhag S.J., Northeast Ohio Medical University | Palmer A.R., Institute of Hearing Research | Wallace M.N., Institute of Hearing Research
PLoS ONE | Year: 2012

Vocal communication is an important aspect of guinea pig behaviour and a large contributor to their acoustic environment. We postulated that some cortical areas have distinctive roles in processing conspecific calls. In order to test this hypothesis, we presented exemplars from all ten of their main adult vocalizations to urethane-anesthetised animals while recording from each of the eight areas of the auditory cortex. We demonstrate that the primary area (AI) and three adjacent auditory belt areas contain many units that give isomorphic responses to vocalizations. These are the ventrorostral belt (VRB), the transitional belt area (T) that is ventral to AI, and the small area (area S) that is rostral to AI. Of all the areas, VRB has the densest representation of cells that discriminate well among calls using either a rate code or a temporal code. Furthermore, 10% of VRB cells responded to communication calls but did not respond to stimuli such as clicks, broadband noise or pure tones. Area S has a sparse distribution of call-responsive cells that showed excellent temporal locking, 31% of which selectively responded to a single call. AI responded well to all vocalizations and was much more responsive to vocalizations than the adjacent dorsocaudal core area. Areas VRB, AI and S contained units with the highest levels of mutual information about call stimuli. Area T also responded well to some calls but seems to be specialized for low sound levels. The two dorsal belt areas are comparatively unresponsive to vocalizations and contain little information about the calls. AI projects to areas S, VRB and T, so there may be both rostral and ventral pathways for processing vocalizations in the guinea pig. © 2012 Grimsley et al.
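
For readers unfamiliar with the mutual-information measure mentioned above, the sketch below shows one simple way to estimate the information a unit's spike count carries about call identity (a rate code). The spike counts are hypothetical, the plug-in estimator shown ignores the bias corrections a real analysis would need, and it is not the authors' code.

    # Illustrative sketch: I(call; spike count) from paired observations.
    import numpy as np
    from collections import Counter

    def mutual_information(calls, spike_counts):
        """Plug-in estimate of mutual information (bits) between call identity and spike count."""
        n = len(calls)
        p_call = Counter(calls)
        p_count = Counter(spike_counts)
        p_joint = Counter(zip(calls, spike_counts))
        mi = 0.0
        for (c, k), n_joint in p_joint.items():
            p_ck = n_joint / n
            mi += p_ck * np.log2(p_ck / ((p_call[c] / n) * (p_count[k] / n)))
        return mi

    calls = ["whistle", "whistle", "purr", "purr", "chutter", "chutter"]
    spike_counts = [12, 14, 3, 4, 8, 7]   # spikes per presentation (hypothetical)
    print(f"I = {mutual_information(calls, spike_counts):.2f} bits")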


Dawes P., University of Manchester | Cruickshanks K.J., University of Wisconsin - Madison | Moore D.R., Cincinnati Children's Hospital Medical Center | Edmondson-Jones M., University of Nottingham | And 4 more authors.
JARO - Journal of the Association for Research in Otolaryngology | Year: 2014

The objective of this large population-based cross-sectional study was to evaluate the association between smoking, passive smoking, alcohol consumption, and hearing loss. The study sample was a subset of the UK Biobank Resource: 164,770 adults aged between 40 and 69 years who completed a speech-in-noise hearing test (the Digit Triplet Test). Hearing loss was defined as speech recognition in noise in the better ear poorer than 2 standard deviations below the mean with reference to young normally hearing listeners. In multiple logistic regression controlling for potential confounders, current smokers were more likely to have a hearing loss than non-smokers (odds ratio (OR) 1.15, 95% confidence interval (CI) 1.09-1.21). Among non-smokers, those who reported passive exposure to tobacco smoke were more likely to have a hearing loss (OR 1.28, 95% CI 1.21-1.35). For both smoking and passive smoking, there was evidence of a dose-response effect. Those who consumed alcohol were less likely to have a hearing loss than lifetime teetotalers. The association was similar across three levels of consumption by volume of alcohol (lightest 25%, OR 0.61, 95% CI 0.57-0.65; middle 50%, OR 0.62, 95% CI 0.58-0.66; heaviest 25%, OR 0.65, 95% CI 0.61-0.70). The results suggest that lifestyle factors may moderate the risk of hearing loss. Alcohol consumption was associated with a protective effect. Quitting or reducing smoking and avoiding passive exposure to tobacco smoke may also help prevent or moderate age-related hearing loss. © 2014 Association for Research in Otolaryngology.
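
The two quantitative steps named in this abstract, the 2-standard-deviation hearing-loss criterion and the odds ratio, can be illustrated with a short sketch. All numbers below are hypothetical, and the study itself used multiple logistic regression with confounders rather than the unadjusted 2 x 2 calculation shown here.

    # Sketch only (not the study's code); all values are made up.
    import numpy as np

    # (1) Hearing-loss criterion: speech reception threshold (SRT) in noise poorer
    # than 2 SD below the mean of young normal-hearing listeners (higher SRT = worse).
    young_mean_srt_db, young_sd_db = -7.0, 1.0          # hypothetical reference values
    cutoff_db = young_mean_srt_db + 2 * young_sd_db
    participant_srt_db = np.array([-7.5, -4.2, -6.8, -3.9])
    hearing_loss = participant_srt_db > cutoff_db

    # (2) Unadjusted odds ratio from a 2x2 table (counts are hypothetical).
    #                      hearing loss, no loss
    smokers     = np.array([ 300, 1700])
    non_smokers = np.array([1200, 8800])
    odds_ratio = (smokers[0] / smokers[1]) / (non_smokers[0] / non_smokers[1])
    print(hearing_loss, f"OR = {odds_ratio:.2f}")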


PubMed | University of Maryland University College, Vanderbilt University, U.S. National Institutes of Health and Institute of Hearing Research
Type: Journal Article | Journal: European journal of human genetics : EJHG | Year: 2016

Recent insight into the genetic bases of autism spectrum disorder, dyslexia, stuttering, and language disorders suggests that neurogenetic approaches may also reveal at least one etiology of auditory processing disorder (APD). A person with an APD typically has difficulty understanding speech in background noise despite having normal pure-tone hearing sensitivity. The estimated prevalence of APD may be as high as 10% in the pediatric population, yet the causes are unknown and have not been explored by molecular or genetic approaches. The aim of our study was to determine the heritability of frequency and temporal resolution for auditory signals and speech recognition in noise in 96 identical or fraternal twin pairs, aged 6-11 years. Measures of auditory processing (AP) of non-speech sounds included backward masking (temporal resolution), notched-noise masking (spectral resolution), pure-tone frequency discrimination (temporal fine structure sensitivity), and nonsense-syllable recognition in noise. We provide evidence of significant heritability, ranging from 0.32 to 0.74, for individual measures of these non-speech-based AP skills that are crucial for understanding spoken language. Identification of specific heritable AP traits such as these serves as a basis to pursue the genetic underpinnings of APD by identifying genetic variants associated with common AP disorders in children and adults.
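
The abstract does not state how heritability was estimated, so the sketch below uses Falconer's formula, h^2 = 2 * (r_MZ - r_DZ), a common first approximation from identical (MZ) and fraternal (DZ) twin correlations. Formal twin analyses usually fit ACE structural equation models instead, and the simulated data here are purely illustrative.

    # Illustrative sketch, not the study's analysis. The published estimates
    # (0.32-0.74) are from the paper; the correlations below are simulated.
    import numpy as np

    def falconer_h2(mz_pairs, dz_pairs):
        """mz_pairs, dz_pairs: arrays of shape (n_pairs, 2) holding twin scores."""
        r_mz = np.corrcoef(mz_pairs[:, 0], mz_pairs[:, 1])[0, 1]
        r_dz = np.corrcoef(dz_pairs[:, 0], dz_pairs[:, 1])[0, 1]
        return 2.0 * (r_mz - r_dz)

    rng = np.random.default_rng(0)
    mz = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=50)
    dz = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]], size=50)
    print(f"h2 ~ {falconer_h2(mz, dz):.2f}")   # about 0.6 for these simulated correlations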


Wiggins I.M., National Health Research Institute | Wiggins I.M., University of Nottingham | Hartley D.E.H., National Health Research Institute | Hartley D.E.H., University of Nottingham | Hartley D.E.H., Institute of Hearing Research
PLoS ONE | Year: 2015

Evidence from human neuroimaging and animal electrophysiological studies suggests that signals from different sensory modalities interact early in cortical processing, including in primary sensory cortices. The present study aimed to test whether functional near-infrared spectroscopy (fNIRS), an emerging, non-invasive neuroimaging technique, is capable of measuring such multisensory interactions. Specifically, we tested for a modulatory influence of sounds on activity in visual cortex, while varying the temporal synchrony between trains of transient auditory and visual events. Related fMRI studies have consistently reported enhanced activation in response to synchronous compared to asynchronous audiovisual stimulation. Unexpectedly, we found that synchronous sounds significantly reduced the fNIRS response from visual cortex, compared both to asynchronous sounds and to a visual-only baseline. It is possible that this suppressive effect of synchronous sounds reflects the use of an efficacious visual stimulus, chosen for consistency with previous fNIRS studies. Discrepant results may also be explained by differences between studies in how attention was deployed to the auditory and visual modalities. The presence and relative timing of sounds did not significantly affect performance in a simultaneously conducted behavioral task, although the data were suggestive of a positive relationship between the strength of the fNIRS response from visual cortex and the accuracy of visual target detection. Overall, the present findings indicate that fNIRS is capable of measuring multisensory cortical interactions. In multisensory research, fNIRS can offer complementary information to the more established neuroimaging modalities, and may prove advantageous for testing in naturalistic environments and with infant and clinical populations. © 2015 Wiggins, Hartley.


Amitay S., Institute of Hearing Research | Halliday L., Institute of Hearing Research | Halliday L., University College London | Taylor J., Institute of Hearing Research | And 2 more authors.
PLoS ONE | Year: 2010

Background: Although feedback on performance is generally thought to promote perceptual learning, the role and necessity of feedback remain unclear. We investigated the effect on frequency-discrimination learning of providing varying amounts of positive feedback while listeners attempted to discriminate between three identical tones. Methodology/Principal Findings: Using this novel procedure, the feedback was meaningless and random in relation to the listeners' responses, but the amount of feedback provided (or lack thereof) affected learning. We found that a group of listeners who received positive feedback on 10% of the trials improved their performance on the task (learned), while groups provided either with excess (90%) feedback or with no feedback did not learn. Superimposed on these group data, however, individual listeners showed other systematic changes of performance. In particular, those with lower non-verbal IQ who trained in the no-feedback condition performed more poorly after training. Conclusions/Significance: This pattern of results cannot be accounted for by learning models that ascribe an external teacher role to feedback. We suggest, instead, that feedback is used to monitor performance on the task in relation to its perceived difficulty, and that listeners who learn without the benefit of feedback are adept at self-monitoring of performance, a trait that also supports better performance on non-verbal IQ tests. These results show that 'perceptual' learning is strongly influenced by top-down processes of motivation and intelligence. © 2010 Amitay et al.
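
The key manipulation, response-independent positive feedback delivered on a fixed proportion of trials, is simple to express in code. The sketch below is a hypothetical reconstruction, not the original experiment software.

    # Sketch of the feedback manipulation: positive feedback is delivered on a
    # fixed proportion of trials, independent of the listener's response, so it
    # carries no information about accuracy.
    import random

    def feedback_schedule(n_trials, feedback_rate, seed=0):
        """Return True on trials where (meaningless) positive feedback is shown."""
        rng = random.Random(seed)
        return [rng.random() < feedback_rate for _ in range(n_trials)]

    for rate in (0.0, 0.1, 0.9):        # the no-feedback, 10% and 90% groups
        schedule = feedback_schedule(n_trials=100, feedback_rate=rate)
        print(f"{rate:.0%} group: feedback on {sum(schedule)} of 100 trials")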


Füllgrabe C., Institute of Hearing Research
American Journal of Audiology | Year: 2013

Purpose: In this study, the author sought to investigate if and when the ability to process temporal-fine-structure (TFS) cues deteriorates with age in adults with audiometrically normal hearing sensitivity. Method: Using a cross-sectional design, the author assessed TFS sensitivity in 102 normal-hearing adults sampled from across the entire range of adulthood (ages 18-90 years), using 2 psychophysical tests for the assessment of within- and across-ear TFS processing. Results: Both types of TFS sensitivity (monaural and binaural) declined gradually starting in young adulthood, with the earliest significant deficit already apparent in early midlife (i.e., between ages 40 and 49 years). TFS sensitivity was not correlated with absolute sensitivity at the test frequency. Conclusions: Some suprathreshold auditory processing abilities decline throughout adulthood, even when an individual's peripheral hearing is clinically normal. These deficits are not captured by a conventional audiometric hearing assessment but may contribute to the increasing difficulty with age in identifying speech in noisy environments. From a methodological point of view, the existence of such age effects warrants the use of age-matched participant groups when comparing normal and impaired peripheral hearing. © American Speech-Language-Hearing Association.


Banai K., Haifa University | Amitay S., Institute of Hearing Research
PLoS ONE | Year: 2015

Previous studies suggest fundamental differences between the perceptual learning of speech and non-speech stimuli. One major difference is in the way variability in the training set affects learning and its generalization to untrained stimuli: training-set variability appears to facilitate speech learning, while slowing or altogether extinguishing non-speech auditory learning. We asked whether this apparent difference is a consequence of the very different methodologies used in speech and non-speech studies. We hypothesized that speech and non-speech training would result in a similar pattern of learning if they were trained using the same training regimen. We used a 2 (random vs. blocked pre- and post-testing) × 2 (random vs. blocked training) × 2 (speech vs. non-speech discrimination task) study design, yielding 8 training groups. A further 2 groups acted as untrained controls, tested with either random or blocked stimuli. The speech task required syllable discrimination along 4 minimal-pair continua (e.g., bee-dee), and the non-speech task required duration discrimination around 4 base durations (e.g., 50 ms). Training and testing required listeners to pick the odd one out of three stimuli, two of which were the base duration or phoneme continuum endpoint while the third varied adaptively. Training was administered in 9 sessions of 640 trials each, spread over 4-8 weeks. Significant learning was only observed following speech training, with similar learning rates and full generalization regardless of whether training used random or blocked schedules. No learning was observed for duration discrimination with either training regimen. We therefore conclude that the two stimulus classes respond differently to the same training regimen. A reasonable interpretation of the findings is that speech is perceived categorically, enabling learning in either paradigm, while the different base durations are not well enough differentiated to allow for categorization, resulting in disruption to learning. Copyright: © 2015 Banai, Amitay.
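
As a sketch of the odd-one-out procedure described above: the abstract does not specify the adaptive rule, so the example below assumes a common 2-down/1-up staircase and simulates a listener for the duration-discrimination task. The step size and other parameters are illustrative assumptions, not values from the paper.

    # Minimal sketch of a 3-interval odd-one-out trial with an adaptive target.
    import random

    def run_track(base_ms=50.0, start_delta_ms=20.0, step=0.8, n_trials=60,
                  p_correct=0.75, seed=0):
        """Simulate an adaptive track for duration discrimination around a base duration."""
        rng = random.Random(seed)
        delta = start_delta_ms
        consecutive_correct = 0
        for _ in range(n_trials):
            intervals = [base_ms, base_ms, base_ms + delta]   # two standards + odd one out
            rng.shuffle(intervals)
            correct = rng.random() < p_correct                # stand-in for a real listener
            if correct:
                consecutive_correct += 1
                if consecutive_correct == 2:                  # 2-down: make the task harder
                    delta *= step
                    consecutive_correct = 0
            else:                                             # 1-up: make the task easier
                delta /= step
                consecutive_correct = 0
        return delta

    print(f"final duration increment ~ {run_track():.1f} ms")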


Shan Q., Institute of Hearing Research | Bullock D., Institute of Hearing Research | Palmer A.R., Institute of Hearing Research | Shackleton T.M., Institute of Hearing Research
Proceedings - 2015 IEEE 17th International Conference on High Performance Computing and Communications, 2015 IEEE 7th International Symposium on Cyberspace Safety and Security and 2015 IEEE 12th International Conference on Embedded Software and Systems, HPCC-CSS-ICESS 2015 | Year: 2015

Physiological investigations of the structure and function of the normal auditory system are technologically demanding, requiring electronic devices for generating and processing acoustic signals and for detecting, capturing and processing neural responses. Controllers for such custom electronic devices in physiological laboratories must be robust, configurable and embeddable. Recent advances in embedded systems have provided such a flexible, low-cost and reconfigurable technology: the programmable system-on-chip (PSoC), which integrates a microcontroller with programmable analogue and digital functions in a single chip. This makes it an ideal technology to be embedded in experimental equipment to allow flexible computer control. Using PSoC technology, it is possible for a single design of circuit board and firmware to control multiple applications such as attenuation, filtering and amplification. Application programs to control devices, log and process data can be developed using various software environments. In the current application, a single driver was developed for multiple software environments such as Matlab, BrainWare or Visual Studio. Here we describe an instantiation of an embedded PSoC-based hardware solution with appropriate firmware that provides full, accurate and precise control of a very complex and sophisticated auditory physiological laboratory that includes attenuators, low-pass filters and amplifiers. © 2015 IEEE.
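
The paper's driver targets Matlab, BrainWare and Visual Studio, and its command protocol is not described in the abstract. Purely as an illustration of the architecture (a thin host-side wrapper sending commands to the embedded PSoC controller), a hypothetical Python sketch over a serial link might look as follows; the port name and the "ATT"/"LPF" command strings are invented.

    # Hypothetical sketch of a host-side wrapper for a PSoC-based lab controller.
    # The serial settings and command strings are invented for illustration.
    import serial   # pyserial

    class LabController:
        def __init__(self, port="COM3", baudrate=115200, timeout=1.0):
            self.link = serial.Serial(port, baudrate=baudrate, timeout=timeout)

        def _send(self, command):
            self.link.write((command + "\n").encode("ascii"))
            return self.link.readline().decode("ascii").strip()   # e.g. "OK"

        def set_attenuation(self, channel, db):
            return self._send(f"ATT {channel} {db:.1f}")

        def set_lowpass_cutoff(self, channel, hz):
            return self._send(f"LPF {channel} {hz:.0f}")

    ctl = LabController()
    ctl.set_attenuation(channel=0, db=20.0)     # 20 dB attenuation on channel 0
    ctl.set_lowpass_cutoff(channel=0, hz=8000)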


Laudanski J., University of Nottingham | Coombes S., University of Nottingham | Palmer A.R., Institute of Hearing Research | Sumner C.J., Institute of Hearing Research
Journal of Neurophysiology | Year: 2010

We report evidence of mode-locking to the envelope of a periodic stimulus in chopper units of the ventral cochlear nucleus (VCN). Mode-locking is a generalized description of how responses in periodically forced nonlinear systems can be closely linked to the input envelope, while showing temporal patterns of higher order than seen during pure phase-locking. Re-analyzing a previously unpublished dataset in response to amplitude-modulated tones, we find that 55% of cells (6/11) demonstrated stochastic mode-locking in response to sinusoidally amplitude modulated (SAM) pure tones at 50% modulation depth. At 100% modulation depth, most units (3/4) showed mode-locking. We use interspike interval (ISI) scattergrams to unravel the temporal structure present in chopper mode-locked responses. These responses compared well to a leaky integrate-and-fire (LIF) model of chopper units. Thus the timing of spikes in chopper unit responses to periodic stimuli can be understood in terms of the complex dynamics of periodically forced nonlinear systems. A larger set of onset (33) and chopper (24) units of the VCN also shows mode-locked responses to steady-state vowels and cosine-phase harmonic complexes. However, while 80% of chopper responses to complex stimuli meet our criterion for the presence of mode-locking, only 40% of onset cells show similar complex modes of spike patterns. We found a correlation between a unit's regularity and its tendency to display mode-locked spike trains, as well as a correlation between the number of spikes per cycle and the presence of complex modes of spike patterns. These spiking patterns are sensitive to the envelope as well as the fundamental frequency of complex sounds, suggesting that complex cell dynamics may play a role in encoding periodic stimuli and envelopes in the VCN. Copyright © 2010 The American Physiological Society.
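
The leaky integrate-and-fire comparison mentioned above can be illustrated with a small simulation. The sketch below drives an LIF neuron with the envelope of a 100%-depth SAM tone and expresses interspike intervals in modulation cycles, where mode-locking appears as clustering near integer values; the parameters are illustrative, not the paper's fitted values.

    # Illustrative LIF simulation; parameters are not the paper's fitted values.
    import numpy as np

    fs = 100_000                     # sampling rate (Hz)
    dur, fm = 1.0, 100.0             # duration (s), modulation frequency (Hz)
    t = np.arange(0, dur, 1 / fs)
    envelope = 0.5 * (1 + np.cos(2 * np.pi * fm * t))   # 100%-depth SAM envelope

    tau, v_thresh, v_reset = 5e-3, 1.0, 0.0             # membrane constants (illustrative)
    gain, noise_sd = 2.5, 0.05

    rng = np.random.default_rng(1)
    v, spike_times = 0.0, []
    for i, drive in enumerate(envelope):
        dv = (-v + gain * drive) / tau * (1 / fs) + noise_sd * np.sqrt(1 / fs) * rng.standard_normal()
        v += dv
        if v >= v_thresh:
            spike_times.append(t[i])
            v = v_reset

    isis_in_cycles = np.diff(spike_times) * fm
    print(np.round(isis_in_cycles[:10], 2))   # mode-locking shows up as near-integer ISIs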
