Institute of Hearing Research, Nottingham, United Kingdom

Wiggins I.M., National Health Research Institute | Wiggins I.M., University of Nottingham | Hartley D.E.H., National Health Research Institute | Hartley D.E.H., University of Nottingham | Hartley D.E.H., Institute of Hearing Research
PLoS ONE | Year: 2015

Evidence from human neuroimaging and animal electrophysiological studies suggests that signals from different sensory modalities interact early in cortical processing, including in primary sensory cortices. The present study aimed to test whether functional near-infrared spectroscopy (fNIRS), an emerging, non-invasive neuroimaging technique, is capable of measuring such multisensory interactions. Specifically, we tested for a modulatory influence of sounds on activity in visual cortex, while varying the temporal synchrony between trains of transient auditory and visual events. Related fMRI studies have consistently reported enhanced activation in response to synchronous compared to asynchronous audiovisual stimulation. Unexpectedly, we found that synchronous sounds significantly reduced the fNIRS response from visual cortex, compared both to asynchronous sounds and to a visual-only baseline. It is possible that this suppressive effect of synchronous sounds reflects the use of an efficacious visual stimulus, chosen for consistency with previous fNIRS studies. Discrepant results may also be explained by differences between studies in how attention was deployed to the auditory and visual modalities. The presence and relative timing of sounds did not significantly affect performance in a simultaneously conducted behavioral task, although the data were suggestive of a positive relationship between the strength of the fNIRS response from visual cortex and the accuracy of visual target detection. Overall, the present findings indicate that fNIRS is capable of measuring multisensory cortical interactions. In multisensory research, fNIRS can offer complementary information to the more established neuroimaging modalities, and may prove advantageous for testing in naturalistic environments and with infant and clinical populations. © 2015 Wiggins, Hartley.
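
The core manipulation described above (trains of transient auditory and visual events that either share onset times or are timed independently) can be illustrated with a brief sketch. The event count, mean interval, and jitter below are placeholder assumptions, not the study's stimulus parameters.

```python
# Illustrative sketch: generating synchronous vs. asynchronous audiovisual
# event trains, in the spirit of the manipulation described above.
# The event count, mean interval, and jitter are placeholder assumptions.
import random

def event_train(n_events=20, mean_interval_s=0.5, jitter_s=0.2):
    """Return onset times (s) for a train of transient events with jittered spacing."""
    t, onsets = 0.0, []
    for _ in range(n_events):
        t += mean_interval_s + random.uniform(-jitter_s, jitter_s)
        onsets.append(round(t, 3))
    return onsets

visual_onsets = event_train()

# Synchronous condition: each sound shares an onset with a visual transient.
synchronous_auditory_onsets = list(visual_onsets)

# Asynchronous condition: sounds follow an independently generated train,
# so individual sounds no longer coincide with the visual transients.
asynchronous_auditory_onsets = event_train()

print(visual_onsets[:5], asynchronous_auditory_onsets[:5])
```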


Fullgrabe C., Institute of Hearing Research
American Journal of Audiology | Year: 2013

Purpose: In this study, the author sought to investigate if and when the ability to process temporal-fine-structure (TFS) cues deteriorates with age in adults with audiometrically normal hearing sensitivity. Method: Using a cross-sectional design, the author assessed TFS sensitivity in 102 normal-hearing adults sampled from across the entire range of adulthood (ages 18-90 years), using 2 psychophysical tests for the assessment of within- and across-ear TFS processing. Results: Both types of TFS sensitivity (monaural and binaural) declined gradually starting in young adulthood, with the earliest significant deficit already apparent in early midlife (i.e., between ages 40 and 49 years). TFS sensitivity was not correlated with absolute sensitivity at the test frequency. Conclusions: Some suprathreshold auditory processing abilities decline throughout adulthood, even when an individual's peripheral hearing is clinically normal. These deficits are not captured by a conventional audiometric hearing assessment but may contribute to the increasing difficulty with age in identifying speech in noisy environments. From a methodological point of view, the existence of such age effects warrants the use of age-matched participant groups when comparing normal and impaired peripheral hearing. © American Speech-Language-Hearing Association.


Dawes P., University of Manchester | Cruickshanks K.J., University of Wisconsin - Madison | Moore D.R., Cincinnati Children's Hospital Medical Center | Edmondson-Jones M., University of Nottingham | And 4 more authors.
JARO - Journal of the Association for Research in Otolaryngology | Year: 2014

The objective of this large population-based cross-sectional study was to evaluate the association between smoking, passive smoking, alcohol consumption, and hearing loss. The study sample was a subset of the UK Biobank Resource: 164,770 adults aged between 40 and 69 years who completed a speech-in-noise hearing test (the Digit Triplet Test). Hearing loss was defined as speech recognition in noise in the better ear poorer than 2 standard deviations below the mean with reference to young normally hearing listeners. In multiple logistic regression controlling for potential confounders, current smokers were more likely to have a hearing loss than non-smokers (odds ratio (OR) 1.15, 95% confidence interval (CI) 1.09-1.21). Among non-smokers, those who reported passive exposure to tobacco smoke were more likely to have a hearing loss (OR 1.28, 95% CI 1.21-1.35). For both smoking and passive smoking, there was evidence of a dose-response effect. Those who consumed alcohol were less likely to have a hearing loss than lifetime teetotalers. The association was similar across three levels of consumption by volume of alcohol (lightest 25%, OR 0.61, 95% CI 0.57-0.65; middle 50%, OR 0.62, 95% CI 0.58-0.66; heaviest 25%, OR 0.65, 95% CI 0.61-0.70). The results suggest that lifestyle factors may moderate the risk of hearing loss. Alcohol consumption was associated with a protective effect. Quitting or reducing smoking and avoiding passive exposure to tobacco smoke may also help prevent or moderate age-related hearing loss. © 2014 Association for Research in Otolaryngology.
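
As a rough illustration of the kind of analysis described above (multiple logistic regression yielding odds ratios with 95% confidence intervals), the sketch below uses a small synthetic dataset and hypothetical variable names; it is not the study's actual model specification or data.

```python
# Illustrative sketch of a multiple logistic regression yielding odds ratios,
# in the spirit of the analysis described above. Variable names and the
# synthetic dataset are hypothetical placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "current_smoker": rng.integers(0, 2, n),   # 1 = current smoker (placeholder)
    "age": rng.integers(40, 70, n),            # years
})
# Synthetic binary outcome: baseline risk plus small contributions from the predictors.
logit_p = -2.0 + 0.15 * df["current_smoker"] + 0.03 * (df["age"] - 55)
df["hearing_loss"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("hearing_loss ~ current_smoker + age", data=df).fit(disp=False)

# Exponentiated coefficients are odds ratios; the same transform applied to
# the confidence limits gives each OR's 95% CI.
summary = pd.concat(
    [np.exp(model.params).rename("OR"),
     np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(summary)
```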


Stacey P.C., Nottingham Trent University | Murphy T., Nottingham Trent University | Sumner C.J., Institute of Hearing Research | Kitterick P.T., National Health Research Institute | Roberts K.L., University of Warwick
Journal of Experimental Psychology: Human Perception and Performance | Year: 2014

Previous research (e.g., McGurk & MacDonald, 1976) suggests that faces and voices are bound automatically, but recent evidence suggests that attention is involved in the task of searching for a talking face (Alsius & Soto-Faraco, 2011). We hypothesized that the processing demands of the stimuli may affect the amount of attentional resources required, and investigated what effect degrading the auditory stimulus had on the time taken to locate a talking face. Twenty participants were presented with between 2 and 4 faces articulating different sentences, and had to decide which of these faces matched the sentence that they heard. The results showed that in the least demanding auditory condition (clear speech in quiet), search times did not significantly increase when the number of faces increased. However, when speech was presented in background noise or was processed to simulate the information provided by a cochlear implant, search times increased as the number of faces increased. Thus, it seems that the amount of attentional resources required varies according to the processing demands of the auditory stimuli, and when processing load is increased, faces need to be attended to individually in order to complete the task. Based on these results, we would expect cochlear-implant users to find the task of locating a talking face more attentionally demanding than normal-hearing listeners do. © 2014 American Psychological Association.
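
The comparison of interest, how much search time grows per additional face, is often summarized as a search slope. A minimal sketch of that calculation follows, using made-up response times purely for illustration; the paper's own statistical analysis is not specified in this abstract.

```python
# Illustrative search-slope calculation: regress response time on set size
# (number of faces on screen). The response times below are made-up values.
from scipy.stats import linregress

set_sizes = [2, 3, 4, 2, 3, 4, 2, 3, 4]                             # faces per trial
rt_ms = [950, 1180, 1420, 900, 1210, 1390, 1010, 1150, 1480]        # hypothetical RTs

fit = linregress(set_sizes, rt_ms)
# A slope near zero suggests the matching face is found without inspecting each
# face in turn; a large positive slope suggests serial, attention-demanding search.
print(f"search slope: {fit.slope:.0f} ms per additional face")
```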


Banai K., Haifa University | Amitay S., Institute of Hearing Research
PLoS ONE | Year: 2015

Previous studies suggest fundamental differences between the perceptual learning of speech and non-speech stimuli. One major difference is in the way variability in the training set affects learning and its generalization to untrained stimuli: training-set variability appears to facilitate speech learning, while slowing or altogether extinguishing non-speech auditory learning. We asked whether this apparent difference is a consequence of the very different methodologies used in speech and non-speech studies. We hypothesized that speech and non-speech training would result in a similar pattern of learning if they were trained using the same training regimen. We used a 2 (random vs. blocked pre- and posttesting) × 2 (random vs. blocked training) × 2 (speech vs. non-speech discrimination task) study design, yielding 8 training groups. A further 2 groups acted as untrained controls, tested with either random or blocked stimuli. The speech task required syllable discrimination along 4 minimal-pair continua (e.g., bee-dee), and the non-speech task required duration discrimination around 4 base durations (e.g., 50 ms). Training and testing required listeners to pick the odd one out of three stimuli, two of which were the base duration or phoneme continuum endpoint and the third varied adaptively. Training was administered in 9 sessions of 640 trials each, spread over 4-8 weeks. Significant learning was only observed following speech training, with similar learning rates and full generalization regardless of whether training used random or blocked schedules. No learning was observed for duration discrimination with either training regimen. We therefore conclude that the two stimulus classes respond differently to the same training regimen. A reasonable interpretation of the findings is that speech is perceived categorically, enabling learning in either paradigm, while the different base durations are not well-enough differentiated to allow for categorization, resulting in disruption to learning. Copyright: © 2015 Banai, Amitay.
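
The odd-one-out procedure with an adaptively varied third stimulus can be sketched as a simple staircase. The 3-down/1-up rule, step sizes, and simulated listener below are illustrative assumptions; the abstract does not specify the exact adaptive rule used.

```python
# Illustrative 3-interval odd-one-out staircase for duration discrimination,
# in the spirit of the procedure described above. The 3-down/1-up rule,
# step sizes, and simulated listener are assumptions for illustration only.
import random

def run_staircase(base_ms=50.0, start_delta_ms=25.0, n_trials=60):
    delta = start_delta_ms      # difference between the odd stimulus and the base
    correct_streak = 0
    reversals = []
    last_direction = None

    for _ in range(n_trials):
        # Two intervals hold the base duration, one holds base + delta.
        durations = [base_ms, base_ms, base_ms + delta]
        random.shuffle(durations)
        odd_position = durations.index(max(durations))

        # Placeholder listener model: the odd interval is picked more reliably
        # when delta is large relative to the base duration.
        p_correct = min(0.99, 0.33 + delta / base_ms)
        response = odd_position if random.random() < p_correct else random.randrange(3)

        if response == odd_position:
            correct_streak += 1
            if correct_streak == 3:            # 3-down: make the task harder
                correct_streak = 0
                if last_direction == "up":
                    reversals.append(delta)
                last_direction = "down"
                delta *= 0.8
        else:
            correct_streak = 0
            if last_direction == "down":
                reversals.append(delta)
            last_direction = "up"
            delta *= 1.25                      # 1-up: make the task easier

    # Threshold estimate: mean of the last few reversal points.
    tail = reversals[-6:] if reversals else [delta]
    return sum(tail) / len(tail)

print(f"estimated duration-discrimination threshold: {run_staircase():.1f} ms")
```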
