Zhang Y.-X., Medical Research Council Institute of Hearing Research | Zhang Y.-X., Beijing Normal University | Barry J.G., Medical Research Council Institute of Hearing Research | Moore D.R., Medical Research Council Institute of Hearing Research | Amitay S., Medical Research Council Institute of Hearing Research
PLoS ONE | Year: 2012

Attention modulates auditory perception, but there are currently no simple tests that specifically quantify this modulation. To fill the gap, we developed a new, easy-to-use test of attention in listening (TAIL) based on reaction time. On each trial, two clearly audible tones were presented sequentially, either at the same or different ears. The frequency of the tones was also either the same or different (by at least two critical bands). When the task required same/different frequency judgments, presentation at the same ear significantly speeded responses and reduced errors. A same/different ear (location) judgment was likewise facilitated by keeping tone frequency constant. Perception was thus influenced by involuntary orienting of attention along the task-irrelevant dimension. When information in the two stimulus dimensions was congruent (same-frequency same-ear, or different-frequency different-ear), responses were faster and more accurate than when it was incongruent (same-frequency different-ear, or different-frequency same-ear), suggesting the involvement of executive control to resolve conflicts. In total, the TAIL yielded five independent outcome measures: (1) baseline reaction time, indicating information processing efficiency, (2) involuntary orienting of attention to frequency and (3) location, and (4) conflict resolution for frequency and (5) location. Processing efficiency and conflict resolution accounted for up to 45% of individual variance in the low- and high-threshold variants of three psychoacoustic tasks assessing temporal and spectral processing. Involuntary orienting of attention to the irrelevant dimension did not correlate with perceptual performance on these tasks. Given that TAIL measures are unlikely to be limited by perceptual sensitivity, we suggest that the correlations reflect modulation of perceptual performance by attention.
The TAIL thus has the power to identify and separate contributions of different components of attention to auditory perception. © 2012 Zhang et al.
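Each of the TAIL's derived measures is, in essence, a contrast between mean reaction times across trial types. A minimal sketch of how such measures could be computed from trial-level data; the function and field names here are illustrative assumptions, not from the published test:

```python
from statistics import mean

def tail_measures(trials):
    """Compute illustrative TAIL-style measures for the frequency-judgment
    task. Each trial is a dict with 'rt' (ms) and booleans 'same_freq' and
    'same_ear'; the location-judgment task would be analogous."""
    def mean_rt(pred):
        return mean(t["rt"] for t in trials if pred(t))

    # (1) Baseline RT: overall mean, indexing processing efficiency.
    baseline = mean_rt(lambda t: True)
    # (2/3) Involuntary orienting: RT cost of a change in the
    # task-irrelevant dimension (here, ear) relative to no change.
    orienting = mean_rt(lambda t: not t["same_ear"]) - mean_rt(lambda t: t["same_ear"])
    # (4/5) Conflict resolution: incongruent trials (same-frequency
    # different-ear, or different-frequency same-ear) minus congruent ones.
    congruent = mean_rt(lambda t: t["same_freq"] == t["same_ear"])
    incongruent = mean_rt(lambda t: t["same_freq"] != t["same_ear"])
    return baseline, orienting, incongruent - congruent
```

Positive orienting and conflict scores would indicate an RT cost of the irrelevant change and of incongruence, respectively.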


PubMed | Medical Research Council Institute of Hearing Research and University of Salamanca
Journal: Journal of Visualized Experiments (JoVE) | Year: 2016

Differences in the activity of neurotransmitters and neuromodulators, and consequently different neural responses, can be found between anesthetized and awake animals. Therefore, methods allowing the manipulation of synaptic systems in awake animals are required in order to determine the contribution of synaptic inputs to neuronal processing unaffected by anesthetics. Here, we present methodology for the construction of electrodes to simultaneously record extracellular neural activity and release multiple neuroactive substances in the vicinity of the recording sites in awake mice. By combining these procedures, we performed microiontophoretic injections of gabazine to selectively block GABAA receptors in neurons of the inferior colliculus of head-restrained mice. Gabazine successfully modified neural response properties such as the frequency response area and stimulus-specific adaptation. Thus, we demonstrate that our methods are suitable for recording single-unit activity and for dissecting the role of specific neurotransmitter receptors in auditory processing. The main limitation of the described procedure is the relatively short recording time (~3 hr), which is determined by the level of habituation of the animal to the recording sessions. On the other hand, multiple recording sessions can be performed in the same animal. The advantage of this technique over other experimental procedures used to manipulate the level of neurotransmission or neuromodulation (such as systemic injections or the use of optogenetic models) is that the drug effect is confined to the local synaptic inputs to the target neuron. In addition, the custom manufacture of electrodes allows adjustment of specific parameters according to the neural structure and type of neuron of interest (such as the tip resistance for improving the signal-to-noise ratio of the recordings).


Ferguson M.A., National Health Research Institute | Henshaw H., National Health Research Institute | Clark D.P.A., National Health Research Institute | Moore D.R., National Health Research Institute | And 2 more authors.
Ear and Hearing | Year: 2014

OBJECTIVES: The aims of this study were to (i) evaluate the efficacy of phoneme discrimination training for hearing and cognitive abilities of adults aged 50 to 74 years with mild sensorineural hearing loss who were not users of hearing aids, and (ii) determine participant compliance with a self-administered, computer-delivered, home- and game-based auditory training program. DESIGN: This study was a randomized controlled trial with repeated measures and crossover design. Participants were trained and tested over an 8- to 12-week period. One group (Immediate Training) trained during weeks 1 to 4. A second, waitlist group (Delayed Training) did no training during weeks 1 to 4, but then trained during weeks 5 to 8. On-task (phoneme discrimination) and transferable outcome measures (speech perception, cognition, self-report of hearing disability) for both groups were obtained during weeks 0, 4, and 8, and for the Delayed Training group only at week 12. RESULTS: Robust phoneme discrimination learning was found for both groups, with the largest improvements in threshold shown for those with the poorest initial thresholds. Between weeks 1 and 4, the Immediate Training group showed moderate, significant improvements on self-report of hearing disability, divided attention, and working memory, specifically for conditions or situations that were more complex and therefore more challenging. Training did not result in consistent improvements in speech perception in noise. There was no evidence of any test-retest effects between weeks 1 and 4 for the Delayed Training group. Retention of benefit at 4 weeks post-training was shown for phoneme discrimination, divided attention, working memory, and self-report of hearing disability. Improved divided attention and reduced self-reported hearing difficulties were highly correlated. CONCLUSIONS: It was observed that phoneme discrimination training benefits some but not all people with mild hearing loss.
Evidence presented here, together with that of other studies that used different training stimuli, suggests that auditory training may facilitate cognitive skills that index executive function and the self-perception of hearing difficulty in challenging situations. The development of cognitive skills may be more important than the development of sensory skills for improving communication and speech perception in everyday life. However, improvements were modest. Outcome measures need to be appropriately challenging to be sensitive to the effects of the relatively small amount of training performed. Copyright © 2014 by Lippincott Williams & Wilkins.


PubMed | Medical Research Council Institute of Hearing Research and National Health Research Institute
Journal: Frontiers in Psychology | Year: 2016

Good speech perception and communication skills in everyday life are crucial for participation and well-being, and are therefore an overarching aim of auditory rehabilitation. Both behavioral and self-report measures can be used to assess these skills. However, correlations between behavioral and self-report speech perception measures are often low. One possible explanation is that there is a mismatch between the specific situations used in the assessment of these skills in each method, and a more careful matching across situations might improve consistency of results. The role that cognition plays in specific speech situations may also be important for understanding communication, as speech perception tests vary in their cognitive demands. In this study, the role of executive function, working memory (WM) and attention in behavioral and self-report measures of speech perception was investigated. Thirty existing hearing aid users with mild-to-moderate hearing loss aged between 50 and 74 years completed a behavioral test battery with speech perception tests ranging from phoneme discrimination in modulated noise (easy) to words in multi-talker babble (medium) and keyword perception in a carrier sentence against a distractor voice (difficult). In addition, a self-report measure of aided communication, residual disability from the Glasgow Hearing Aid Benefit Profile, was obtained. Correlations between speech perception tests and self-report measures were higher when specific speech situations across both were matched. Cognition correlated with behavioral speech perception test results but not with self-report. Only the most difficult speech perception test, keyword perception in a carrier sentence with a competing distractor voice, engaged executive functions in addition to WM. In conclusion, any relationship between behavioral and self-report speech perception is not mediated by a shared correlation with cognition.


Maidment D.W., Medical Research Council Institute of Hearing Research | Kang H., Medical Research Council Institute of Hearing Research | Kang H., Ecole Normale Superieure de Paris | Gill E.C., Medical Research Council Institute of Hearing Research | Amitay S., Medical Research Council Institute of Hearing Research
PLoS ONE | Year: 2015

Learning is considered to consist of two distinct phases: acquisition and consolidation. Acquisition can be disrupted when short periods of training on more than one task are interleaved, whereas consolidation can be disrupted when a second task is trained after the first has been initiated. Here we investigated the conditions governing the disruption to acquisition and consolidation during mixed-training regimens in which primary and secondary amplitude modulation tasks were either interleaved or presented consecutively. The secondary task differed from the primary task in either task-irrelevant (carrier frequency) or task-relevant (modulation rate) stimulus features while requiring the same perceptual judgment (amplitude modulation depth discrimination), or shared both irrelevant and relevant features but required a different judgment (amplitude modulation rate discrimination). Based on previous literature we predicted that acquisition would be disrupted by varying the task-relevant stimulus feature during training (stimulus interference), and that consolidation would be disrupted by varying the perceptual judgment required (task interference). We found that varying the task-relevant or -irrelevant stimulus features failed to disrupt acquisition but did disrupt consolidation, whereas mixing two tasks requiring a different perceptual judgment but sharing the same stimulus features disrupted both acquisition and consolidation. Thus, a distinction between acquisition and consolidation phases of perceptual learning cannot simply be attributed to (task-relevant) stimulus versus task interference. We propose instead that disruption occurs during acquisition when mixing two tasks requiring a perceptual judgment based on different cues, whereas consolidation is always disrupted regardless of whether different stimulus features or tasks are mixed.
The current study not only provides a novel insight into the underlying mechanisms of perceptual learning, but also has practical implications for the optimal design and delivery of training programs that aim to remediate perceptual difficulties. © 2015 Maidment et al.


Molloy K., Medical Research Council Institute of Hearing Research | Moore D.R., Medical Research Council Institute of Hearing Research | Sohoglu E., Medical Research Council Institute of Hearing Research | Sohoglu E., Medical Research Council Cognition and Brain Sciences Unit | Amitay S., Medical Research Council Institute of Hearing Research
PLoS ONE | Year: 2012

Background: The time course and outcome of perceptual learning can be affected by the length and distribution of practice, but the training regimen parameters that govern these effects have received little systematic study in the auditory domain. We asked whether there was a minimum requirement on the number of trials within a training session for learning to occur, whether there was a maximum limit beyond which additional trials became ineffective, and whether multiple training sessions provided benefit over a single session. Methodology/Principal Findings: We investigated the efficacy of different regimens that varied in the distribution of practice across training sessions and in the overall amount of practice received on a frequency discrimination task. While learning was relatively robust to variations in regimen, the group with the shortest training sessions (~8 min) had significantly faster learning in early stages of training than groups with longer sessions. In later stages, the group with the longest training sessions (>1 hr) showed slower learning than the other groups, suggesting overtraining. Between-session improvements were inversely correlated with performance; they were largest at the start of training and reduced as training progressed. In a second experiment we found no additional longer-term improvement in performance, retention, or transfer of learning for a group that trained over 4 sessions (~4 hr in total) relative to a group that trained for a single session (~1 hr). However, the mechanisms of learning differed; the single-session group continued to improve in the days following cessation of training, whereas the multi-session group showed no further improvement once training had ceased. Conclusions/Significance: Shorter training sessions were advantageous because they allowed for more latent, between-session and post-training learning to emerge. 
These findings suggest that efficient regimens should use short training sessions and optimized spacing between sessions. © 2012 Molloy et al.


Barrett D.J.K., University of Leicester | Zobay O., Medical Research Council Institute of Hearing Research
PLoS ONE | Year: 2014

Simultaneous search for two targets has been shown to be slower and less accurate than independent searches for the same two targets. Recent research suggests this 'dual-target cost' may be attributable to a limit in the number of target-templates that can guide search at any one time. The current study investigated this possibility by comparing behavioural responses during single- and dual-target searches for targets defined by their orientation. The results revealed an increase in reaction times for dual- compared to single-target searches that was largely independent of the number of items in the display. Response accuracy also decreased on dual- compared to single-target searches: dual-target accuracy was higher than predicted by a model restricting search guidance to a single target-template and lower than predicted by a model simulating two independent single-target searches. These results are consistent with a parallel model of dual-target search in which attentional control is exerted by more than one target-template at a time. The requirement to maintain two target-templates simultaneously, however, appears to impose a reduction in the specificity of the memory representation that guides search for each target. © 2014 Barrett, Zobay.


PubMed | Medical Research Council Institute of Hearing Research and Haifa University
Type: Journal Article | Journal: PLoS ONE | Year: 2015

Previous studies suggest fundamental differences between the perceptual learning of speech and non-speech stimuli. One major difference is in the way variability in the training set affects learning and its generalization to untrained stimuli: training-set variability appears to facilitate speech learning, while slowing or altogether extinguishing non-speech auditory learning. We asked whether the reason for this apparent difference is a consequence of the very different methodologies used in speech and non-speech studies. We hypothesized that speech and non-speech training would result in a similar pattern of learning if they were trained using the same training regimen. We used a 2 (random vs. blocked pre- and post-testing) × 2 (random vs. blocked training) × 2 (speech vs. non-speech discrimination task) study design, yielding 8 training groups. A further 2 groups acted as untrained controls, tested with either random or blocked stimuli. The speech task required syllable discrimination along 4 minimal-pair continua (e.g., bee-dee), and the non-speech task required duration discrimination around 4 base durations (e.g., 50 ms). Training and testing required listeners to pick the odd one out of three stimuli, two of which were the base duration or phoneme continuum endpoint and the third varied adaptively. Training was administered in 9 sessions of 640 trials each, spread over 4-8 weeks. Significant learning was only observed following speech training, with similar learning rates and full generalization regardless of whether training used random or blocked schedules. No learning was observed for duration discrimination with either training regimen. We therefore conclude that the two stimulus classes respond differently to the same training regimen.
A reasonable interpretation of the findings is that speech is perceived categorically, enabling learning in either paradigm, while the different base durations are not well-enough differentiated to allow for categorization, resulting in disruption to learning.
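The odd-one-out trials with an adaptively varied third stimulus describe a standard adaptive-track procedure. The abstract does not specify the adaptive rule, so the sketch below assumes a common 3-down-1-up staircase on the duration increment, purely for illustration:

```python
def run_staircase(respond, start_delta=25.0, step=1.25, n_trials=60):
    """3-down-1-up adaptive track on a duration increment (ms).

    `respond(delta)` returns True for a correct odd-one-out response.
    Three consecutive correct responses shrink delta (harder); any error
    grows it (easier), so the track converges near 79.4% correct.
    """
    delta, correct_run = start_delta, 0
    reversals, last_direction = [], None
    for _ in range(n_trials):
        if respond(delta):
            correct_run += 1
            if correct_run == 3:            # harder after 3 correct in a row
                correct_run = 0
                delta /= step
                if last_direction == "up":
                    reversals.append(delta)
                last_direction = "down"
        else:                               # easier after any error
            correct_run = 0
            delta *= step
            if last_direction == "down":
                reversals.append(delta)
            last_direction = "up"
    return delta, reversals
```

The discrimination threshold is then typically estimated from the mean of the final reversal points of the track.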
