Lartseva A., Radboud University Nijmegen | Lartseva A., International Max Planck Research School for Language Sciences | Dijkstra T., Radboud University Nijmegen | Kan C.C., Radboud University Nijmegen | And 2 more authors.
Journal of Autism and Developmental Disorders | Year: 2014

This study investigated processing of emotion words in autism spectrum disorders (ASD) using reaction times and event-related potentials (ERP). Adults with (n = 21) and without (n = 20) ASD performed a lexical decision task on emotion and neutral words while their brain activity was recorded. Both groups showed faster responses to emotion words than to neutral words, suggesting intact early processing of emotion in ASD. In the ERPs, the control group showed a typical late positive component (LPC) at 400–600 ms for emotion words compared to neutral words, while the ASD group showed no LPC. The between-group difference in LPC amplitude was significant, suggesting that emotion words were processed differently by individuals with ASD, even though their behavioral performance was similar to that of typical individuals. © 2014, Springer Science+Business Media New York.

Tsuji S., Radboud University Nijmegen | Tsuji S., International Max Planck Research School for Language Sciences | Cristia A., French School for Advanced Studies in the Social Sciences | Cristia A., Max Planck Institute for Psycholinguistics
Developmental Psychobiology | Year: 2014

Although the majority of evidence on perceptual narrowing in speech sounds is based on consonants, most models of infant speech perception generalize these findings to vowels, assuming that perception of vowel sounds present in the infant's native language improves within the first year of life, while perception of non-native vowel sounds deteriorates over the same period. The present study assesses the extent to which these descriptions are accurate in the first comprehensive quantitative meta-analysis of perceptual narrowing in infant vowel discrimination, covering results from behavioral, electrophysiological, and neuroimaging methods applied to infants 0-14 months of age. An analysis of effect sizes for native and non-native vowel discrimination over the first year of life revealed that they changed with age in opposite directions, with both changes significant by about 6 months of age. © 2013 Wiley Periodicals, Inc.
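
The aggregation step in a quantitative meta-analysis like the one above can be illustrated with a minimal fixed-effect (inverse-variance weighted) combination of standardized effect sizes. This is a generic sketch, not the authors' analysis pipeline, and the per-study values below are invented for illustration:

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted mean effect size and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Invented per-study effect sizes (Cohen's d) for native vowel
# discrimination, with their sampling variances
native_d = [0.30, 0.55, 0.42]
native_var = [0.04, 0.09, 0.06]
pooled, se = fixed_effect_meta(native_d, native_var)
```

Studies with smaller sampling variance (typically larger samples) pull the pooled estimate toward their own effect size; a random-effects model would additionally add a between-study variance component to each weight.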

Dolscheid S., Max Planck Institute for Psycholinguistics | Dolscheid S., International Max Planck Research School for Language Sciences | Dolscheid S., University of Cologne | Hunnius S., Radboud University Nijmegen | And 3 more authors.
Psychological Science | Year: 2014

People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be "high" or "low" (i.e., height-pitch association), whereas in other languages, pitches are described as "thin" or "thick" (i.e., thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people's nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants' sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings means that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants. © The Author(s) 2014.

van de Velde M., Max Planck Institute for Psycholinguistics | van de Velde M., International Max Planck Research School for Language Sciences | Meyer A.S., Max Planck Institute for Psycholinguistics | Meyer A.S., Donders Institute for Brain | And 2 more authors.
Journal of Memory and Language | Year: 2014

When formulating simple sentences to describe pictured events, speakers look at the referents they are describing in the order of mention. Accounts of incrementality in sentence production rely heavily on analyses of this gaze-speech link. To identify systematic sources of variability in message and sentence formulation, two experiments evaluated differences in formulation for sentences describing "easy" and "hard" events (more codable and less codable events) with preferred and dispreferred structures (actives and passives). Experiment 1 employed a subliminal cuing manipulation and a cumulative priming manipulation to increase production of passive sentences. Experiment 2 examined the influence of event codability on formulation without a cuing manipulation. In both experiments, speakers showed an early preference for looking at the agent of the event when constructing active sentences. This preference was attenuated by event codability, suggesting that speakers were less likely to prioritize encoding of a single character at the outset of formulation in "easy" events than in "harder" events. Accessibility of the agent influenced formulation primarily when an event was "harder" to describe. Formulation of passive sentences in Experiment 1 also began with early fixations to the agent but changed with exposure to passive syntax: speakers were more likely to consider the patient as a suitable sentential starting point after cumulative priming. The results show that the message-to-language mapping in production can vary with the ease of encoding an event structure and of generating a suitable linguistic structure. © 2013 Elsevier Inc.

Jongman S.R., Max Planck Institute for Psycholinguistics | Jongman S.R., International Max Planck Research School for Language Sciences | Roelofs A., Max Planck Institute for Psycholinguistics | Roelofs A., Donders Institute for Brain | And 2 more authors.
Quarterly Journal of Experimental Psychology | Year: 2015

Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that these processes do require some form of attention. Here we investigated the contribution of sustained attention: the ability to maintain alertness over time. In Experiment 1, participants’ sustained attention ability was measured using auditory and visual continuous performance tasks. Subsequently, employing a dual-task procedure, participants described pictures using simple noun phrases and performed an arrow-discrimination task while their vocal and manual response times (RTs) and the durations of their gazes to the pictures were measured. Earlier research has demonstrated that gaze duration reflects language planning processes up to and including phonological encoding. The speakers’ sustained attention ability correlated with the magnitude of the tail of the vocal RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. This suggests that sustained attention was most important after phonological encoding. Experiment 2 showed that the involvement of sustained attention was significantly stronger in a dual-task situation (picture naming and arrow discrimination) than in simple naming. Thus, individual differences in maintaining attention on the production processes become especially apparent when a simultaneous second task also requires attentional resources. © 2014, © 2014 The Experimental Psychology Society.
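
The "tail of the RT distribution" referred to above can be quantified in several ways; one very simple index (not necessarily the measure used in the study, which may have fit an ex-Gaussian or examined distribution quantiles) is the proportion of responses slower than a cutoff:

```python
import statistics

def tail_proportion(rts, k=2.0):
    """Proportion of responses slower than mean + k * SD: a crude index
    of the slow tail of a reaction time distribution."""
    cutoff = statistics.mean(rts) + k * statistics.stdev(rts)
    return sum(rt > cutoff for rt in rts) / len(rts)

# Invented vocal RTs (ms); a few very slow responses fatten the tail
rts = [620, 650, 640, 700, 610, 630, 1400, 660, 1250, 645]
```

With these invented values, only the slowest response exceeds the mean + 2 SD cutoff. Note that mean and SD are themselves inflated by the slow responses, which is why distributional model fits are usually preferred over a fixed cutoff.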

Rommers J., Max Planck Institute for Psycholinguistics | Rommers J., International Max Planck Research School for Language Sciences | Rommers J., University of Illinois at Urbana-Champaign | Meyer A.S., Max Planck Institute for Psycholinguistics | And 3 more authors.
Attention, Perception, and Psychophysics | Year: 2015

During language comprehension, listeners often anticipate upcoming information. This can draw listeners’ overt attention to visually presented objects before the objects are referred to. We investigated to what extent the anticipatory mechanisms involved in such language-mediated attention rely on specific verbal factors and on processes shared with other domains of cognition. Participants listened to sentences ending in a highly predictable word (e.g., “In 1969 Neil Armstrong was the first man to set foot on the moon”) while viewing displays containing three unrelated distractor objects and a critical object, which was either the target object (e.g., a moon), an object with a similar shape (e.g., a tomato), or an unrelated control object (e.g., rice). Language-mediated anticipatory eye movements were observed to targets and to shape competitors. Importantly, looks to the shape competitor were systematically related to individual differences in anticipatory attention, as indexed by a spatial cueing task: Participants whose responses were most strongly facilitated by predictive arrow cues also showed the strongest effects of predictive language input on their eye movements. By contrast, looks to the target were related to individual differences in vocabulary size and verbal fluency. The results suggest that verbal and nonverbal factors contribute to different types of language-mediated eye movements. The findings are consistent with multiple-mechanism accounts of predictive language processing. © 2015, The Psychonomic Society, Inc.

Roswandowitz C., Max Planck Institute for Human Cognitive and Brain Sciences | Roswandowitz C., International Max Planck Research School on Neuroscience of Communication | Mathias S.R., Yale University | Hintz F., Max Planck Institute for Psycholinguistics | And 5 more authors.
Current Biology | Year: 2014

Recognizing other individuals is an essential skill in humans and in other species [1-3]. Over the last decade, it has become increasingly clear that person-identity recognition abilities are highly variable. Roughly 2% of the population has developmental prosopagnosia, a congenital deficit in recognizing others by their faces [4]. It is currently unclear whether developmental phonagnosia, a deficit in recognizing others by their voices [5], is equally prevalent, or even whether it actually exists. Here, we aimed to identify cases of developmental phonagnosia. We collected more than 1,000 data sets from self-selected German individuals by using a web-based screening test that was designed to assess their voice-recognition abilities. We then examined potentially phonagnosic individuals by using a comprehensive laboratory test battery. We found two novel cases of phonagnosia: AS, a 32-year-old female, and SP, a 32-year-old male; both are otherwise healthy academics, have normal hearing, and show no pathological abnormalities in brain structure. The two cases have comparable patterns of impairments: both performed at least 2 SDs below the level of matched controls on tests that required learning new voices, judging the familiarity of famous voices, and discriminating pitch differences between voices. In both cases, only voice-identity processing per se was affected: face recognition, speech intelligibility, emotion recognition, and musical ability were all comparable to controls. The findings confirm the existence of developmental phonagnosia as a modality-specific impairment and allow a first rough prevalence estimate. © 2014 Elsevier Ltd. All rights reserved.

Smith A.C., Max Planck Institute for Psycholinguistics | Smith A.C., International Max Planck Research School for Language Sciences | Monaghan P., Lancaster University | Huettig F., Max Planck Institute for Psycholinguistics | Huettig F., Donders Institute for Brain
Cognitive Psychology | Year: 2014

Learning to read and write requires an individual to connect additional orthographic representations to pre-existing mappings between phonological and semantic representations of words. Past empirical results suggest that the process of learning to read and write (at least in alphabetic languages) elicits changes in the language processing system, by either increasing the cognitive efficiency of mapping between representations associated with a word, or by changing the granularity of phonological processing of spoken language, or through a combination of both. Behavioural effects of literacy have typically been assessed in offline explicit tasks that have addressed only phonological processing. However, a recent eye-tracking study compared high- and low-literate participants on effects of phonology and semantics in processing measured implicitly through eye movements. High literates' eye movements were more affected by phonological overlap in online speech than low literates' were, with only subtle group differences observed for semantics. We determined whether these effects were due to cognitive efficiency and/or granularity of speech processing in a multimodal model of speech processing - the amodal shared resource model (ASR; Smith, Monaghan, & Huettig, 2013a, 2013b). We found that cognitive efficiency in the model had only a marginal effect on semantic processing and did not affect performance for phonological processing, whereas fine-grained versus coarse-grained phonological representations in the model simulated the high/low literacy effects on phonological processing, suggesting that literacy has a focused effect in changing the grain-size of phonological mappings. © 2014 Elsevier Inc.

Neger T.M., Radboud University Nijmegen | Neger T.M., International Max Planck Research School for Language Sciences | Rietveld T., Radboud University Nijmegen | Janse E., Radboud University Nijmegen
Frontiers in Human Neuroscience | Year: 2014

Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with 60 meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance, and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly. © 2014 Neger, Rietveld and Janse.
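
Artificial grammar learning tasks of the kind mentioned above typically expose participants to strings generated by a finite-state grammar and then test classification of novel strings as grammatical or not. A minimal sketch, with an invented grammar (not the one used in the study):

```python
import random

# A small finite-state grammar; transitions map a state to
# (emitted symbol, next state) options. Invented for illustration.
GRAMMAR = {
    "S0": [("T", "S1"), ("P", "S2")],
    "S1": [("S", "S1"), ("X", "S2")],
    "S2": [("V", "S3"), ("X", "S1")],
    "S3": [],  # accepting state
}

def generate(rng, max_len=8):
    """Walk the grammar from S0 to the accepting state, emitting symbols;
    retry if the length cap is hit before an accepting state is reached."""
    while True:
        state, out = "S0", []
        while GRAMMAR[state]:
            if len(out) >= max_len:
                break
            sym, state = rng.choice(GRAMMAR[state])
            out.append(sym)
        if not GRAMMAR[state]:
            return "".join(out)

def is_grammatical(s):
    """Check a string by simulating the automaton from S0."""
    state = "S0"
    for ch in s:
        nxt = {sym: st for sym, st in GRAMMAR[state]}
        if ch not in nxt:
            return False
        state = nxt[ch]
    return not GRAMMAR[state]  # must end in the accepting state
```

At test, endorsement of novel grammatical strings over matched ungrammatical ones (e.g., "TXV" vs. "TV") indexes how much of the grammar's regularities the participant picked up.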

Piai V., Radboud University Nijmegen | Piai V., International Max Planck Research School for Language Sciences | Roelofs A., Radboud University Nijmegen | Van Der Meij R., Radboud University Nijmegen
Brain Research | Year: 2012

Picture-word interference is a widely employed paradigm to investigate lexical access in word production: speakers name pictures while trying to ignore superimposed distractor words. The distractor can be congruent with the picture (pictured cat, word cat), categorically related (pictured cat, word dog), or unrelated (pictured cat, word pen). Categorically related distractors slow down picture naming relative to unrelated distractors, an effect known as semantic interference. They also slow down naming relative to congruent distractors, analogous to findings in the colour-word Stroop task. The locus of semantic interference and Stroop-like effects in naming performance has recently become a topic of debate. Whereas some researchers argue for a pre-lexical locus of semantic interference and a lexical locus of Stroop-like effects, others localise both effects at the lexical selection stage. We investigated the time course of semantic and Stroop-like interference effects in overt picture naming by means of event-related potentials (ERP) and time-frequency analyses, using cluster-based permutation for the statistical analyses. Naming latencies showed both semantic and Stroop-like interference effects. The ERP waveforms for congruent stimuli started to diverge statistically from those for categorically related stimuli around 250 ms, with deflections for the categorically related condition more negative-going than for the congruent condition (the Stroop-like effect). The time-frequency analysis revealed a power increase in the beta band (12-30 Hz) for categorically related relative to unrelated stimuli roughly between 250 and 370 ms (the semantic effect). The common time window of these effects suggests that both semantic interference and Stroop-like effects emerged during lexical selection. © 2012 Elsevier B.V.
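
The cluster-based permutation logic behind analyses like the one above can be sketched in a simplified one-dimensional (time-only) form: threshold a timepoint-wise paired t statistic, sum t within contiguous same-sign suprathreshold runs, and compare the observed cluster mass against a null distribution built by randomly sign-flipping each participant's condition difference. This is a didactic sketch under those assumptions, not the pipeline used in the study (standard implementations also handle electrodes and frequencies):

```python
import random

def paired_t(diffs):
    """t statistic of per-subject condition differences at one timepoint
    (0.0 returned for the degenerate zero-variance case, a sketch shortcut)."""
    n = len(diffs)
    m = sum(diffs) / n
    var = sum((d - m) ** 2 for d in diffs) / (n - 1)
    return m / (var / n) ** 0.5 if var > 0 else 0.0

def clusters(tvals, thresh):
    """Masses (summed t) of contiguous same-sign suprathreshold runs."""
    out, mass, sign = [], 0.0, 0
    for t in tvals:
        s = 1 if t > thresh else -1 if t < -thresh else 0
        if s and s == sign:
            mass += t
        else:
            if sign:
                out.append(mass)
            mass, sign = (t, s) if s else (0.0, 0)
    if sign:
        out.append(mass)
    return out

def cluster_permutation_test(cond_a, cond_b, thresh=2.0, n_perm=500, seed=0):
    """Largest observed cluster mass and its permutation p value.
    cond_a, cond_b: per-subject time courses (subjects x timepoints)."""
    rng = random.Random(seed)
    diffs = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(cond_a, cond_b)]
    n_sub, n_time = len(diffs), len(diffs[0])
    tcourse = [paired_t([diffs[s][t] for s in range(n_sub)])
               for t in range(n_time)]
    observed = clusters(tcourse, thresh)
    if not observed:
        return None  # nothing suprathreshold to test
    obs_max = max(abs(c) for c in observed)
    null_max = []
    for _ in range(n_perm):
        # Randomly flip each subject's difference: valid under the null
        # hypothesis that the two conditions are exchangeable
        signs = [rng.choice((-1, 1)) for _ in range(n_sub)]
        t_perm = [paired_t([signs[s] * diffs[s][t] for s in range(n_sub)])
                  for t in range(n_time)]
        null_max.append(max((abs(c) for c in clusters(t_perm, thresh)),
                            default=0.0))
    p = sum(m >= obs_max for m in null_max) / n_perm
    return obs_max, p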
