Basque Center on Cognition
News Article | May 9, 2017
The way bilingual people read is conditioned by the languages they speak. This is the main conclusion reached by researchers at the Basque Center on Cognition, Brain and Language (BCBL) after reviewing the existing scientific literature and comparing it with the findings of studies at their own centre. The scientists found that the languages spoken by bilingual people (when they learned to read in two languages at the same time) affect their reading strategies and even the cognitive foundations of the capacity to read. This discovery could have implications for clinical and educational practice.

"Monolingual speakers of transparent languages - where letters are pronounced the same independently of the word they appear in, such as Basque or Spanish - have a greater tendency to use analytical reading strategies, where they read words in parts," SINC was told by Marie Lallier, one of the authors of the article. Speakers of opaque languages, on the other hand - where the sounds of letters differ depending on the word, as in English or French - are more likely to use a global reading strategy. In other words, they tend to read whole words to understand their meaning.

Nonetheless, the BCBL researchers have observed that bilingual people who learn to read in two languages at the same time do not read the same way as monolingual speakers; rather, they follow a different pattern which had not previously been described. According to the literature review, recently published in Psychonomic Bulletin & Review, a contamination effect takes place between the two reading strategies in speakers of two languages. A person learning to read in Spanish and in English will therefore have a greater tendency towards a global strategy, even when reading in Spanish, than a monolingual Spanish speaker. This effect is caused by the influence of the second language.
When reading in English, on the contrary, they will tend towards a more analytical strategy (reading by parts) than monolingual English speakers, due to "contagion" from Spanish. "The brain of a bilingual person adjusts itself in accordance with what it learns, applying the strategies needed to read in one language to reading in the other," Lallier adds.

The researchers had previously described the principal strategies used by monolingual speakers of various languages. However, the way reading strategies are modified in bilingual people when they learn to read in two languages had never before been identified. The scientists at the San Sebastián centre believe that learning to read in two languages with different characteristics from the mother tongue also causes a change in the cognitive processes that underlie reading acquisition, such as visual attention and auditory phonological processes. In other words, learning to read in an opaque language (such as English or French) reinforces our capacity to rapidly process many visual elements, because whole words must be deciphered to achieve fluent reading in these languages. As transparent languages place a much greater focus on letter-sound correspondence, learning to read in them is thought to improve our sensitivity in perceiving the sounds of the language.

The paper's authors consider their findings to have implications at several levels. From an educational standpoint, they allow a better understanding of how bilingual populations learn to read and what type of strategies are most advisable for helping pupils learn, based on the languages they know. The new discovery could also help in the diagnosis and assessment of dyslexia and other reading problems. "Language learning cannot provoke more cases of dyslexia, as the disorder is usually caused by neurogenetic factors. Our theory suggests that more language learning could make the symptoms more visible, and vice versa.
This depends on the combination of languages they are learning," Lallier explains. The languages a child knows are therefore decisive for identifying potential disorders, as this essential information would explain certain mistakes made when reading. "Our experience with languages modulates our capacity to read. This should be taken into account when teaching bilingual children to read, and if any reading problems, such as dyslexia, should appear. We need to establish specific scales for the diagnosis of dyslexia in bilingual people, because their circumstances are different," the expert concludes.

Lallier M, Carreiras M. 'Cross-linguistic transfer in bilinguals reading in two alphabetic orthographies: The grain size accommodation hypothesis'. Psychon Bull Rev 2017 Apr 12. doi: 10.3758/s13423-017-1273-0.
Nieuwland M.S., Basque Center on Cognition |
Martin A.E., Basque Center on Cognition
Cognition | Year: 2012
Propositional truth-value can be a defining feature of a sentence's relevance to the unfolding discourse, and establishing propositional truth-value in context can be key to successful interpretation. In the current study, we investigate its role in the comprehension of counterfactual conditionals, which describe imaginary consequences of hypothetical events, and are thought to require keeping in mind both what is true and what is false. Pre-stored real-world knowledge may therefore intrude upon and delay counterfactual comprehension, which is predicted by some accounts of discourse comprehension, and has been observed during online comprehension. The impact of propositional truth-value may thus be delayed in counterfactual conditionals, as also claimed for sentences containing other types of logical operators (e.g., negation, scalar quantifiers). In an event-related potential (ERP) experiment, we investigated the impact of propositional truth-value when described consequences are both true and predictable given the counterfactual premise. False words elicited larger N400 ERPs than true words, in negated counterfactual sentences (e.g., "If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would have been Russia/America") and real-world sentences (e.g., "Because N.A.S.A. developed its Apollo Project, the first country to land on the moon was America/Russia") alike. These indistinguishable N400 effects of propositional truth-value, elicited by opposite word pairs, argue against disruptions by real-world knowledge during counterfactual comprehension, and suggest that incoming words are mapped onto the counterfactual context without any delay. Thus, provided a sufficiently constraining context, propositional truth-value rapidly impacts ongoing semantic processing, be the proposition factual or counterfactual. © 2011 Elsevier B.V.
Molinaro N., Basque Center on Cognition |
Barraza P., University of Chile |
Carreiras M., Basque Center on Cognition |
NeuroImage | Year: 2013
Word reading is heavily influenced by the information provided by previous context. In this study, we analyzed the neurophysiological bases of sentence reading through the EEG activity elicited during reading the same word embedded in differently constraining contexts: a) a low-constraining context; b) a high-constraining semantic compositional context; c) a high-constraining collocational context in which the item was in final position of a multi-word fixed-order expression. Cloze-probability of the two high-constraining contexts was equated. Before reading the target word we observed increased EEG gamma phase synchronization for the high-constraining compositional context and increased EEG theta synchronization for the collocational context (both compared to the low-constraining condition). After reading the target word we observed increased frontal positive EEG evoked activity (~220 ms) for the high-constraining compositional context but an even earlier (~120 ms) effect for the high-constraining collocational condition that was distributed over the scalp. A positive correlation was found only between the increased theta synchronization and the early EEG effect for the high-constraining collocational condition. Results indicate that long-range frontal-occipital interactions in the theta band - indexing working memory operations - support early visual-orthographic analysis of an incoming stimulus (such as the expected word); gamma-phase synchronization better represents binding operations between feed-forward activation and matching feedback. These data suggest that internal linguistic knowledge stored in long-term memory - if unambiguously pre-activated - supports the low-level perceptual processes involved in reading. © 2013 Elsevier Inc.
Dumay N., Basque Center on Cognition |
Gaskell M.G., University of York
Cognition | Year: 2012
Two experiments explored the consolidation of spoken words, and assessed whether post-sleep novel competitor effects truly reflect engagement of these novel words in competition for lexical segmentation. Two types of competitor relationships were contrasted: the onset-aligned case (such as "frenzylk"), where the novel word is a close variant of the existing word: they start at the same time point and overlap on most of their segments; and the fully embedding case (such as "lirmucktoze"), where the existing word corresponds to a smaller embedded portion of its novel competitor and is thus less noticeable. Experiment 1 (pause detection) revealed a similar performance for both cases, with no competitor effect immediately after exposure, but significant inhibition after 24 h and seven days. Experiment 2 (word spotting) produced exactly the same pattern; however, as is the case with existing word carriers (cf. McQueen, Norris, & Cutler, 1994), the inhibition was much stronger for fully embedded than for onset-aligned targets (e.g., "lirmuckt" vs "frenzyl"). Meanwhile, explicit measures of learning, i.e., free recall and recognition, improved over time. These results cannot be explained by either consolidation of episodic traces or acquisition of new phonological/dialectal variants. We argue instead that they reflect a general trait of vocabulary learning and consolidation. © 2011 Elsevier B.V.
Nieuwland M.S., Basque Center on Cognition
NeuroImage | Year: 2012
What makes a proposition true or false has traditionally played an essential role in philosophical and linguistic theories of meaning. A comprehensive neurobiological theory of language must ultimately be able to explain the combined contributions of real-world truth-value and discourse context to sentence meaning. This fMRI study investigated the neural circuits that are sensitive to the propositional truth-value of sentences about counterfactual worlds, aiming to reveal differential hemispheric sensitivity of the inferior prefrontal gyri to counterfactual truth-value and real-world truth-value. Participants read true or false counterfactual conditional sentences ("If N.A.S.A. had not developed its Apollo Project, the first country to land on the moon would be Russia/America") and real-world sentences ("Because N.A.S.A. developed its Apollo Project, the first country to land on the moon has been America/Russia") that were matched on contextual constraint and truth-value. ROI analyses showed that whereas the left BA 47 showed similar activity increases to counterfactual false sentences and to real-world false sentences (compared to true sentences), the right BA 47 showed a larger increase for counterfactual false sentences. Moreover, whole-brain analyses revealed a distributed neural circuit for dealing with propositional truth-value. These results constitute the first evidence for hemispheric differences in processing counterfactual truth-value and real-world truth-value, and point toward additional right hemisphere involvement in counterfactual comprehension. © 2011 Elsevier Inc.
Baart M., Basque Center on Cognition |
Baart M., University of Tilburg |
Stekelenburg J.J., University of Tilburg |
Vroomen J., University of Tilburg
Neuropsychologia | Year: 2014
Lip-read speech is integrated with heard speech at various neural levels. Here, we investigated the extent to which lip-read induced modulations of the auditory N1 and P2 (measured with EEG) are indicative of speech-specific audiovisual integration, and we explored to what extent the ERPs were modulated by phonetic audiovisual congruency. In order to disentangle speech-specific (phonetic) integration from non-speech integration, we used Sine-Wave Speech (SWS) that was perceived as speech by half of the participants (they were in speech-mode), while the other half was in non-speech mode. Results showed that the N1 obtained with audiovisual stimuli peaked earlier than the N1 evoked by auditory-only stimuli. This lip-read induced speeding up of the N1 occurred for listeners in speech and non-speech mode. In contrast, if listeners were in speech-mode, lip-read speech also modulated the auditory P2, but not if listeners were in non-speech mode, thus revealing speech-specific audiovisual binding. Comparing ERPs for phonetically congruent audiovisual stimuli with ERPs for incongruent stimuli revealed an effect of phonetic stimulus congruency that started at ~200 ms after (in)congruence became apparent. Critically, akin to the P2 suppression, congruency effects were only observed if listeners were in speech mode, and not if they were in non-speech mode. Using identical stimuli, we thus confirm that audiovisual binding involves (partially) different neural mechanisms for sound processing in speech and non-speech mode. © 2013 Elsevier Ltd.
Martin A.E., Basque Center on Cognition |
McElree B., New York University
Journal of Memory and Language | Year: 2011
Language comprehension requires recovering meaning from linguistic form, even when the mapping between the two is indirect. A canonical example is ellipsis, the omission of information that is subsequently understood without being overtly pronounced. Comprehension of ellipsis requires retrieval of an antecedent from memory, without prior prediction, a property which enables the study of retrieval in situ (Martin & McElree, 2008, 2009). Sluicing, or inflectional-phrase ellipsis, in the presence of a conjunction, presents a test case where a competing antecedent position is syntactically licensed, in contrast with most cases of nonadjacent dependency, including verb-phrase ellipsis. We present speed-accuracy tradeoff and eye-movement data inconsistent with the hypothesis that retrieval is accomplished via a syntactically guided search, a particular variant of search not examined in past research. The observed timecourse profiles are consistent with the hypothesis that antecedents are retrieved via a cue-dependent direct-access mechanism susceptible to general memory variables. © 2011 Elsevier Inc.
Nieuwland M.S., University of Edinburgh |
Nieuwland M.S., Basque Center on Cognition
Journal of Memory and Language | Year: 2013
People can establish whether a sentence is hypothetically true even if what it describes can never be literally true given the laws of the natural world. Two event-related potential (ERP) experiments examined electrophysiological responses to sentences about unrealistic counterfactual worlds that require people to construct novel conceptual combinations and infer their consequences as the sentence unfolds in time (e.g., "If dogs had gills..."). Experiment 1 established that without this premise, described consequences (e.g., "Dobermans would breathe under water...") elicited larger N400 responses than real-world true sentences. Incorporation of the counterfactual premise in Experiment 2 generated similar N400 effects of propositional truth-value in counterfactual and real-world sentences, suggesting that the counterfactual context eliminated the interpretive problems posed by locally anomalous sentences. This result did not depend on cloze probability of the sentences. In contrast to earlier findings regarding online comprehension of logical operators and counterfactuals, these results show that ongoing processing can be directly impacted by propositional truth-value, even that of unrealistic counterfactuals. © 2012 Elsevier Inc.
News Article | October 26, 2016
Three Spanish researchers have discovered that bilinguals use different neural networks to read languages that are pronounced as they are written - such as Basque - from those in which this correspondence does not exist, like English. The results are valuable for teaching reading to adults and children.

A group of experts from the Basque Center on Cognition, Brain and Language (BCBL), in San Sebastián, has shown that different networks are activated in the brains of bilingual people when they read opaque languages - those in which words are written one way and pronounced another, like English - and transparent languages - those pronounced as they are written, such as Basque or Spanish.

Neuroscience has successfully identified two neural networks that are activated during reading. One is the dorsal stream, involved in phonology (responsible for the sounds of letters); the other is the ventral stream, which has an orthographic-semantic function (deciphering letters and their meaning). Until now it was thought that these networks were activated differently in monolinguals, depending on the demand on phonological or orthographic-semantic processes.

The novelty of the study, published in Cerebral Cortex, is that it demonstrates the importance of a language's orthography (opaque or transparent) in the activation of these neural networks in bilinguals. These results can have an impact on teaching and learning how to read for adults and children. As Myriam Oliver, Manuel Carreiras and Kepa Paz-Alonso, the three authors from the BCBL, tell SINC: "No study had ever so specifically demonstrated this effect in such a clear way."

For this purpose, the researchers analysed 36 perfectly bilingual people, speaking either Spanish and Basque or Spanish and English. They were pure bilinguals, who did not speak any third languages or had only very basic knowledge of them, and who learnt the second language later than the first, from the age of 6.
Using magnetic resonance imaging, they observed the behaviour of the participants' brains when reading in one language and the other. "When reading in their first language (Spanish), their brain networks behave similarly. However, when reading in the second (English or Basque), we notice that different networks are activated according to the orthography of the language," the three scientists explain. While in Basque-speaking bilinguals the dorsal stream, related to phonology, is activated more, in English-speaking bilinguals the ventral stream, responsible for meaning, works harder.

"We know very little about the brain processes involved in bilinguals' reading," say the authors, who maintain that the results come down to the qualities of both languages. "Studying this field in more depth is important because it has general, basic implications for learning processes."

Basque, a language that is read as it is written, enables readers to make a clear connection between grapheme and phoneme. English, on the other hand, requires a more complex process in which the reader has to take into account certain rules of pronunciation. In an increasingly bilingual world, the researchers stress the importance of analysing the best means of learning a language depending on whether it is opaque or transparent. "Knowing this, school curricula can be designed which, depending on the language, emphasise some aspects or others in order to help children learn to read," they conclude.

Myriam Oliver, Manuel Carreiras and Pedro M. Paz-Alonso. 'Functional Dynamics of Dorsal and Ventral Reading Networks in Bilinguals'. Cereb. Cortex (2016). doi: 10.1093/cercor/bhw310. Published online October 6, 2016.
News Article | August 22, 2016
After prepping for the day’s cases, “Mike Brennan,” a 63-year-old cardiology technician, sat down for his morning coffee and paper. On the front page, he discovered something troubling: he could no longer read. No matter how long he stared at a word, its meaning was lost on him. With a history of smoking and hypertension, he worried that he might have had a stroke. So, leaving his coffee, he walked himself down the hall to the emergency department, where neurologists performed a battery of tests to tease out what had happened.

Mike still recognized individual letters and, with great difficulty, could sound out small words. But even some simple vocabulary presented problems; for example, he read “desk” as “dish” and “flame” as “thame.” Function words such as prepositions and pronouns gave him particular trouble. Mike couldn’t read, but there was nothing wrong with his eyes. Words heard were no problem. He could recognize colors, faces, and objects. He could speak, move, think and even write normally. Mike had “pure alexia,” meaning he could not read but showed no other impairments.

An M.R.I. scan of Mike’s brain revealed a pea-sized stroke in his left inferior occipitotemporal cortex, a region on the brain’s surface just behind the left ear. Mike’s doctors called in neurologist Peter Turkeltaub, director of Georgetown University’s Cognitive Recovery Laboratory, who recognized this was the first case of such a small stroke causing pure alexia. “Pure alexia is a classic neurological syndrome that has been described for well over 100 years,” Turkeltaub wrote me. He had seen many patients with the condition in his career. But, he noted, “This particular case was unusual only because the alexia was caused by a very small stroke,” and the damage was in a location linked to a decades-old debate surrounding the neuroscience of language.
The Case for the Visual Word Form Area

Mike was not my patient, but his stroke, which occurred in 2010, was of such significance to our understanding of how the brain processes the written word that I was determined to track down the details of his case. Studies in the early 1990s had identified activity in the left inferior occipitotemporal cortex—the area damaged in Mike’s stroke—when someone saw letters that spelled out meaningful words. Thus scientists wondered whether the region was involved in recognizing words and letters from the contrast lines and curves detected by the eye. They called this location the “visual word form area.”

These early findings were exciting, puzzling, and contentious—the kind that, at a conference, provokes shouts and screams from otherwise calm and collected scientists. Such studies were also very new, recalled neuroscientist Peter T. Fox, a pioneer of such research on language. As a result, many other researchers were dubious of the work as a whole. “I remember getting grants rejected with comments like ‘central brain activation can teach us nothing about language—nothing, it has no role in language studies’,” Fox said.

Neuroscientists have two approaches for assessing the role of a particular brain area. Lesion-deficit studies were first performed by ancient Roman physicians, who noticed that damage to regions of the brain caused specific behavioral deficits (e.g., a gladiator gets hit on the right side of his head—the lesion—and loses the ability to move his left arm—the deficit). Activation studies using neuroimaging work the other way: you record where the brain is active while it performs a particular task, like reading. (In this sense, Mike’s story, involving both damage and imaging, sits at the intersection of the classical and the modern.) Each method has its strengths and weaknesses.
“Activation studies show that a particular brain region is engaged during a task, but it doesn’t show that it’s a necessary and sufficient condition for that task,” Fox explained. “And yet, because the brain has redundant wiring, it’s nearly impossible to claim that a stroke in a particular area, like the visual word form area, is necessary and sufficient to eliminate a particular function.”

So even though multiple activation studies reported that the visual word form area existed, the behavioral neurologists weren’t convinced. And there were several reasons to think such an area was “a myth,” as one skeptical 2003 paper put it. For one thing, the very search for a “visual word form area” was misguided because it personified the brain’s real work, which is to process and decode visual information. Brain regions act as an assembly line of neural groups that each contribute some cognitive rivet or weld to a larger percept. A visual word form area mistook an assembly line for a one-man band. Then there was the problem of evolution. Because reading is a relatively recent cultural invention, humans couldn’t have evolved to read text in the same way that mammals evolved to recognize faces—there simply hadn’t been enough time. This made it hard to believe in a brain structure expressly devoted to reading.

A decade before Mike’s stroke, Turkeltaub had shown that a child’s brain shifts where and how it processes text as he or she learns to read. But because children are also learning to walk, talk, write, and otherwise be human, it was hard to say which brain changes were caused by literacy alone. It took a series of studies with adults learning to read for the first time to nail down the truth about the visual word form area.

At the turn of the 21st century, a large group of Colombian guerrillas abandoned their weapons and, after decades of fighting, rejoined mainstream society.
Without formal education, many of the fighters learned to read for the first time as 20-somethings. Camouflaged within this Colombian drama, a group of neuroscientists led by Manuel Carreiras, scientific director of Spain’s Basque Center on Cognition, Brain and Language, saw an opportunity to study how learning to read changes the adult brain. Working with Catherine Price, a neuroscientist at University College London, Carreiras used M.R.I. to track changes in the brain as the former guerrillas learned to read. They discovered that learning to read was associated with enlarged grey matter in specific brain areas, and further that brain activity within these regions became more tightly coordinated with improved literacy, showing that structural and functional changes occurred simultaneously. Much as heavy commuter traffic prompts the asphalting and widening of thoroughfares to improve traffic flow, brain activity associated with learning to read promotes the strengthening of specific neural highways, allowing specialized neural centers to perform the cognitive task of reading more efficiently. Carreiras and Price’s findings, published in 2009, fleshed out the brain’s reading networks but didn’t find any evidence supporting the role of a visual word form area.

The next major advance in the case for the visual word form area came from Stanislas Dehaene, a neuroscientist at the French Institute of Health and Medical Research. In 2010 Dehaene proposed that reading networks build on evolutionarily older functions at the expense of those functions. To test his hypothesis, Dehaene gathered illiterate adults, people who had learned to read in adulthood, and literate adults schooled in childhood. Comparing these groups, Dehaene reported that the more literate a person was, the less responsive the visual word form area became to other visual stimuli—in particular to faces—and the more focused it became on writing.
In poor readers, meanwhile, the visual word form area responded relatively indiscriminately to words, faces, shapes, and checkerboards. As Dehaene predicted, when one learns to read, the left inferior occipitotemporal cortex is recycled from a general visual recognition center into a specialized word recognition center, at the expense of other tasks. Mike’s stroke served as the final piece of the puzzle: a case of pure alexia caused by damage to only the visual word form area.

As with most scientific knowledge, the exact role of the visual word form area is still unresolved. However, it is no longer a myth. And the way the area is repurposed from general visual recognition to word specialist is a reminder of how powerfully the brain can retool and adapt—essential processes in both learning and healing.

Thanks to that brain plasticity, Mike recovered quickly. Guided by Whitney Postman, a speech-language therapist and an assistant professor at St. Louis University whose post-doctoral work at the NIH focused on post-stroke aphasia, Mike began tailored therapy to help him overcome his difficulty with function words. She fashioned a “brute force approach” in which he viewed, copied, said, and spelled lists of vocabulary in various contexts. “It’s pretty much the way we learn to use these function words as children,” Postman said. For his part, Mike was a highly motivated pupil. He and Postman had only two sessions. One month after his stroke, his letter-by-letter reading was greatly improved. After three months, he was reading patient charts and lab values at work—and his morning paper.