Basque Center on Cognition

Donostia / San Sebastián, Spain



News Article | May 9, 2017
Site: www.eurekalert.org

The way bilingual people read is conditioned by the languages they speak. This is the main conclusion reached by researchers at the Basque Center on Cognition, Brain and Language (BCBL) after reviewing the existing scientific literature and comparing it with the findings of studies at their own centre. The scientists found that the languages spoken by bilingual people (when they learned to read in two languages at the same time) affect their reading strategies and even the cognitive foundations of the capacity to read. This discovery could have implications for clinical and educational practice.

"Monolingual speakers of transparent languages - where letters are pronounced the same independently of the word they appear in, such as Basque or Spanish - have a greater tendency to use analytical reading strategies, where they read words in parts," SINC was told by Marie Lallier, one of the authors of the article. Speakers of opaque languages, where the sounds of letters differ depending on the word (for example English or French), are more likely to use a global reading strategy. In other words, they tend to read whole words to understand their meaning.

Nonetheless, the BCBL researchers have observed that bilingual people who learn to read two languages at the same time do not read the same way as monolingual speakers; rather, they follow a different pattern that had not previously been described. According to the literature review, recently published in Psychonomic Bulletin & Review, a contamination effect takes place between the two reading strategies in speakers of two languages. A person learning to read in Spanish and in English will therefore have a greater tendency towards a global strategy, even when reading in Spanish, than a monolingual Spanish speaker. This effect is caused by the influence of the second language. When reading in English, on the contrary, they will tend towards a more analytical strategy (reading by parts) than monolingual English speakers, due to "contagion" from Spanish.

"The brains of bilingual people adjust in accordance with what they learn, applying the strategies needed to read in one language to their reading in the other language," Lallier adds.

The researchers had previously described the principal strategies used by monolingual speakers of various languages. However, the way reading strategies are modified in bilingual people when they learn to read in two languages had never before been identified.

The scientists at the San Sebastián centre believe that learning to read in two languages with different characteristics from the mother tongue also causes a change in the cognitive processes that underlie reading acquisition, such as visual attention or auditory phonological processes. In other words, learning to read in an opaque language (such as English or French) reinforces our capacity to rapidly process many visual elements, because whole words must be deciphered to achieve fluent reading in these languages. As transparent languages place a much greater focus on letter-sound correspondence, learning to read in them is thought to improve our sensitivity in perceiving the sounds of the language.

The paper's authors consider their findings to have implications at different levels. From an educational standpoint, they allow a better understanding of how bilingual populations learn to read and what type of strategies are most advisable to help pupils learn, based on the languages they know. The new discovery could also help in the diagnosis and assessment of dyslexia and other reading problems.

"Language learning cannot provoke more cases of dyslexia, as the disorder is usually caused by neurogenetic factors. Our theory suggests that more language learning could make the symptoms more visible, and vice versa; this depends on the combination of languages being learned," Lallier explains.

The languages a child knows are therefore decisive for identifying potential disorders, as this essential information would explain certain mistakes made when reading. "Our experience with languages modulates our capacity to read. This should be taken into account when teaching bilingual children to read, and if any reading problems, such as dyslexia, should appear. We need to establish specific scales for the diagnosis of dyslexia in bilingual people, because their circumstances are different," the expert concludes.

Lallier M, Carreiras M. 'Cross-linguistic transfer in bilinguals reading in two alphabetic orthographies: The grain size accommodation hypothesis'. Psychon Bull Rev 2017 Apr 12. doi: 10.3758/s13423-017-1273-0.


Molnar M., Basque Center on Cognition | Gervain J., University of Paris Descartes | Carreiras M., Basque Center on Cognition | Carreiras M., University of the Basque Country
Infancy | Year: 2014

Language rhythm determines young infants' language discrimination abilities. However, it is unclear whether young bilingual infants exposed to rhythmically similar languages develop sensitivities to cross-linguistic rhythm cues to discriminate their dual language input. To address this question, we tested 3.5-month-old monolingual Basque, monolingual Spanish and bilingual Basque-Spanish infants' language discrimination abilities (across low-pass filtered speech samples of Basque and Spanish) using the visual habituation procedure. Although falling within the same rhythmic class, Basque and Spanish exhibit significant differences in their distributions of vocalic intervals (within-rhythmic-class variation). All infant groups in our study successfully discriminated between the languages, although each group exhibited a different pattern. Monolingual Spanish infants succeeded only when they heard Basque during habituation, suggesting that they were influenced by native language recognition. The bilingual and the Basque monolingual infants showed no such asymmetries and succeeded irrespective of the language of habituation. Additionally, bilingual infants exhibited longer looking times in the test phase than monolinguals, reflecting that bilingual infants attend to their native languages differently than monolinguals do. Overall, the results suggest that bilingual infants are sensitive to within-rhythm acoustic regularities of their native language(s), facilitating language discrimination and hence supporting early bilingual acquisition. © International Society on Infant Studies (ISIS).
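For readers unfamiliar with the visual habituation procedure used above: the habituation phase typically ends once an infant's looking times fall below a preset proportion of initial looking. The abstract does not report the study's exact criterion, so the Python sketch below illustrates a generic windowed-criterion rule; the window size and the 65% threshold are illustrative assumptions, not values from Molnar et al.

# Minimal sketch of a windowed habituation criterion of the kind commonly
# used in visual habituation studies. The window size and 65% threshold
# are illustrative assumptions, not values reported by Molnar et al. (2014).
def habituated(looking_times, window=3, criterion=0.65):
    """True once mean looking over the last `window` trials drops below
    `criterion` times the mean of the first `window` trials."""
    if len(looking_times) < 2 * window:
        return False
    baseline = sum(looking_times[:window]) / window
    recent = sum(looking_times[-window:]) / window
    return recent < criterion * baseline

# Example: looking times (in seconds) declining across habituation trials.
trials = [12.0, 11.5, 10.8, 8.0, 6.5, 5.9, 5.1]
print(habituated(trials))  # True: recent looking is well below 65% of baseline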


Molinaro N., Basque Center on Cognition | Barber H.A., University of La Laguna | Carreiras M., Basque Center on Cognition | Carreiras M., Ikerbasque
Cortex | Year: 2011

In the domain of written sentence comprehension, the computation of agreement dependencies is generally considered a form-driven processing routine whose domain is syntactic in nature. In the present review we discuss the main findings emerging from the Event-Related Potential (ERP) literature on sentence comprehension, focusing on the different dimensions of agreement patterns (features, values, constituents involved and language): agreement mismatches usually evoke a biphasic electrophysiological pattern (Left Anterior Negativity - LAN, 300-450 msec, and P600 after 500 msec). This ERP pattern is assumed to reflect rule-based computations sensitive to formal (inflectional) covariations of related words (trigger-target). Here we claim that agreement processing is sensitive to both the type of feature involved and the constituents that express the agreement dependency. More specifically, the LAN could reflect violation of expectancy (elicited by the trigger) for the target functional morphology; later, trigger and target are structurally integrated at the sentence level (early P600). However, morphosyntactic information could trigger the activation of higher-level representations that are not strictly syntactic in nature. The recruitment of this additional non-syntactic information (mirrored by N400-like effects) indicates that rule-based computations of agreement dependencies are not blind to non-syntactic information, which is often recruited to establish sentence-level relations. © 2011 Elsevier Srl.
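To make the quantification concrete: LAN and P600 effects of the kind reviewed here are standardly measured as mean voltage within fixed post-stimulus time windows. The Python sketch below shows that windowing step on epoched data; the array shape, sampling rate, single-electrode handling, and the 500-800 msec P600 window are illustrative assumptions, not parameters taken from the review.

import numpy as np

# Mean amplitude within a post-stimulus window: the standard way effects
# such as the LAN (300-450 msec) and P600 (post-500 msec) are quantified.
# Shapes and values below are illustrative, not taken from Molinaro et al.
def window_mean(epochs, sfreq, t0, start_ms, end_ms):
    """epochs: (n_trials, n_samples) voltages at one electrode;
    sfreq: sampling rate in Hz; t0: index of the stimulus-onset sample."""
    start = t0 + int(start_ms / 1000 * sfreq)
    end = t0 + int(end_ms / 1000 * sfreq)
    return epochs[:, start:end].mean()

rng = np.random.default_rng(0)
epochs = rng.normal(size=(40, 500))                # 40 trials, 2 s at 250 Hz
lan = window_mean(epochs, sfreq=250, t0=100, start_ms=300, end_ms=450)
p600 = window_mean(epochs, sfreq=250, t0=100, start_ms=500, end_ms=800)
print(lan, p600)

A mismatch effect would then be the difference in these window means between violation and control conditions.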


Laszlo S., Binghamton University State University of New York | Armstrong B.C., Basque Center on Cognition
Brain and Language | Year: 2014

The Parallel Distributed Processing (PDP) framework is built on neural-style computation, and is thus well-suited for simulating the neural implementation of cognition. However, relatively little cognitive modeling work has concerned neural measures, focusing instead on behavior. Here, we extend a PDP model of reading-related components in the Event-Related Potential (ERP) to simulation of the N400 repetition effect. We accomplish this by incorporating the dynamics of cortical post-synaptic potentials (the source of the ERP signal) into the model. Simulations demonstrate that application of these dynamics is critical for the model to elicit repetition effects in the time and frequency domains. We conclude that by advancing a neurocomputational understanding of repetition effects, we are able to posit an interpretation of their source that is both explicitly specified and mechanistically different from the well-accepted cognitive one. © 2014 Elsevier Inc.
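The key modeling move described here, giving units post-synaptic-potential-like temporal dynamics instead of instantaneous activations, can be illustrated with a toy leaky integrator. The Python sketch below is not Laszlo and Armstrong's implementation; the decay constant, the input trains, and the adaptation-style weakening of input on repetition are illustrative assumptions, intended only to show how temporally integrated potentials can produce a smaller simulated component for a repeated item.

import numpy as np

# Toy leaky integration: the unit's potential decays over time rather than
# tracking its input instantaneously, loosely analogous to a post-synaptic
# potential. NOT the published model; all constants are illustrative.
def leaky_potential(inputs, decay=0.9):
    """v[t] = decay * v[t-1] + input[t]."""
    v, trace = 0.0, []
    for x in inputs:
        v = decay * v + x
        trace.append(v)
    return np.array(trace)

first = np.ones(50)           # drive from a word's first presentation
repeat = 0.6 * np.ones(50)    # weakened drive on repetition (adaptation)
print(leaky_potential(first).sum() > leaky_potential(repeat).sum())  # True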


Diependaele K., Ghent University | Dunabeitia J.A., Basque Center on Cognition | Morris J., Hampshire College | Keuleers E., Ghent University
Journal of Memory and Language | Year: 2011

In three experiments we compared the performance of native English speakers to that of Spanish-English and Dutch-English bilinguals on a masked morphological priming lexical decision task. The results do not show significant differences across the three experiments. In line with recent meta-analyses, we observed a graded pattern of facilitation across stem priming with transparent suffixed primes (e.g., viewer-view), opaque suffixed or pseudo-suffixed primes (e.g., corner-corn) and form control primes (e.g., freeze-free). Priming was largest in the transparent condition, smallest in the form condition and intermediate in the opaque condition. Our data confirm the hypothesis that bilinguals largely adopt the same processing strategies as native speakers (e.g., Lemhöfer et al., 2008), and constrain the hypothesis that bilinguals rely more heavily on whole-word processing in their second language (Clahsen, Felser, Neubauer, Sato, & Silva, 2010; Ullman, 2004, 2005). The observed pattern of morphological priming is in line with earlier monolingual studies, further highlighting the reality of semantic transparency effects in the initial stages of word recognition. © 2011 Elsevier Inc.
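For readers unfamiliar with the measure: a priming effect in this paradigm is the difference in mean lexical decision time after an unrelated control prime versus after the related prime, computed separately per condition. The Python sketch below shows that arithmetic with fabricated illustrative RTs (not data from these experiments), arranged to display the graded ordering the abstract reports: transparent > opaque > form.

# Priming effect = mean RT after unrelated primes minus mean RT after
# related primes. The millisecond values are fabricated for illustration,
# not results from Diependaele et al. (2011).
mean_rt = {
    "unrelated":   650,
    "transparent": 600,   # viewer-VIEW
    "opaque":      625,   # corner-CORN
    "form":        645,   # freeze-FREE
}

for cond in ("transparent", "opaque", "form"):
    print(cond, mean_rt["unrelated"] - mean_rt[cond], "ms priming")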


Martin A.E., Basque Center on Cognition | McElree B., New York University
Journal of Memory and Language | Year: 2011

Language comprehension requires recovering meaning from linguistic form, even when the mapping between the two is indirect. A canonical example is ellipsis, the omission of information that is subsequently understood without being overtly pronounced. Comprehension of ellipsis requires retrieval of an antecedent from memory, without prior prediction, a property which enables the study of retrieval in situ (Martin & McElree, 2008, 2009). Sluicing, or inflectional-phrase ellipsis, in the presence of a conjunction, presents a test case where a competing antecedent position is syntactically licensed, in contrast with most cases of nonadjacent dependency, including verb-phrase ellipsis. We present speed-accuracy tradeoff and eye-movement data inconsistent with the hypothesis that retrieval is accomplished via a syntactically guided search, a particular variant of search not examined in past research. The observed timecourse profiles are consistent with the hypothesis that antecedents are retrieved via a cue-dependent direct-access mechanism susceptible to general memory variables. © 2011 Elsevier Inc.
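For context, speed-accuracy tradeoff timecourse profiles of the kind analyzed in this literature are conventionally fit with a shifted exponential approach to a limit. The function below is that standard form, not an equation reproduced from the paper itself:

d'(t) = \lambda \left( 1 - e^{-\beta (t - \delta)} \right) \quad \text{for } t > \delta, \qquad d'(t) = 0 \text{ otherwise}

where \lambda is the asymptotic accuracy, \beta the rate at which accuracy rises, and \delta the intercept at which accuracy first exceeds chance. On a cue-dependent direct-access account, manipulations such as a competing antecedent are expected to lower the asymptote \lambda while leaving the dynamics parameters \beta and \delta unchanged, whereas a search mechanism predicts slower dynamics as more positions must be examined.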


Nieuwland M.S., University of Edinburgh | Nieuwland M.S., Basque Center on Cognition
Journal of Memory and Language | Year: 2013

People can establish whether a sentence is hypothetically true even if what it describes can never be literally true given the laws of the natural world. Two event-related potential (ERP) experiments examined electrophysiological responses to sentences about unrealistic counterfactual worlds that require people to construct novel conceptual combinations and infer their consequences as the sentence unfolds in time (e.g., "If dogs had gills..."). Experiment 1 established that without this premise, described consequences (e.g., "Dobermans would breathe under water...") elicited larger N400 responses than real-world true sentences. Incorporation of the counterfactual premise in Experiment 2 generated similar N400 effects of propositional truth-value in counterfactual and real-world sentences, suggesting that the counterfactual context eliminated the interpretive problems posed by locally anomalous sentences. This result did not depend on cloze probability of the sentences. In contrast to earlier findings regarding online comprehension of logical operators and counterfactuals, these results show that ongoing processing can be directly impacted by propositional truth-value, even that of unrealistic counterfactuals. © 2012 Elsevier Inc.


Garcia-Penton L., Basque Center on Cognition | Perez Fernandez A., Basque Center on Cognition | Iturria-Medina Y., Cuban Neuroscience Center | Gillon-Dowens M., The Interdisciplinary Center | And 2 more authors.
NeuroImage | Year: 2014

How the brain deals with more than one language, and whether we need different or extra brain language sub-networks to support more than one language, remain unanswered questions. Here, we investigate structural brain network differences between early bilinguals and monolinguals. Using diffusion-weighted MRI (DW-MRI) tractography techniques and a network-based statistic (NBS) procedure, we found two structural sub-networks more connected by white matter (WM) tracts in bilinguals than in monolinguals, confirming WM brain plasticity in bilinguals. One of these sub-networks comprises left frontal and parietal/temporal regions, while the other comprises left occipital and parietal/temporal regions and also the right superior frontal gyrus. Most of these regions have been related to language processing and monitoring, suggesting that bilinguals develop specialized language sub-networks to deal with the two languages. Additionally, a complex network analysis showed that these sub-networks are more graph-efficient in bilinguals than in monolinguals, and this increase seems to come at the expense of a whole-network graph-efficiency decrease. © 2013 Elsevier Inc.
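Graph efficiency, as compared here, typically means global efficiency: the average inverse shortest-path length over all node pairs. The Python sketch below computes it with networkx on a toy graph; the edge list is illustrative and is not the DW-MRI connectome analyzed in the study.

import networkx as nx

# Global efficiency = mean of 1/shortest-path-length over all node pairs.
# The toy edges are illustrative only, not data from Garcia-Penton et al.
whole = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)])
sub = whole.subgraph([1, 2, 3])            # a candidate sub-network

print(nx.global_efficiency(whole))         # whole-network efficiency
print(nx.global_efficiency(sub))           # sub-network efficiency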


Three Spanish researchers have discovered that bilinguals use different neural networks to read languages that are pronounced as they are written - such as Basque - from those in which this correspondence does not exist, like English. The results are valuable for teaching reading to adults and children.

A group of experts from the Basque Center on Cognition, Brain and Language (BCBL), in San Sebastián, has shown that different networks are activated in the brains of bilingual people when they read opaque languages - those in which words are written one way and pronounced another, like English - and transparent languages - those pronounced as they are written, such as Basque or Spanish.

Neuroscience has identified two neural networks that are activated during reading. One is the dorsal stream, involved in phonology (the sounds of letters); the other is the ventral stream, which has an orthographic-semantic function (deciphering letters and their meaning). Until now it was thought that these networks were activated differently in monolinguals, depending on the demands on phonological or orthographic-semantic processes. The novelty of the study, published in Cerebral Cortex, is that it demonstrates the importance of a language's orthography (opaque or transparent) in the activation of these neural networks in bilinguals. These results can have an impact on how adults and children are taught to read. As Myriam Oliver, Manuel Carreiras and Kepa Paz-Alonso, the three authors from the BCBL, tell SINC: "No study had ever so specifically demonstrated this effect in such a clear way."

For this purpose, the researchers analysed 36 perfectly bilingual people, speaking either Spanish and Basque or Spanish and English. They were pure bilinguals, who did not speak any third language or had only very basic knowledge of one, and who learnt the second language later than the first, from the age of 6. Using magnetic resonance imaging, the researchers observed the behaviour of their brains when reading in one language and the other.

"When reading in their first language (Spanish), their brain networks behave similarly. However, when reading in the second (English or Basque), we notice that different networks are activated according to the orthography of the language," the three scientists explain. While in Basque-speaking bilinguals the dorsal stream, related to phonology, is activated more, in English-speaking bilinguals the ventral stream, responsible for meaning, works harder.

"We know very little about the brain processes involved in bilinguals' reading," say the authors, who maintain that the results stem from the characteristics of both languages. "Studying this field in more depth is important because it has general, basic implications for learning processes."

Basque, a language that is read as it is written, enables readers to make a clear connection between grapheme and phoneme. English, on the other hand, requires a more complex process in which the reader has to take into account certain rules of pronunciation. In an increasingly bilingual world, the researchers stress the importance of analysing the best means of learning a language depending on whether it is opaque or transparent. "Knowing this, school curricula can be designed which, depending on the language, emphasise some aspects or others in order to help children learn to read," they conclude.

Myriam Oliver, Manuel Carreiras and Pedro M. Paz-Alonso. 'Functional Dynamics of Dorsal and Ventral Reading Networks in Bilinguals'. Cereb. Cortex (2016). doi: 10.1093/cercor/bhw310. Published online October 6, 2016.


News Article | August 22, 2016
Site: www.scientificamerican.com

After prepping for the day's cases, "Mike Brennan," a 63-year-old cardiology technician, sat down for his morning coffee and paper. On the front page, he discovered something troubling: he could no longer read. No matter how long he stared at a word, its meaning was lost on him. With a history of smoking and hypertension, he worried that he might have had a stroke. So, leaving his coffee, he walked himself down the hall to the emergency department, where neurologists performed a battery of tests to tease out what had happened.

Mike still recognized individual letters and, with great difficulty, could sound out small words. But even simple vocabulary presented problems: for example, he read "desk" as "dish" and "flame" as "thame." Function words such as prepositions and pronouns gave him particular trouble. Mike couldn't read, but there was nothing wrong with his eyes. Words heard were no problem. He could recognize colors, faces, and objects. He could speak, move, think and even write normally. Mike had "pure alexia," meaning he could not read but showed no other impairments.

An M.R.I. scan of Mike's brain revealed a pea-sized stroke in his left inferior occipitotemporal cortex, a region on the brain's surface just behind the left ear. Mike's doctors called in neurologist Peter Turkeltaub, director of Georgetown University's Cognitive Recovery Laboratory, who recognized this was the first case of such a small stroke causing pure alexia. "Pure alexia is a classic neurological syndrome that has been described for well over 100 years," Turkeltaub wrote me. He had seen many patients with the condition in his career. But, he noted, "This particular case was unusual only because the alexia was caused by a very small stroke," and the damage was in a location linked to a decades-old debate surrounding the neuroscience of language.

The Case for the Visual Word Form Area

Mike was not my patient, but his stroke, which occurred in 2010, was of such significance to our understanding of how the brain processes the written word that I was determined to track down the details of his case. Studies in the early 1990s had identified activity in the left inferior occipitotemporal cortex (the area damaged in Mike's stroke) when someone saw letters that spelled out meaningful words. Thus scientists wondered whether the region was involved in recognizing words and letters from the contrast lines and curves detected by the eye. They called this location the "visual word form area."

These early findings were exciting, puzzling, and contentious, the kind that, at a conference, provoke shouts and screams from otherwise calm and collected scientists. Such studies were also very new, recalled neuroscientist Peter T. Fox, a pioneer of such research on language. As a result, many other researchers were dubious of the work as a whole. "I remember getting grants rejected with comments like 'central brain activation can teach us nothing about language—nothing, it has no role in language studies'," Fox said.

Neuroscientists have two approaches for assessing the role of a particular brain area. Lesion-deficit studies were first performed by ancient Roman physicians, who noticed that damage to regions of the brain caused specific behavioral deficits (e.g., a gladiator gets hit on the right side of his head, the lesion, and loses the ability to move his left arm, the deficit). Activation studies using neuroimaging work the other way: you record where the brain is active while performing a particular task, like reading. (In this sense, Mike's story, involving both damage and imaging, sits at the intersection of the classical and the modern.)

Each method has its strengths and weaknesses. "Activation studies show that a particular brain region is engaged during a task, but it doesn't show that it's a necessary and sufficient condition for that task," Fox explained. "And yet, because the brain has redundant wiring, it's nearly impossible to claim that a stroke in a particular area, like the visual word form area, is necessary and sufficient to eliminate a particular function." So even though multiple activation studies reported the visual word form area existed, the behavioral neurologists weren't convinced.

And there were several reasons to think such an area was "a myth," as one skeptical 2003 paper put it. For one thing, the very search for a "visual word form area" was misguided because it personified the brain's real work, which is to process and decode visual information. Brain regions act as an assembly line of neural groups that each contribute some cognitive rivet or weld to a larger percept. A visual word form area confused an assembly line for a one-man band. Finally, there was the problem of evolution. Because reading is a relatively recent cultural invention, humans couldn't have evolved to read text in the same way that mammals evolved to recognize faces; there simply hadn't been enough time. This made it hard to believe in a brain structure expressly devoted to reading.

A decade before Mike's stroke, Turkeltaub had shown that a child's brain shifts where and how it processes text as he or she learns to read. But because children are also learning to walk, talk, write, and otherwise be human, it was hard to say which brain changes were caused by literacy alone. It took a series of studies with adults learning to read for the first time to nail down the truth about the visual word form area.

At the turn of the 21st century, a large group of Colombian guerrillas abandoned their weapons and, after decades of fighting, rejoined mainstream society. Without formal education, many of the fighters learned to read for the first time as 20-somethings. Camouflaged within this Colombian drama, a group of neuroscientists led by Manuel Carreiras, scientific director of Spain's Basque Center on Cognition, Brain and Language, saw an opportunity to study how learning to read changes the adult brain.

Working with Catherine Price, a neuroscientist at University College London, Carreiras used M.R.I. to track changes in the brain as the former guerrillas learned to read. They discovered that learning to read was associated with enlarged grey matter in specific brain areas, and further that brain activity within these regions became more tightly coordinated with improved literacy, showing that structural and functional changes occurred simultaneously. Much as heavy commuter traffic promotes the asphalting and widening of thoroughfares to improve traffic flow, brain activity associated with learning to read promotes the strengthening of specific neural highways, allowing specialized neural centers to perform the cognitive task of reading more efficiently. Carreiras and Price's findings, published in 2009, fleshed out the brain's reading networks but didn't find any evidence supporting the role of a visual word form area.

The next major advance in the case for the visual word form area came from Stanislas Dehaene, a neuroscientist at the French Institute of Health and Medical Research. In 2010 Dehaene proposed that reading networks build on evolutionarily older functions, at the expense of those functions. To test his hypothesis, Dehaene gathered illiterate adults, people who had learned to read in adulthood, and literate adults schooled in childhood. Comparing these groups, Dehaene reported that the more literate a person was, the less responsive the visual word form area became to other visual stimuli, in particular faces, and the more focused it became on writing. In poor readers, meanwhile, the visual word form area responded relatively indiscriminately to words, faces, shapes, and checkerboards. As Dehaene predicted, when one learns to read, the left inferior occipitotemporal cortex is recycled from a general visual recognition center into a specialized word recognition center, at the expense of other tasks.

Mike's stroke served as the final piece of the puzzle: a case of pure alexia caused by damage to only the visual word form area. As with most scientific knowledge, the exact role of the visual word form area is still unresolved. However, it is no longer a myth. And the way the area is repurposed from general visual recognition to word specialist is a reminder of how powerfully the brain can retool and adapt, essential processes in both learning and healing.

Thanks to that brain plasticity, Mike recovered quickly. Guided by Whitney Postman, a speech-language therapist and an assistant professor at St. Louis University whose post-doctoral work at the NIH focused on post-stroke aphasia, Mike began tailored therapy to help him overcome his difficulty with function words. She fashioned a "brute force approach" in which he viewed, copied, said, and spelled lists of vocabulary in various contexts. "It's pretty much the way we learn to use these function words as children," Postman said. For his part, Mike was a highly motivated pupil. He and Postman had only two sessions. One month after his stroke, his letter-by-letter reading was greatly improved. After three months, he was reading patient charts and lab values at work, and his morning paper.
