Basque Center on Cognition, Brain and Language (Donostia / San Sebastián, Spain)

Diependaele K., Ghent University | Duñabeitia J.A., Basque Center on Cognition | Morris J., Hampshire College | Keuleers E., Ghent University
Journal of Memory and Language | Year: 2011

In three experiments we compared the performance of native English speakers to that of Spanish-English and Dutch-English bilinguals on a masked morphological priming lexical decision task. The results do not show significant differences across the three experiments. In line with recent meta-analyses, we observed a graded pattern of facilitation across stem priming with transparent suffixed primes (e.g., viewer-view), opaque suffixed or pseudo-suffixed primes (e.g., corner-corn) and form control primes (e.g., freeze-free). Priming was largest in the transparent condition, smallest in the form condition and intermediate in the opaque condition. Our data confirm the hypothesis that bilinguals largely adopt the same processing strategies as native speakers (e.g., Lemhöfer et al., 2008), and constrain the hypothesis that bilinguals rely more heavily on whole-word processing in their second language (Clahsen, Felser, Neubauer, Sato, & Silva, 2010; Ullman, 2004, 2005). The observed pattern of morphological priming is in line with earlier monolingual studies, further highlighting the reality of semantic transparency effects in the initial stages of word recognition. © 2011 Elsevier Inc.
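The graded facilitation pattern described above can be quantified, per condition, as the mean reaction-time difference between unrelated-control and related-prime trials. Below is a minimal sketch of that computation; the column names and the toy numbers are illustrative assumptions, not the study's actual data.

```python
# Minimal sketch of quantifying graded masked-priming facilitation.
# Column names (condition, prime, rt) are hypothetical, not taken from
# the study's data files.
import pandas as pd

def priming_effects(trials: pd.DataFrame) -> pd.Series:
    """Facilitation per condition: unrelated-control RT minus related-prime RT.

    Expects columns: 'condition' in {'transparent', 'opaque', 'form'},
    'prime' in {'related', 'unrelated'}, and 'rt' in milliseconds
    (correct lexical-decision trials only).
    """
    means = trials.groupby(["condition", "prime"])["rt"].mean().unstack("prime")
    return means["unrelated"] - means["related"]

# Toy data illustrating the graded pattern reported in the abstract:
# transparent > opaque > form facilitation.
toy = pd.DataFrame({
    "condition": ["transparent"] * 2 + ["opaque"] * 2 + ["form"] * 2,
    "prime":     ["related", "unrelated"] * 3,
    "rt":        [610, 660, 625, 655, 640, 650],
})
print(priming_effects(toy))  # form: 10, opaque: 30, transparent: 50
```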


Laszlo S., Binghamton University, State University of New York | Armstrong B.C., Basque Center on Cognition
Brain and Language | Year: 2014

The Parallel Distributed Processing (PDP) framework is built on neural-style computation, and is thus well-suited for simulating the neural implementation of cognition. However, relatively little cognitive modeling work has concerned neural measures, instead focusing on behavior. Here, we extend a PDP model of reading-related components in the Event-Related Potential (ERP) to simulation of the N400 repetition effect. We accomplish this by incorporating the dynamics of cortical post-synaptic potentials, the source of the ERP signal, into the model. Simulations demonstrate that application of these dynamics is critical for model elicitation of repetition effects in the time and frequency domains. We conclude that by advancing a neurocomputational understanding of repetition effects, we are able to posit an interpretation of their source that is both explicitly specified and mechanistically different from the well-accepted cognitive one. © 2014 Elsevier Inc.
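The central modeling move reported here, feeding model activity through the slow dynamics of post-synaptic potentials, can be sketched as a convolution of summed unit activations with a PSP-shaped kernel. The alpha-function kernel, its time constant, and the toy activation traces below are illustrative assumptions, not the published model's architecture or parameters.

```python
# Sketch of turning unit activations into an ERP-like signal by convolving
# with a post-synaptic-potential-shaped kernel (an alpha function).
# Kernel time constant and toy traces are assumptions for illustration.
import numpy as np

def alpha_kernel(tau_ms: float, dt_ms: float = 1.0, length_ms: float = 200.0) -> np.ndarray:
    """Alpha-function PSP kernel: (t/tau) * exp(1 - t/tau), peaking at t = tau."""
    t = np.arange(0.0, length_ms, dt_ms)
    return (t / tau_ms) * np.exp(1.0 - t / tau_ms)

def simulated_erp(activations: np.ndarray, tau_ms: float = 10.0) -> np.ndarray:
    """Sum unit activations and filter them with the PSP kernel.

    activations: (n_units, n_timesteps) array of model unit activity.
    Returns a single ERP-like waveform (arbitrary units).
    """
    summed = activations.sum(axis=0)
    return np.convolve(summed, alpha_kernel(tau_ms), mode="full")[: summed.size]

# Repetition effect modeled as attenuated activation on second presentation.
rng = np.random.default_rng(0)
first = rng.uniform(0.5, 1.0, size=(50, 600))   # 50 units, 600 ms of activity
repeat = 0.6 * first                            # attenuated repeated trial
effect = simulated_erp(first) - simulated_erp(repeat)
print(f"peak repetition effect: {effect.max():.1f} (arbitrary units)")
```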


Foucart A., University Pompeu Fabra | Martin C.D., Basque Center on Cognition | Martin C.D., Ikerbasque | Moreno E.M., Complutense University of Madrid | Costa A., University Pompeu Fabra
Journal of Experimental Psychology: Learning, Memory, and Cognition | Year: 2014

Why is it more difficult to comprehend a second language (L2) than a first language (L1)? In the present article we investigate whether difficulties during L2 sentence comprehension come from differences in the way L1 and L2 speakers anticipate upcoming words. We recorded the brain activity (event-related potentials) of Spanish monolinguals, French-Spanish late bilinguals, and Spanish-Catalan early bilinguals while they read sentences in Spanish. We manipulated the ending of highly constrained sentences so that the critical noun was either expected or not. The expected and unexpected nouns were of different gender, so that potential anticipation effects could be observed as early as the article. In line with previous studies, a modulation of the N400 effect was observed on the article and the noun, followed by an anterior positivity on the noun. Importantly, this pattern was found in all three groups, suggesting that, at least when their two languages are closely related, bilinguals are able to anticipate upcoming words in a similar manner to monolinguals. © The American Psychological Association.
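ERP anticipation effects like the one on the gender-marked article are conventionally quantified as a mean-amplitude difference between conditions in the N400 time window. A minimal sketch follows; the 300-500 ms window, sampling rate, epoch layout, and simulated voltages are assumptions for illustration, not the study's recording parameters.

```python
# Sketch of the standard N400 quantification: mean amplitude in a window
# (here 300-500 ms post-stimulus) for expected vs. unexpected articles,
# averaged over trials and channels. All parameters are illustrative.
import numpy as np

def n400_amplitude(epochs: np.ndarray, sfreq: float, t0_s: float,
                   window_s: tuple[float, float] = (0.3, 0.5)) -> float:
    """Mean amplitude in the N400 window.

    epochs: (n_trials, n_channels, n_samples) array of voltages (microvolts),
    with time t0_s (seconds, relative to stimulus onset) at sample 0.
    """
    start = int((window_s[0] - t0_s) * sfreq)
    stop = int((window_s[1] - t0_s) * sfreq)
    return float(epochs[:, :, start:stop].mean())

# Toy epochs: 40 trials x 8 channels x 1 s at 250 Hz, baseline from -0.2 s.
rng = np.random.default_rng(1)
expected = rng.normal(0.0, 1.0, size=(40, 8, 250))
unexpected = rng.normal(-2.0, 1.0, size=(40, 8, 250))  # more negative N400
effect = n400_amplitude(unexpected, 250, -0.2) - n400_amplitude(expected, 250, -0.2)
print(f"N400 effect (unexpected minus expected): {effect:.2f} microvolts")
```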


Nieuwland M.S., University of Edinburgh | Nieuwland M.S., Basque Center on Cognition
Journal of Memory and Language | Year: 2013

People can establish whether a sentence is hypothetically true even if what it describes can never be literally true given the laws of the natural world. Two event-related potential (ERP) experiments examined electrophysiological responses to sentences about unrealistic counterfactual worlds that require people to construct novel conceptual combinations and infer their consequences as the sentence unfolds in time (e.g., "If dogs had gills..."). Experiment 1 established that without this premise, described consequences (e.g., "Dobermans would breathe under water...") elicited larger N400 responses than real-world true sentences. Incorporation of the counterfactual premise in Experiment 2 generated similar N400 effects of propositional truth-value in counterfactual and real-world sentences, suggesting that the counterfactual context eliminated the interpretive problems posed by locally anomalous sentences. This result did not depend on the cloze probability of the sentences. In contrast to earlier findings regarding online comprehension of logical operators and counterfactuals, these results show that ongoing processing can be directly impacted by propositional truth-value, even that of unrealistic counterfactuals. © 2012 Elsevier Inc.
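The claim that the truth-value effect did not depend on cloze probability can be illustrated with a simple item-level regression: if the truth-value coefficient survives with cloze in the model, the effect is not reducible to predictability. The sketch below uses fabricated data purely for illustration; it is not the study's analysis.

```python
# Sketch of checking that a truth-value N400 effect is not driven by cloze
# probability: regress per-item N400 amplitude on both predictors and
# compare coefficients. All data here are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_items = 80
cloze = rng.uniform(0.0, 1.0, n_items)        # cloze probability per item
false_item = rng.integers(0, 2, n_items)      # 1 = propositionally false
# Simulated amplitudes: a -2 uV truth-value effect plus noise,
# deliberately independent of cloze.
n400 = -2.0 * false_item + rng.normal(0.0, 1.0, n_items)

# Ordinary least squares: n400 ~ intercept + truth_value + cloze
X = np.column_stack([np.ones(n_items), false_item, cloze])
beta, *_ = np.linalg.lstsq(X, n400, rcond=None)
print(f"truth-value coefficient: {beta[1]:.2f} uV (expected near -2)")
print(f"cloze coefficient:       {beta[2]:.2f} uV (expected near 0)")
```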


News Article
Site: http://www.scientificamerican.com

After prepping for the day's cases, "Mike Brennan," a 63-year-old cardiology technician, sat down for his morning coffee and paper. On the front page, he discovered something troubling: he could no longer read. No matter how long he stared at a word, its meaning was lost on him. With a history of smoking and hypertension, he worried that he might have had a stroke. So, leaving his coffee, he walked himself down the hall to the emergency department, where neurologists performed a battery of tests to tease out what had happened.

Mike still recognized individual letters and, with great difficulty, could sound out small words. But even some simple vocabulary presented problems: for example, he read "desk" as "dish" or "flame" as "thame." Function words such as prepositions and pronouns gave him particular trouble. Mike couldn't read, but there was nothing wrong with his eyes. Spoken words posed no problem. He could recognize colors, faces, and objects. He could speak, move, think and even write normally. Mike had "pure alexia," meaning he could not read but showed no other impairments. An M.R.I. scan of Mike's brain revealed a pea-sized stroke in his left inferior occipitotemporal cortex, a region on the brain's surface just behind the left ear.

Mike's doctors called in neurologist Peter Turkeltaub, director of Georgetown University's Cognitive Recovery Laboratory, who recognized this as the first case of such a small stroke causing pure alexia. "Pure alexia is a classic neurological syndrome that has been described for well over 100 years," Turkeltaub wrote me. He had seen many patients with the condition in his career. But, he noted, "This particular case was unusual only because the alexia was caused by a very small stroke," and the damage was in a location linked to a decades-old debate surrounding the neuroscience of language.

The Case for the Visual Word Form Area

Mike was not my patient, but his stroke, which occurred in 2010, was of such significance to our understanding of how the brain processes the written word that I was determined to track down the details of his case. Studies in the early 1990s had identified activity in the left inferior occipitotemporal cortex—the area damaged in Mike's stroke—when someone saw letters that spelled out meaningful words. Thus scientists wondered whether the region was involved in recognizing words and letters from the contrasting lines and curves detected by the eye. They called this location the "visual word form area."

These early findings were exciting, puzzling, and contentious—the kind that, at a conference, provokes shouts and screams from otherwise calm and collected scientists. Such studies were also very new, recalled neuroscientist Peter T. Fox, a pioneer of such research on language. As a result, many other researchers were dubious of the work as a whole. "I remember getting grants rejected with comments like 'central brain activation can teach us nothing about language—nothing, it has no role in language studies'," Fox said.

Neuroscientists have two approaches for assessing the role of a particular brain area. Lesion-deficit studies were first performed by ancient Roman physicians, who noticed that damage to regions of the brain caused specific behavioral deficits (e.g., a gladiator gets hit on the right side of his head—the lesion—and loses the ability to move his left arm—the deficit). Activation studies using neuroimaging work the other way: you record where the brain is active while a person performs a particular task, like reading. (In this sense, Mike's story, involving both damage and imaging, sits at the intersection of the classical and the modern.)

Each method has its strengths and weaknesses. "Activation studies show that a particular brain region is engaged during a task, but it doesn't show that it's a necessary and sufficient condition for that task," Fox explained. "And yet, because the brain has redundant wiring, it's nearly impossible to claim that a stroke in a particular area, like the visual word form area, is necessary and sufficient to eliminate a particular function." So even though multiple activation studies reported that the visual word form area existed, the behavioral neurologists weren't convinced.

And there were several reasons to think such an area was "a myth," as one skeptical 2003 paper put it. For one thing, the very search for a "visual word form area" was misguided because it personified the brain's real work of processing and decoding visual information. Brain regions act as an assembly line of neural groups that each contribute some cognitive rivet or weld to a larger percept; to posit a visual word form area was to mistake an assembly line for a one-man band. Finally, there was the problem of evolution. Because reading is a relatively recent cultural invention, humans couldn't have evolved to read text in the same way that mammals evolved to recognize faces—there simply hadn't been enough time. This made it hard to believe in a brain structure expressly devoted to reading.

A decade before Mike's stroke, Turkeltaub had shown that a child's brain shifts where and how it processes text as he or she learns to read. But because children are also learning to walk, talk, write, and otherwise be human, it was hard to say which brain changes were caused by literacy alone. It took a series of studies with adults learning to read for the first time to nail down the truth about the visual word form area.

At the turn of the 21st century, a large group of Colombian guerrillas abandoned their weapons and, after decades of fighting, rejoined mainstream society. Without formal education, many of the fighters learned to read for the first time as 20-somethings. Camouflaged within this Colombian drama, a group of neuroscientists led by Manuel Carreiras, Scientific Director of Spain's Basque Center on Cognition, Brain and Language, saw an opportunity to study how learning to read changes the adult brain.

Working with Catherine Price, a neuroscientist at University College London, Carreiras used M.R.I. to track changes in the brain as the former guerrillas learned to read. They discovered that learning to read was associated with enlarged grey matter in specific brain areas and, further, that brain activity within these regions became more tightly coordinated with improved literacy, showing that structural and functional changes occurred simultaneously. Much as high commuter traffic promotes the asphalting and widening of thoroughfares to improve traffic flow, the brain activity associated with learning to read promotes the strengthening of specific neural highways, allowing specialized neural centers to perform the cognitive task of reading more efficiently. Carreiras and Price's findings, published in 2009, fleshed out the brain's reading networks but didn't find any evidence supporting the role of a visual word form area.

The next major advance in the case for the visual word form area came from Stanislas Dehaene, a neuroscientist at the French Institute of Health and Medical Research. In 2010 Dehaene proposed that reading networks are built by recycling evolutionarily older visual functions, at the expense of those functions. To test his hypothesis, Dehaene gathered illiterate adults, people who had learned to read in adulthood, and literate adults schooled in childhood. Comparing these groups, Dehaene reported that the more literate a person was, the less responsive the visual word form area became to other visual stimuli—in particular to faces—and the more selectively it responded to writing. In poor readers, meanwhile, the visual word form area responded relatively indiscriminately to words, faces, shapes, and checkerboards. As Dehaene predicted, when one learns to read, the left inferior occipitotemporal cortex is recycled from a general visual recognition center into a specialized word recognition center, at the expense of other tasks.

Mike's stroke served as the final piece of the puzzle: a case of pure alexia caused by damage confined to the visual word form area. As with most scientific knowledge, the exact role of the visual word form area is still unresolved. However, it is no longer a myth. And the way in which the area is repurposed from general visual recognition to word specialist is a reminder of how powerfully the brain can retool and adapt—processes essential to both learning and healing.

Thanks to that brain plasticity, Mike recovered quickly. Guided by Whitney Postman, a speech-language therapist and an assistant professor at St. Louis University whose postdoctoral work at the NIH focused on post-stroke aphasia, Mike began tailored therapy to help him overcome his difficulty with function words. She fashioned a "brute force approach" in which he viewed, copied, said, and spelled lists of vocabulary in various contexts. "It's pretty much the way we learn to use these function words as children," Postman said. For his part, Mike was a highly motivated pupil. He and Postman had only two sessions. One month after his stroke, his letter-by-letter reading was greatly improved. After three months, he was reading patient charts and lab values at work—and his morning paper.
