Helen Wills Neuroscience Institute
Kayser A.S.,University of California at San Francisco |
Kayser A.S.,VA Northern California Health Care System |
D'Esposito M.,VA Northern California Health Care System |
D'Esposito M.,Helen Wills Neuroscience Institute |
D'Esposito M.,University of California at Berkeley
Cerebral Cortex | Year: 2013
Learning progressively more abstract stimulus-response mappings requires progressively more anterior regions of the lateral frontal cortex. Using an individual differences approach, we studied subjects with frontal lesions performing a hierarchical reinforcement-learning task to investigate how frontal cortex contributes to abstract rule learning. We predicted that subjects with lesions of the left pre-dorsal premotor cortex (pre-PMd), a region implicated in abstract rule learning, would demonstrate impaired acquisition of second-order, as opposed to first-order, rules. We found that 4 subjects with such lesions did indeed demonstrate a second-order rule-learning impairment, but that these subjects nonetheless performed better than subjects with other frontal lesions in a second-order rule condition. This finding resulted from both their restricted exploration of the feature space and the task structure of this condition, for which they identified partially representative first-order rules. Significantly, across all subjects, suboptimal but above-chance performance in this condition correlated with increasing disconnection of left pre-PMd from the putative functional hierarchy, defined by reduced functional connectivity between left pre-PMd and adjacent nodes. These findings support the theory that activity within lateral frontal cortex shapes the search for relevant stimulus-response mappings, while emphasizing that the behavioral correlate of impairments depends critically on task structure. © 2012 The Author.
Thomas A.W.,Helen Wills Neuroscience Graduate Program |
Caporale N.,University of California at Berkeley |
Caporale N.,Helen Wills Neuroscience Institute |
Wu C.,University of California at Irvine |
And 2 more authors.
Developmental Cognitive Neuroscience | Year: 2016
Early life adversity is associated with increased risk for mental and physical health problems, including substance abuse. Changes in neural development caused by early life insults could cause or complicate these conditions. Maternal separation (MS) is a model of early adversity for rodents. Clear effects of MS have been shown on behavioral flexibility in rats, but studies of its effects on cognition in mice have yielded mixed results. We hypothesized that previous studies focused on adult mice may have overlooked a developmental transition point when juvenile mice exhibit greater flexibility in reversal learning. Here, using a 4-choice reversal learning task, we find that early MS leads to decreased flexibility in post-weaning juvenile mice, but no significant effects in adults. In a further study of voluntary ethanol consumption, we found that adult mice that had experienced MS showed greater cumulative 20% ethanol consumption in an intermittent access paradigm compared to controls. Our data confirm that the MS paradigm can reduce cognitive flexibility in mice and may enhance risk for substance abuse. We discuss possible interpretations of these data as stress-related impairment or adaptive earlier maturation in response to an adverse environment. © 2015 The Authors.
Agency: GTR | Branch: EPSRC | Program: | Phase: Training Grant | Award Amount: 3.94M | Year: 2014
The achievements of modern research and their rapid progress from theory to application are increasingly underpinned by computation. Computational approaches are often hailed as a new third pillar of science - alongside empirical and theoretical work. While its breadth makes computation almost as ubiquitous as mathematics as a key tool in science and engineering, it is a much younger discipline and stands to benefit enormously from increased capacity and from greater integration, standardization, and professionalism. The development of new ideas and techniques in computing is extremely rapid, the progress enabled by these breakthroughs is enormous, and their impact on society is substantial: modern technologies ranging from the Airbus A380 to MRI scanners and smartphone CPUs could not have been developed without computer simulation; progress on major scientific questions, from climate change to astronomy, is driven by the results of computational models; major investment decisions are underwritten by computational modelling. Furthermore, simulation modelling is emerging as a key tool within domains experiencing a data revolution, such as biomedicine and finance. This progress has been enabled by the rapid growth of computational power, which in the past came from increases in the rate at which processor instructions are carried out. However, this clock rate cannot be increased much further, and recent computational architectures (such as GPUs and the Intel Phi) instead provide additional computational power through on the order of hundreds of computational cores in the same unit. This opens up the potential for further order-of-magnitude performance improvements, but exploiting it requires specialist training in parallel programming and computational methods.
Computational advances are enabled by new hardware, and innovations in algorithms, numerical methods and simulation techniques, and application of best practice in scientific computational modelling. The most effective progress and highest impact can be obtained by combining, linking and simultaneously exploiting step changes in hardware, software, methods and skills. However, good computational science training is scarce, especially at post-graduate level. The Centre for Doctoral Training in Next Generation Computational Modelling will develop 55+ graduate students to address this skills gap. Trained as future leaders in Computational Modelling, they will form the core of a community of computational modellers crossing disciplinary boundaries, constantly working to transfer the latest computational advances to related fields. By tackling cutting-edge research from fields such as Computational Engineering, Advanced Materials, Autonomous Systems and Health, whilst communicating their advances and working together with a world-leading group of academic and industrial computational modellers, the students will be perfectly equipped to drive advanced computing over the coming decades.
Bavelier D.,University of Rochester |
Levi D.M.,Helen Wills Neuroscience Institute |
Li R.W.,Helen Wills Neuroscience Institute |
Dan Y.,Howard Hughes Medical Institute |
Hensch T.K.,Harvard University
Journal of Neuroscience | Year: 2010
Adult brain plasticity, although possible, remains more restricted in scope than during development. Here, we address conditions under which circuit rewiring may be facilitated in the mature brain. At a cellular and molecular level, adult plasticity is actively limited. Some of these "brakes" are structural, such as perineuronal nets or myelin, which inhibit neurite outgrowth. Others are functional, acting directly upon excitatory-inhibitory balance within local circuits. Plasticity in adulthood can be induced either by lifting these brakes through invasive interventions or by exploiting endogenous permissive factors, such as neuromodulators. Using the amblyopic visual system as a model, we discuss genetic, pharmacological, and environmental removal of brakes to enable recovery of vision in adult rodents. Although these mechanisms remain largely uncharted in the human, we consider how they may provide a biological foundation for the remarkable increase in plasticity after action video game play by amblyopic subjects. Copyright © 2010 the authors.
News Article | December 20, 2016
When you're suddenly able to understand someone despite their thick accent, or finally make out the lyrics of a song, your brain appears to be re-tuning to recognize speech that was previously incomprehensible. University of California, Berkeley, neuroscientists have now observed this re-tuning in action by recording directly from the surface of a person's brain as the words of a previously unintelligible sentence suddenly pop out after the subject is told the meaning of the garbled speech. The re-tuning takes place within a second or less, they found. The observations confirm speculation that neurons in the auditory cortex that pick out aspects of sound associated with language - the components of pitch, amplitude and timing that distinguish words or smaller sound bits called phonemes - continually tune themselves to pull meaning out of a noisy environment. "The tuning that we measured when we replayed the garbled speech emphasizes features that are present in speech," said first author and UC Berkeley graduate student Chris Holdgraf. "We believe that this tuning shift is what helps you 'hear' the speech in that noisy signal. The speech sounds actually pop out from the signal." Such pop-outs happen all the time: when you learn to hear the words of a foreign language, for example, or latch onto a friend's conversation in a noisy bar. Or visually, when someone points out a number in what seems like a jumbled mass of colored dots, and somehow you cannot un-see that number. "Something is changing in the auditory cortex to emphasize anything that might be speech-like, and increasing the gain for those features, so that I actually hear that sound in the noise," said co-author Frédéric Theunissen, a UC Berkeley professor of psychology and a member of the Helen Wills Neuroscience Institute. "It's not like I am generating those words in my head. I really have the feeling of hearing the words in the noise with this pop-out phenomenon. It is such a mystery." 
"It is unbelievable how fast and plastic the brain is," added co-author Robert Knight, a UC Berkeley professor of psychology and Helen Wills Institute researcher. "In seconds or less, the electrical activity in the brain changes its response properties to pull out linguistic information. Behaviorally, this is a classic phenomenon, but this is the first time we have any evidence on how it actually works in humans." The findings will aid Knight and his colleagues in their quest to develop a speech decoder: a device implanted in the brain that would interpret people's imagined speech and help speechless patients, such as those paralyzed by Lou Gehrig's disease, communicate. Holdgraf, Knight, Theunissen and their colleagues will report their findings Dec. 20 in the journal Nature Communications. Working with epilepsy patients who had pieces of their skull removed and electrodes placed on the brain surface to track seizures - what is known as electrocorticography - Holdgraf presented seven subjects with a simple auditory test. He first played a highly garbled sentence, which almost no one initially understood. He then played a normal, easy-to-understand version of the sentence, and then immediately repeated the garbled version. Almost everyone understood the sentence the second time around, even though they initially found it unintelligible. The electrodes on the brain surface recorded major changes in neuronal activity before and after. When the garbled sentence was first played, activity in the auditory cortex as measured by the 468 electrodes was small. The brain could hear the sound, but couldn't do much with it, Knight said. When the clear sentence was played, the electrodes, as expected, recorded a pattern of neural activity consistent with the brain tuning into language.
When the garbled sentence was played a second time, the electrodes recorded nearly the same language-appropriate neural activity, as if the underlying neurons had re-tuned to pick out words or parts of words. "They respond as if they were hearing unfiltered normal speech," Holdgraf said. "It changes the pattern of activity in the brain such that there is information there that wasn't there before. That information is this unfiltered speech." "Normal language activates tuning properties that are related to extraction of meaning and phonemes in the language," Knight said. "Here, after you primed the brain with the unscrambled sentence, the tuning to the scrambled speech looked like the tuning to language, which allows the brain to extract meaning out of noise." This trick is a testament to the brain's ability to automatically pick and choose information from a noisy and overwhelming environment, focusing only on what's relevant to a situation and discarding the rest. "Your brain tries to get around the problem of too much information by making assumptions about the world," Holdgraf said. "It says, 'I am going to restrict the many possible things I could pull out from an auditory stimulus so that I don't have to do a lot of processing.' By doing that, it is faster and expends less energy." That means, though, that noisy or garbled sound can be hard to interpret. Holdgraf and his colleagues showed how quickly the brain can be primed to tune in to language. The neurons from which they recorded activity were not tuned to a single frequency, like a radio, Theunissen said. Rather, neurons in the upper levels of the auditory cortex respond to more complex aspects of sound, such as changes in frequency and amplitude - spectro-temporal modulation that we perceive as pitch, timbre and rhythm.
While similar studies in animals, such as ferrets, have shown that neurons change how they filter or tune into a specific type of spectro-temporal modulation, the new results are the first in humans, and show a more rapid shift to process human language than has been seen in animals, he said. The researchers used an analysis technique first used by Theunissen to determine which complex characteristics of natural sound, like speech, neurons in the higher levels of the auditory cortex respond to, with a particular focus on birdsong. This is the first time the technique has been applied to humans to study how receptive fields change in neurons in the auditory cortex. Co-authors of the paper are Wendy de Heer of UC Berkeley's psychology department, postdoctoral fellows Brian Pasley and Jochem Rieger of the Helen Wills Neuroscience Institute, and neurologists Nathan Crone of Johns Hopkins School of Medicine in Baltimore, and Jack Lin of the UC Irvine Comprehensive Epilepsy Program. The work was supported by a graduate fellowship from the National Institute of Neurological Disorders and Stroke (NINDS 2R37NS021135, NIDCD R01 007293) and the Nielsen Corporation.
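The receptive-field analysis described above is commonly implemented as regularized linear regression from a time-lagged stimulus spectrogram to the recorded neural response. The sketch below is only an illustration of that general approach, not the authors' code; the function name, parameters, and synthetic test data are all invented for this example.

```python
import numpy as np

def fit_strf(spectrogram, response, n_lags, alpha=1.0):
    """Fit a spectro-temporal receptive field (STRF) by ridge regression.

    spectrogram : (n_times, n_freqs) stimulus representation
    response    : (n_times,) neural response (e.g. high-gamma power)
    n_lags      : number of time lags of stimulus history to include
    alpha       : ridge penalty strength

    Returns the STRF as an (n_lags, n_freqs) weight matrix.
    """
    n_times, n_freqs = spectrogram.shape
    # Design matrix: each row holds the current and n_lags-1 past frames.
    X = np.zeros((n_times, n_lags * n_freqs))
    for lag in range(n_lags):
        X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spectrogram[:n_times - lag]
    # Closed-form ridge solution: w = (X^T X + alpha I)^-1 X^T y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ response)
    return w.reshape(n_lags, n_freqs)
```

In this framing, a "re-tuning" of the kind reported in the article would appear as a change in the fitted weight matrix between the first and second presentations of the garbled sentence.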
Lara A.H.,Helen Wills Neuroscience Institute |
Wallis J.D.,Helen Wills Neuroscience Institute |
Wallis J.D.,University of California at Berkeley
Journal of Vision | Year: 2012
Temporary storage of information in visual short-term memory (VSTM) is a key component of many complex cognitive abilities. However, it is highly limited in capacity. Understanding the neurophysiological nature of this capacity limit will require a valid animal model of VSTM. We used a multiple-item color change detection task to measure macaque monkeys' VSTM capacity. Subjects' performance deteriorated and reaction times increased as a function of the number of items in memory. Additionally, we measured the precision of the memory representations by varying the distance between sample and test colors. In trials with similar sample and test colors, subjects made more errors compared to trials with highly discriminable colors. We modeled the error distribution as a Gaussian function and used this to estimate the precision of VSTM representations. We found that as the number of items in memory increases, the precision of the representations decreases dramatically. Additionally, we found that focusing attention on one of the objects increases the precision with which that object is stored and degrades the precision of the remaining objects. These results are in line with recent findings in human psychophysics and provide a solid foundation for understanding the neurophysiological nature of the capacity limit of VSTM. © ARVO.
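The Gaussian error model mentioned in the abstract can be sketched in a few lines: treat response errors as draws from a zero-mean Gaussian and report the reciprocal of the fitted standard deviation as precision. This is a minimal illustrative version only; the function name and the zero-mean assumption are ours, not the authors' analysis code.

```python
import numpy as np

def vstm_precision(errors_deg):
    """Estimate memory precision from response errors (degrees of color angle).

    Errors are modeled as a zero-mean Gaussian; sigma is its
    maximum-likelihood estimate (root mean square of the errors),
    and precision is defined as 1/sigma.
    """
    errors = np.asarray(errors_deg, dtype=float)
    sigma = np.sqrt(np.mean(errors ** 2))  # MLE of sigma when the mean is fixed at 0
    return 1.0 / sigma
```

Under this model, the abstract's central result corresponds to `vstm_precision` decreasing as set size grows, because the error distribution widens when more items must be held in memory.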
Sohl-Dickstein J.,Biophysics Graduate Group |
Sohl-Dickstein J.,University of California at Berkeley |
Battaglino P.B.,Stanford University |
Battaglino P.B.,University of California at Berkeley |
And 3 more authors.
Physical Review Letters | Year: 2011
Fitting probabilistic models to data is often difficult, due to the general intractability of the partition function. We propose a new parameter fitting method, minimum probability flow (MPF), which is applicable to any parametric model. We demonstrate parameter estimation using MPF in two cases: a continuous state space model, and an Ising spin glass. In the latter case, MPF outperforms current techniques by at least an order of magnitude in convergence time with lower error in the recovered coupling parameters. © 2011 American Physical Society.
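For the Ising case, the MPF objective with single-spin-flip neighborhoods has a simple closed form: flipping spin i changes the energy E(x) = -x^T J x / 2 by dE_i = 2 x_i (Jx)_i, and the objective sums exp(-dE_i / 2) over all data points and flips. The following is our own illustrative implementation of that objective, not the authors' released code.

```python
import numpy as np

def mpf_objective(J, X):
    """Minimum probability flow objective for an Ising model.

    J : (n, n) symmetric coupling matrix with zero diagonal.
    X : (m, n) array of observed spin configurations in {-1, +1}.

    Uses single-spin-flip neighborhoods: each data point is connected
    to the n states one flip away, and the cost averages
    exp((E(x) - E(x')) / 2) over data points x and neighbors x'.
    """
    fields = X @ J            # (m, n): local field (J x)_i for each sample
    dE = 2.0 * X * fields     # energy change from flipping each spin
    return np.exp(-dE / 2.0).sum() / X.shape[0]
```

Because this objective never touches the partition function, it can be handed directly to any gradient-based optimizer to estimate the couplings J from data.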
Cameron P.,Helen Wills Neuroscience Institute |
Hiroi M.,Helen Wills Neuroscience Institute |
Ngai J.,Helen Wills Neuroscience Institute |
Scott K.,Helen Wills Neuroscience Institute |
Scott K.,Howard Hughes Medical Institute
Nature | Year: 2010
The detection of water and the regulation of water intake are essential for animals to maintain proper osmotic homeostasis. Drosophila and other insects have gustatory sensory neurons that mediate the recognition of external water sources, but little is known about the underlying molecular mechanism for water taste detection. Here we identify a member of the degenerin/epithelial sodium channel family, PPK28, as an osmosensitive ion channel that mediates the cellular and behavioural response to water. We use molecular, cellular, calcium imaging and electrophysiological approaches to show that ppk28 is expressed in water-sensing neurons, and that loss of ppk28 abolishes water sensitivity. Moreover, ectopic expression of ppk28 confers water sensitivity to bitter-sensing gustatory neurons in the fly and sensitivity to hypo-osmotic solutions when expressed in heterologous cells. These studies link an osmosensitive ion channel to water taste detection and drinking behaviour, providing the framework for examining the molecular basis for water detection in other animals. © 2010 Macmillan Publishers Limited. All rights reserved.
Agarwal G.,Helen Wills Neuroscience Institute |
Isacoff E.,Helen Wills Neuroscience Institute |
Isacoff E.,University of California at Berkeley |
Isacoff E.,Lawrence Berkeley National Laboratory
Journal of Neurophysiology | Year: 2011
Insect pheromonal glomeruli are thought to track the fine spatiotemporal features of one or a few odorants to aid conspecific localization. However, it is not clear whether they function differently from generalist glomeruli, which respond to many odorants. In this study, we test how DA1, a model pheromonal glomerulus in the fruit fly, represents the spatial and temporal properties of its input, compared with other glomeruli. We combine calcium imaging and electrical stimulation in an isolated brain preparation for a simultaneous, unbiased comparison of the functional organization of many glomeruli. In contrast to what is found in other glomeruli, we find that ipsilateral and contralateral stimuli elicit distinct spatial patterns of activity within DA1. DA1's output shows a greater preference for ipsilateral stimuli in males than in females. DA1 experiences greater and more rapid inhibition than other glomeruli, allowing it to report slight interantennal delays in stimulus onset in a "winner-take-all" manner. DA1's ability to encode spatiotemporal input features distinguishes it from other glomeruli in the fruit fly antennal lobe but relates it to pheromonal glomeruli in other insect species. We propose that DA1 is specialized to help the fly localize and orient with respect to pheromone sources. © 2011 the American Physiological Society.
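The "winner-take-all" readout described in this abstract can be caricatured with a toy rate model of two mutually inhibiting units, where the unit whose input arrives first suppresses the other. This is purely an illustration of the computation, not the circuit model from the paper; all parameter values are invented.

```python
import numpy as np

def winner_take_all(onset_a_ms, onset_b_ms, dt=0.1, t_max=50.0,
                    w_inh=5.0, tau=5.0):
    """Toy winner-take-all circuit: two mutually inhibiting rate units.

    Each unit receives a unit-strength input switched on at its own
    onset time (ms). Strong, fast cross-inhibition lets the unit whose
    input arrives first suppress the other, turning a small onset delay
    into an all-or-none outcome. Returns 0 or 1, the winning unit.
    """
    onsets = np.array([onset_a_ms, onset_b_ms])
    r = np.zeros(2)
    for step in range(int(t_max / dt)):
        t = step * dt
        inputs = (t >= onsets).astype(float)
        drive = inputs - w_inh * r[::-1]            # each unit inhibits the other
        r += (dt / tau) * (-r + np.maximum(drive, 0.0))  # rectified leaky dynamics
    return int(np.argmax(r))
```

In this caricature, the two units stand in for the ipsilateral and contralateral inputs to DA1, and the onset difference stands in for the interantennal delay that the glomerulus is proposed to report.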
Miller E.W.,Helen Wills Neuroscience Institute
Current Opinion in Chemical Biology | Year: 2016
Voltage imaging has the potential to unravel the contributions that rapid changes in membrane voltage make to cellular physiology, especially in the context of neuroscience. Small-molecule fluorophores are especially attractive because they can, in theory, provide fast and sensitive measurements of membrane potential dynamics. A number of classes of small-molecule voltage indicators will be discussed, including dyes with improved two-photon voltage sensing, near-infrared optical profiles for use in in vivo applications, and newly developed electron-transfer-based indicators, or VoltageFluors, that can be tuned across a range of wavelengths to enable all-optical voltage manipulation and measurement. Limitations and a 'wish-list' for voltage indicators will also be discussed. © 2016 Elsevier Ltd.