News Article
Site: http://news.mit.edu/topic/mitmechanical-engineering-rss.xml

A new study from the Picower Institute for Learning and Memory in the Feb. 4 online edition of Neuron sheds light on the innate plasticity of the adult brain at its most fundamental level — the synapse. Neuron-to-neuron communication that allows the brain to coordinate activity and store new information takes place at synapses. If an outside stimulus doesn’t elicit a synaptic change, it doesn’t register — no learning or memory formation takes place. Synapses can be strengthened or weakened, or even added and eliminated, in response to new information.

Synaptic malfunctions are implicated in certain diseases. A better understanding of how synapses are formed and dismantled in response to external stimuli can help address a wide range of disorders, from drug addiction to mental illness.

“The key to enabling plasticity in the adult brain is understanding which elements of a brain circuit are changeable and which aren’t, and under what circumstances,” says study author Elly Nedivi, Picower researcher and professor of neuroscience in the MIT Department of Brain and Cognitive Sciences. “The good news is that while parts of the circuit are hard-wired, others are not — they retain a capacity for remodeling.”

A neuron is bombarded with signals from hundreds of presynaptic partners. Synapses act as conduits for these incoming signals. Excitatory neurotransmitters flow from the presynaptic to the postsynaptic neuron at synaptic locations that sit on bulbous protrusions with a rounded head and thin neck, termed spines. The long branching dendrites of a single neuron can display hundreds of spines, like leaves on a tree branch. When spines appear and disappear, a neuron can gain new connections or lose existing ones.

“If spines disappear, they rarely come back to the same location; new spines seek out alternative locations,” says biology graduate student Katherine Villa, co-first author on the study. “It’s as if, after deciding that a connection is not worth keeping, neurons will try to replace it with a different contact.”

Using cutting-edge imaging techniques developed in collaboration with Peter So, MIT professor of mechanical and biological engineering, Nedivi’s team tracked the daily dynamics of all the dendritic spines on a single neuron in the living mouse brain, as well as all the excitatory and inhibitory synapses on these neurons. The ability to label inhibitory synapses in live animals is fairly recent; even today, it is notoriously difficult to observe excitatory and inhibitory synapses working side by side.

Directly visualizing inhibitory synapses revealed the surprising fact that while many reside on the shaft of dendritic branches, approximately 30 percent reside on dendritic spines alongside excitatory synapses. Another surprise was that when inhibitory synapses are removed, they return again and again to the same location.

“Clearly, the goal here is not to change partners as we see for excitatory connections,” says biology graduate student Kalen Berry, Villa’s co-first author. “We think that inhibitory synapses can act as a kind of gatekeeper, flickering on and off to shut down excitatory connections as needed.”

Interestingly, the dual-purpose spines are large and extremely stable, as are the excitatory connections onto them. “This is essentially a hard-wired part of the circuit,” Nedivi says. “But we still have the potential to modify it via the nearby inhibitory synapse.”

These findings raise questions about why some excitatory connections on singly innervated spines can be restructured while those on dually innervated spines cannot. How does the structural plasticity of inhibitory synapses alter excitatory circuit properties, and what enables their rapid insertion and removal at stable sites? The answers to these questions could shed light on ways to enhance plasticity in the adult brain and on synapse-related disorders.

This work was supported by the National Eye Institute.


News Article
Site: http://www.rdmag.com/rss-feeds/all/rss.xml/all

When the brain forms memories or learns a new task, it encodes the new information by tuning connections between neurons. Massachusetts Institute of Technology (MIT) neuroscientists have discovered a novel mechanism that contributes to the strengthening of these connections, also called synapses.

At each synapse, a presynaptic neuron sends chemical signals to one or more postsynaptic receiving cells. In most previous studies of how these connections evolve, scientists have focused on the role of the postsynaptic neurons. However, the MIT team has found that presynaptic neurons also influence connection strength.

“This mechanism that we’ve uncovered on the presynaptic side adds to a toolkit that we have for understanding how synapses can change,” says Troy Littleton, a professor in the departments of Biology and Brain and Cognitive Sciences at MIT, a member of MIT’s Picower Institute for Learning and Memory, and the senior author of the study, which appears in Neuron.

Learning more about how synapses change their connections could help scientists better understand neurodevelopmental disorders such as autism, since many of the genetic alterations linked to autism are found in genes that code for synaptic proteins. Richard Cho, a research scientist at the Picower Institute, is the paper’s lead author.

Rewiring the brain

One of the biggest questions in the field of neuroscience is how the brain rewires itself in response to changing behavioral conditions — an ability known as plasticity. This is particularly important during early development but continues throughout life as the brain learns and forms new memories.

Over the past 30 years, scientists have found that strong input to a postsynaptic cell causes it to traffic more receptors for neurotransmitters to its surface, amplifying the signal it receives from the presynaptic cell. This phenomenon, known as long-term potentiation (LTP), occurs following persistent, high-frequency stimulation of the synapse. Long-term depression (LTD), a weakening of the postsynaptic response caused by very low-frequency stimulation, can occur when these receptors are removed.

Scientists have focused less on the presynaptic neuron’s role in plasticity, in part because it is more difficult to study, Littleton says. His lab has spent several years working out the mechanism for how presynaptic cells release neurotransmitter in response to spikes of electrical activity known as action potentials. When the presynaptic neuron registers an influx of calcium ions, carrying the electrical surge of the action potential, vesicles that store neurotransmitters fuse to the cell’s membrane and spill their contents outside the cell, where they bind to receptors on the postsynaptic neuron.

The presynaptic neuron also releases neurotransmitter in the absence of action potentials, in a process called spontaneous release. These “minis” were previously thought to represent noise occurring in the brain. However, Littleton and Cho found that minis could be regulated to drive synaptic structural plasticity.

To investigate how synapses are strengthened, Littleton and Cho studied a type of synapse known as the neuromuscular junction in fruit flies. The researchers stimulated the presynaptic neurons with a rapid series of action potentials over a short period of time. As expected, these cells released neurotransmitter synchronously with action potentials. However, to their surprise, the researchers found that mini events were greatly enhanced well after the electrical stimulation had ended.

“Every synapse in the brain is releasing these mini events, but people have largely ignored them because they only induce a very small amount of activity in the postsynaptic cell,” Littleton says. “When we gave a strong activity pulse to these neurons, these mini events, which are normally very low-frequency, suddenly ramped up and they stayed elevated for several minutes before going down.”

Synaptic growth

The enhancement of minis appears to provoke the postsynaptic neuron to release a signaling factor, still unidentified, that goes back to the presynaptic cell and activates an enzyme called PKA. This enzyme interacts with a vesicle protein called complexin, which normally acts as a brake, clamping vesicles to prevent neurotransmitter release until it’s needed. Stimulation by PKA modifies complexin so that it releases its grip on the neurotransmitter vesicles, producing mini events.

When these small packets of neurotransmitter are released at elevated rates, they help stimulate growth of new connections, known as boutons, between the presynaptic and postsynaptic neurons. This makes the postsynaptic neuron even more responsive to any future communication from the presynaptic neuron.

“Typically you have 70 or so of these boutons per cell, but if you stimulate the presynaptic cell you can grow new boutons very acutely. It will double the number of synapses that are formed,” Littleton says.

The researchers observed this process throughout the flies’ larval development, which lasts three to five days. However, Littleton and Cho demonstrated that acute changes in synaptic function could also lead to synaptic structural plasticity during development.

“Machinery in the presynaptic terminal can be modified in a very acute manner to drive certain forms of plasticity, which could be really important not only in development, but also in more mature states where synaptic changes can occur during behavioral processes like learning and memory,” Cho says.


News Article
Site: http://www.biosciencetechnology.com/rss-feeds/all/rss.xml/all

Strengthening and weakening the connections between neurons, known as synapses, is vital to the brain’s development and everyday function. One way that neurons weaken their synapses is by swallowing up receptors on their surfaces that normally respond to glutamate, one of the brain’s excitatory chemicals.

In a new study, MIT neuroscientists have detailed how this receptor reabsorption takes place, allowing neurons to get rid of unwanted connections and to dampen their sensitivity in cases of overexcitation.

“Pulling in and putting out receptors is a dynamic process, and it’s highly regulated by a neuron’s environment,” said Elly Nedivi, a professor of brain and cognitive sciences and member of MIT’s Picower Institute for Learning and Memory. “Our understanding of how receptors are pulled in and how regulatory pathways impact that has been quite poor.”

Nedivi and colleagues found that a protein known as CPG2 is key to this regulation, which is notable because mutations in the human version of CPG2 have been previously linked to bipolar disorder. “This sets the stage for testing various human mutations and their impact at the cellular level,” said Nedivi, who is the senior author of a Jan. 14 Current Biology paper describing the findings.

The paper’s lead author is former Picower Institute postdoc Sven Loebrich. Other authors are technical assistant Marc Benoit, recent MIT graduate Jaclyn Konopka, former postdoc Joanne Gibson, and Jeffrey Cottrell, the director of translational research at the Stanley Center for Psychiatric Research at the Broad Institute.

Neurons communicate at synapses via neurotransmitters such as glutamate, which flow from the presynaptic to the postsynaptic neuron. This communication allows the brain to coordinate activity and store information such as new memories.

Previous studies have shown that postsynaptic cells can actively pull in some of their receptors in a phenomenon known as long-term depression (LTD). This important process allows cells to weaken and eventually eliminate poor connections, as well as to recalibrate their set point for further excitation. It can also protect them from overexcitation by making them less sensitive to an ongoing stimulus.

Pulling in receptors requires the cytoskeleton, which provides the physical power, and a specialized complex of proteins known as the endocytic machinery. This machinery performs endocytosis — the process of pulling in a section of the cell membrane in the form of a vesicle, along with anything attached to its surface. At the synapse, this process is used to internalize receptors. Until now, it was unknown how the cytoskeleton and the endocytic machinery were linked.

In the new study, Nedivi’s team found that the CPG2 protein forms a bridge between the cytoskeleton and the endocytic machinery. “CPG2 acts like a tether for the endocytic machinery, which the cytoskeleton can use to pull in the vesicles,” Nedivi said. “The glutamate receptors that are in the membrane will get pinched off and internalized.”

They also found that CPG2 binds to the endocytic machinery through a protein called EndoB2. This CPG2-EndoB2 interaction occurs only during receptor internalization provoked by synaptic stimulation and is distinct from the constant recycling of glutamate receptors that also occurs in cells. Nedivi’s lab has previously shown that this constant recycling, which does not change the cells’ overall sensitivity to glutamate, is also governed by CPG2.

“This study is intriguing because it shows that by engaging different complexes, CPG2 can regulate different types of endocytosis,” said Linda Van Aelst, a professor at Cold Spring Harbor Laboratory who was not involved in the research.

When synapses are too active, it appears that an enzyme called protein kinase A (PKA) binds to CPG2 and causes it to launch activity-dependent receptor internalization. CPG2 may also be controlled by other factors that regulate PKA, including hormone levels, Nedivi said.

In 2011, a large consortium including researchers from the Broad Institute discovered that a gene called SYNE1 is number two on the hit list of genes linked to susceptibility for bipolar disorder. They were excited to find that this gene encoded CPG2, a regulator of glutamate receptors, given prior evidence implicating these receptors in bipolar disorder.

In a study published in December, Nedivi and colleagues, including Loebrich and co-lead author Mette Rathje, identified and isolated the human messenger RNA that encodes CPG2. They showed that when rat CPG2 was knocked out, its function could be restored by the human version of the protein, suggesting both versions have the same cellular function.

Rathje, a Picower Institute postdoc in Nedivi’s lab, is now studying mutations in human CPG2 that have been linked to bipolar disorder. She is testing their effect on synaptic function in rats, in hopes of revealing how those mutations might disrupt synapses and influence the development of the disorder.

Nedivi suspects that CPG2 is one player in a constellation of genes that influence susceptibility to bipolar disorder. “My prediction would be that in the general population there’s a range of CPG2 function, in terms of efficacy,” Nedivi said. “Within that range, it will depend what the rest of the genetic and environmental constellation is, to determine whether it gets to the point of causing a disease state.”

The research was funded by the Picower Institute Innovation Fund and the Gail Steel Fund for Bipolar Research.


News Article
Site: http://www.biosciencetechnology.com/rss-feeds/all/rss.xml/all

Sleep is usually considered an all-or-nothing state: The brain is either entirely awake or entirely asleep. However, MIT neuroscientists have discovered a brain circuit that can trigger small regions of the brain to fall asleep or become less alert, while the rest of the brain remains awake.

This circuit originates in a brain structure known as the thalamic reticular nucleus (TRN), which relays signals to the thalamus and then the brain’s cortex, inducing pockets of the slow, oscillating brain waves characteristic of deep sleep. Slow oscillations also occur during coma and general anesthesia, and are associated with decreased arousal. With enough TRN activity, these waves can take over the entire brain.

The researchers believe the TRN may help the brain consolidate new memories by coordinating slow waves between different parts of the brain, allowing them to share information more easily.

“During sleep, maybe specific brain regions have slow waves at the same time because they need to exchange information with each other, whereas other ones don’t,” said Laura Lewis, a research affiliate in MIT’s Department of Brain and Cognitive Sciences and one of the lead authors of the new study, which appears in the journal eLife. The TRN may also be responsible for what happens in the brain when sleep-deprived people experience brief sensations of “zoning out” while struggling to stay awake, the researchers say.

The paper’s other first author is Jakob Voigts, an MIT graduate student in brain and cognitive sciences. Senior authors are Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience at MIT and an anesthesiologist at Massachusetts General Hospital, and Michael Halassa, an assistant professor at New York University. Other authors are MIT research affiliate Francisco Flores and Matthew Wilson, the Sherman Fairchild Professor in Neurobiology and a member of MIT’s Picower Institute for Learning and Memory.

Until now, most sleep research has focused on global control of sleep, which occurs when the entire brain is awash in slow waves — oscillations of brain activity created when sets of neurons are silenced for brief periods. However, recent studies have shown that sleep-deprived animals can exhibit slow waves in parts of their brain while they are still awake, suggesting that the brain can also control alertness at a local level.

The MIT team began its investigation of local control of alertness or drowsiness with the TRN because its physical location makes it perfectly positioned to play a role in sleep, Lewis said. The TRN surrounds the thalamus like a shell and can act as a gatekeeper for sensory information entering the thalamus, which then sends information to the cortex for further processing.

Using optogenetics, a technique that allows scientists to stimulate or silence neurons with light, the researchers found that if they weakly stimulated the TRN in awake mice, slow waves appeared in a small part of the cortex. With more stimulation, the entire cortex showed slow waves.

“We also found that when you induce these slow waves across the cortex, animals start to behaviorally act like they’re drowsy. They’ll stop moving around, their muscle tone will go down,” Lewis said.

The researchers believe the TRN fine-tunes the brain’s control over local brain regions, enhancing or reducing slow waves in certain regions so those areas can communicate with each other, or inducing some areas to become less alert when the brain is very drowsy. This may explain what happens in humans when they are sleep-deprived and momentarily zone out without really falling asleep.

“I’m inclined to think that happens because the brain begins to transition into sleep, and some local brain regions become drowsy even if you force yourself to stay awake,” Lewis said.

“The strength of this paper is that it’s the first to use optogenetics to try to dissect the role of part of the thalamo-cortical circuitry in generating slow waves in the cortex,” said Mark Opp, a professor of anesthesiology and pain medicine at the University of Washington who was not part of the research team.

Understanding how the brain controls arousal could help researchers design new sleep and anesthetic drugs that create a state more similar to natural sleep. Stimulating the TRN can induce deep, non-REM-like sleep states, and previous research by Brown and colleagues uncovered a circuit that turns on REM sleep.

Brown adds, “The TRN is rich in synapses — connections in the brain — that release the inhibitory neurotransmitter GABA. Therefore, the TRN is almost certainly a site of action of many anesthetic drugs, given that a large class of them acts at these synapses and produces slow waves as one of their characteristic features.”

Previous work by Lewis and colleagues has shown that unlike the slow waves of sleep, the slow waves under general anesthesia are not coordinated, suggesting a mechanism for why these drugs impair information exchange in the brain and produce unconsciousness.
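
To make the idea of "coordinated" slow waves concrete, the Python sketch below compares two simulated ~1 Hz signals that share a common phase against two whose timing is unrelated, using a simple correlation coefficient. The signal parameters and the choice of correlation as the coordination measure are assumptions made only for illustration; they are not the analysis from the eLife study.

```python
# Toy comparison of "coordinated" vs. "uncoordinated" slow waves in two brain
# regions. All parameters and the correlation measure are illustrative
# assumptions, not the analysis used in the study.
import numpy as np

rng = np.random.default_rng(1)
fs = 250                      # sampling rate in Hz
t = np.arange(0, 20, 1 / fs)  # 20 seconds of simulated signal
f_slow = 1.0                  # ~1 Hz slow oscillation

def region(phase):
    """One region's slow oscillation plus measurement noise."""
    return np.sin(2 * np.pi * f_slow * t + phase) + 0.3 * rng.standard_normal(t.size)

def corr(x, y):
    """Pearson correlation between two signals."""
    return np.corrcoef(x, y)[0, 1]

# Coordinated: both regions rise and fall with nearly the same slow-wave phase.
a, b = region(0.0), region(0.1)

# Uncoordinated: the second region's timing is unrelated; a quarter-cycle
# offset is used here as an arbitrary stand-in for independent timing.
c, d = region(0.0), region(np.pi / 2)

print("coordinated slow waves, correlation:  ", round(corr(a, b), 2))
print("uncoordinated slow waves, correlation:", round(corr(c, d), 2))
```

In the coordinated case the correlation is close to 1, while unrelated timing drives it toward 0, which is one simple way to capture the difference the researchers describe between natural sleep and anesthesia.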


News Article
Site: http://www.biosciencetechnology.com/rss-feeds/all/rss.xml/all

When you hold in mind a sentence you have just read or a phone number you’re about to dial, you’re engaging a critical brain system known as working memory.

For the past several decades, neuroscientists have believed that as information is held in working memory, brain cells associated with that information fire continuously. However, a new study from MIT has upended that theory, instead finding that as information is held in working memory, neurons fire in sporadic, coordinated bursts. These cyclical bursts could help the brain to hold multiple items in working memory at the same time, according to the researchers.

“By having these different bursts coming at different moments in time, you can keep different items in memory separate from one another,” said Earl Miller, the Picower Professor in MIT’s Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences.

Miller is the senior author of the study, which appears in the March 17 issue of Neuron. Mikael Lundqvist, a Picower Institute postdoc, and Jonas Rose, now at the University of Tübingen in Germany, are the paper’s lead authors.

Starting in the early 1970s, experiments showed that when an item is held in working memory, a subset of neurons fires continuously. However, these and subsequent studies of working memory averaged the brain’s activity over seconds or even minutes of performing the task, Miller said.

“The problem with that is, that’s not the way the brain works,” he said. “We looked more closely at this activity, not by averaging across time, but by looking from moment to moment. That revealed that something way more complex is going on.”

Miller and his colleagues recorded neuron activity in animals as they were shown a sequence of three colored squares, each in a different location. Then, the squares were shown again, but one of them had changed color. The animals were trained to respond when they noticed the square that had changed color — a task requiring them to hold all three squares in working memory for about two seconds.

The researchers found that as items were held in working memory, ensembles of neurons in the prefrontal cortex were active in brief bursts, and these bursts occurred only at recording sites where information about the squares was stored. The bursting was most frequent at the beginning of the task, when the information was encoded, and at the end, when the memories were read out.

The findings fit well with a model that Lundqvist had developed as an alternative to the model of sustained activity as the neural basis of working memory. According to the new model, information is stored in rapid changes in the synaptic strength of the neurons. The brief bursts serve to “imprint” information in the synapses of these neurons, and the bursts recur periodically to reinforce the information as long as it is needed.

The bursts create waves of coordinated activity in the gamma frequency band (45 to 100 hertz), like the ones that were observed in the data. These waves occur sporadically, with gaps between them, and each ensemble of neurons, encoding a specific item, produces a different burst of gamma waves. “It’s like a fingerprint,” Lundqvist said.

When this activity is averaged over several repeated trials, it appears as a smooth curve of continuous activity, just as the older models of working memory suggested. However, the MIT team’s new way of measuring and analyzing the data suggests that the full picture is much different.

“It’s like for years you’ve been listening to music from your neighbor’s apartment and all you can hear is the thumping bass part. You’re missing all the details, but if you get close enough to it you see there’s a lot more going on,” Miller said.

The findings suggest that it would be worthwhile to look for this kind of cyclical activity in other cognitive functions such as attention, the researchers say. Oscillations like those seen in this study may help the brain to package information and keep it separate so that different pieces of information don’t interfere with each other.

“Your brain operates in a very sporadic, periodic way, with lots of gaps in between the information the brain represents,” Miller said. “The mind is papering over all the gaps and bubbly dynamics and giving us an impression that things are happening in a smooth way, when our brain is actually working in a very periodic fashion, sending packets of information around.”

Robert Knight, a professor of psychology and neuroscience at the University of California at Berkeley, said the new study “provides compelling evidence that nonlinear oscillatory dynamics underlie prefrontal-dependent working memory capacity.”

“The work calls for a new view of the computational processes supporting goal-directed behavior,” adds Knight, who was not involved in the research. “The control processes supporting nonlinear dynamics are not understood, but this work provides a critical guidepost for future work aimed at understanding how the brain enables fluid cognition.”
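
The averaging point lends itself to a quick numerical illustration. The Python sketch below simulates sporadic gamma-band bursts on individual trials and then averages across trials; the burst durations, rates, and 60 Hz carrier are arbitrary assumptions chosen for illustration, not the study's data or analysis code.

```python
# Illustrative sketch: sporadic gamma bursts on single trials vs. their
# across-trial average. All parameters are arbitrary assumptions, not values
# or code from the Neuron study.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                      # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)  # a 2-second memory delay period
n_trials = 200
gamma_f = 60                   # carrier inside the 45-100 hertz gamma band

def one_trial():
    """Gamma activity on one trial: a few brief bursts at random times."""
    envelope = np.zeros_like(t)
    for start in rng.uniform(0, 1.8, size=rng.integers(2, 5)):
        envelope[(t >= start) & (t < start + 0.15)] = 1.0   # 150 ms bursts
    phase = rng.uniform(0, 2 * np.pi)
    return envelope * np.sin(2 * np.pi * gamma_f * t + phase)

power = np.array([one_trial() for _ in range(n_trials)]) ** 2

single_trial = power[0]             # bursty: mostly silent, briefly very active
trial_average = power.mean(axis=0)  # bursts wash out across trials

print("single trial, peak-to-mean power: ", round(single_trial.max() / single_trial.mean(), 1))
print("trial average, peak-to-mean power:", round(trial_average.max() / trial_average.mean(), 1))
```

A single trial is dominated by brief high-power events, while the across-trial average looks comparatively flat and sustained, which mirrors why the earlier averaged analyses suggested continuous delay-period activity.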
