The Allen Institute for Brain Science is a Seattle-based independent, nonprofit medical research organization dedicated to accelerating the understanding of how the human brain works. The Allen Institute promotes the advance of brain research by providing free data and tools to scientists worldwide, with the aim of catalyzing discovery across disparate research programs and disease areas. Started with $100 million in seed money from philanthropist Paul Allen in 2003, the Institute tackles projects at the leading edge of science: far-reaching projects at the intersection of biology and technology. The resulting data create free, publicly available resources that fuel discovery for countless researchers.
News Article | October 28, 2016
The first peer-to-peer leadership network focused on executives of innovation and technology growth companies is launching in the Seattle region. Supported by top Seattle-area technology leaders, the iInnovate Leadership Network is led by veteran tech entrepreneur, investor, and management consultant Randall “Joe” Ottinger. The idea for iInnovate came out of a study of the Puget Sound tech ecosystem that Ottinger conducted with students from the University of Washington’s Foster School of Business. iInnovate focuses on companies scaling their businesses, and complements incubators, accelerators, and angel networks, whose emphasis is primarily on start-ups. “I founded the iInnovate Leadership Network to support the unique challenges that CEOs, Presidents, Founders, and Chief Operating Officers of innovation and technology-enabled businesses face as they grow their companies. I have benefitted from being part of a CEO network where I could be completely open and honest about the challenges of my business. However, the network I joined did not benefit me much as a technology business leader, which led to the idea of iInnovate. I believe it will be a game changer for technology business leaders, and for communities that want to see new businesses grow and succeed,” said Ottinger. iInnovate is supported by a world-class advisory board of leaders in the Puget Sound region. “As a CEO, I have experienced directly the power of group peer mentoring. As an executive coach, I’ve witnessed the outcomes that are made possible by a more private effort. It’s an honor to help iInnovate build better leaders through peer mentoring and private coaching,” says WTIA CEO Michael Schutzler. To qualify for membership in iInnovate, members must be the CEO, founder, President, or COO of a technology-enabled company that has raised enough money to grow and scale.
“Serving as CEO of a private technology company can be one of the loneliest jobs around – iInnovate offers a unique opportunity for executives to confidentially share with and support each other,” added Wilson Sonsini Partner Craig Sherman. During the last 20 years, the Puget Sound region has grown from an aerospace innovation leader, thanks to Boeing, into a diverse technology community with next-generation technology leaders such as Microsoft and Amazon, and healthcare leaders such as the Fred Hutchinson Cancer Research Center, the Allen Institute for Brain Science, and the Institute for Systems Biology. In addition, bolstered by the University of Washington, Puget Sound has also built a top-tier startup community with accelerators such as Techstars and 9MileLabs and UW’s CoMotion incubator. For all of its success, however, fewer than 10% of the 14,000 technology companies in Washington State have more than 20 employees. While there have been notable homegrown successes such as F5 Networks, Zillow, and Expedia, which have gone public, there is an opportunity to help more tech companies grow and scale, which is the mission of iInnovate. “Seattle has so much talent, but it is very hard anywhere to build a great network to support the successful leap from great team and great product to great business. iInnovate aims to create that network in a highly effective and scalable way. I’m really pleased that Seattle is the birthplace of this great effort,” said Heather Redman, Chair-Elect of the Greater Seattle Chamber of Commerce. iInnovate helps members grow successful businesses through a combination of professionally moderated forum groups, coaching, and a business support network. Forum groups of up to 14 tech leaders are led by “iLeaders”: experienced tech business leaders, facilitators, and coaches who help create the environment for members to learn from each other and from knowledge experts.
Ottinger envisions the mission of iInnovate as broader than building a network for tech entrepreneurs, and believes that by helping entrepreneurial business leaders he can improve communities and the lives of stakeholders where they work and live. Ottinger expands on this topic in his recently published book about entrepreneurship and the innovation economy, iInnovate – A Guide for Engaging in the Innovation Economy. About iInnovate Leadership Network: Based in Seattle, Washington, iInnovate is a C-level peer-to-peer network for business innovators and tech entrepreneurs. Its purpose is to help members grow successful businesses through a combination of professionally moderated forum groups, education, and a business support network that creates opportunities for members to learn from each other, professional coaches, and knowledge experts. Founder and CEO Randy Ottinger is a technology entrepreneur, author, and thought leader in the areas of business leadership and innovation. He has advised CEOs and facilitated CEO groups for over 20 years. For more information, contact Melissa Milburn, melissa(at)iInnovateNetwork(dot)com, and visit http://iInnovateNetwork.com.
News Article | November 22, 2016
Three years ago the White House launched the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative to accelerate the development and application of novel technologies that will give us a better understanding of how brains work. Since then, dozens of technology firms, academic institutions, scientists and others have been developing new tools to give researchers unprecedented opportunities to explore how the brain processes, utilizes, stores and retrieves information. But without a coherent strategy to analyze, manage and understand the data generated by these new technologies, advancements in the field will be limited. This is precisely why Lawrence Berkeley National Laboratory (Berkeley Lab) computational neuroscientist Kristofer Bouchard assembled an international team of interdisciplinary researchers--including mathematicians, computer scientists, physicists and experimental and computational neuroscientists--to develop a plan for managing, analyzing and sharing neuroscience data. Their recommendations were published in a recent issue of Neuron. "The U.S. BRAIN Initiative is just one of many national and private neuroscience initiatives globally that are working toward accelerating our understanding of brains," says Bouchard. "Many of these efforts have given a lot of attention to the technological challenges of measuring and manipulating neural activity, while significantly less attention has been paid to the computing challenges associated with the vast amounts of data that these technologies are generating." To maximize the return on investments in global neuroscience initiatives, Bouchard and his colleagues argue that the international neuroscience community should have an integrated strategy for data management and analysis. This coordination would facilitate the reproducibility of workflows, which then allows researchers to build on each other's work.
As a first step, the authors recommend that researchers from all facets of neuroscience agree on standard descriptions and file formats for products derived from data analysis and simulations. After that, the researchers should work with computer scientists to develop hardware and software ecosystems for archiving and sharing data. The authors suggest an ecosystem similar to the one used by the physics community to share data collected by experiments like the Large Hadron Collider (LHC). In this case, each research group has its own local repository of physiological or simulation data that it has collected or generated. But eventually, all of this information should also be included in "meta-repositories" that are accessible to the greater neuroscience community. Files in the "meta-repositories" should be in a common format, and the repositories would ideally be hosted by an open-science supercomputing facility like the Department of Energy's (DOE's) National Energy Research Scientific Computing Center (NERSC), located at Berkeley Lab. Because novel technologies are producing unprecedented amounts of data, Bouchard and his colleagues also propose that neuroscientists collaborate with mathematicians to develop new approaches for data analysis and modify existing analysis tools to run on supercomputers. To maximize these collaborations, the analysis tools should be open-source and should integrate with brain-scale simulations, they emphasize. "These are the early days for neuroscience and big data, but we can see the challenges coming. This is not the first research community to face big data challenges; climate and high energy physics have been there and overcome many of the same issues," says Prabhat, who leads NERSC's Data & Analytics Services Group. Berkeley Lab is well positioned to help neuroscientists address these challenges because of its long tradition of interdisciplinary science, Prabhat adds.
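The idea of agreed-upon, self-describing file formats can be sketched in a few lines. The schema below is hypothetical (it is not NWB or any real community standard, and the field names are made up); the point is only that every producer writes, and every consumer validates, the same required metadata, so data can move between local repositories and shared "meta-repositories" without guesswork.

```python
import json

# Hypothetical minimal schema illustrating a shared, self-describing format
# for physiology data. A real community standard would define these fields.
REQUIRED_FIELDS = {"species", "recording_type", "sampling_rate_hz", "data"}

def write_recording(path, metadata, samples):
    """Bundle raw samples with the metadata the shared format mandates."""
    record = dict(metadata, data=samples)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    with open(path, "w") as f:
        json.dump(record, f)

def read_recording(path):
    """Load and re-validate a record, so every consumer sees the same shape."""
    with open(path) as f:
        record = json.load(f)
    assert REQUIRED_FIELDS <= record.keys()
    return record

meta = {"species": "mouse", "recording_type": "extracellular",
        "sampling_rate_hz": 30000}
write_recording("session001.json", meta, [0.1, 0.4, -0.2])
loaded = read_recording("session001.json")
print(loaded["species"])  # mouse
```

Because validation happens on both write and read, a file that passes in one lab's pipeline is guaranteed to parse identically in another's, which is the property that makes cross-group archiving and reanalysis practical.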
DOE facilities like NERSC and the Energy Sciences Network (ESnet) have worked closely with Lab computer scientists to help a range of science communities--from astronomy to battery research--collaborate and manage and archive their data. Berkeley Lab mathematicians have also helped researchers in various scientific disciplines develop new tools and methods for data analysis on supercomputers. "Harnessing the power of HPC resources will require neuroscientists to work closely with computer scientists and will take time, so we recommend rapid and sustained investment in this endeavor now," says Bouchard. "The insights generated from this effort will have high-payoff outcomes. They will support neuroscience efforts to reveal both the universal design features of a species' brain and help us understand what makes each individual unique." In addition to Bouchard and Prabhat, other authors on the paper include: James Aimone (Sandia National Laboratories); Miyoung Chun (Kavli Institute for Fundamental Neuroscience); Thomas Dean (Google Research); Michael Denker (Jülich Research Center); Markus Diesmann (Jülich Research Center/Aachen University); Loren Frank (Kavli Institute for Fundamental Neuroscience/Howard Hughes Medical Institute/UC San Francisco); Narayanan Kasthuri (University of Chicago/Argonne National Laboratory); Christof Koch (Allen Institute for Brain Science); Friedrich Sommer (UC Berkeley); Horst Simon, David Donofrio and Oliver Rübel (Berkeley Lab). Lawrence Berkeley National Laboratory addresses the world's most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab's scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy's Office of Science. For more, visit the Berkeley Lab website.
DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.
News Article | March 4, 2016
SAN FRANCISCO — Are humans living in a simulation? Is consciousness nothing more than the firing of neurons in the brain? Or is consciousness a distinct entity that permeates every speck of matter in the universe? Several experts grappled with those topics at a salon at the Victorian home of Susan MacTavish Best, a lifestyle guru who runs Living MacTavish, here on Feb. 16. The event was organized by "Closer to Truth," a public television series and online resource that features the world's leading thinkers exploring humanity's deepest questions. The answer to the question "what is consciousness" could have implications for the future of artificial intelligence (AI) and far-out concepts like mind uploading and virtual immortality, said Robert Lawrence Kuhn, the creator, writer and host of "Closer to Truth." Philosophers have put forward many notions of consciousness. The materialist notion holds that consciousness can be fully explained by the firing of neurons in the human brain, while mind-body dualism argues that the soul or mind is distinct from, and can potentially outlive, the body. Under the notion of panpsychism, a kind of re-boot of ancient animistic ideas, every speck of matter has a kind of proto-consciousness. When aggregated in particular ways, all this proto-consciousness turns into a sense of inner awareness. And other, Eastern philosophies have held that consciousness is the only real thing in the universe, Kuhn said. Neuroscientists and many philosophers have typically planted themselves firmly on the materialist side. But a growing number of scientists now believe that materialism cannot wholly explain the sense of "I am" that undergirds consciousness, Kuhn told the audience. One of those scientists is Christof Koch, the president and chief scientific officer of the Allen Institute for Brain Science in Seattle.
At the event, he described a relatively recent formulation of consciousness called the integrated information theory. The idea, put forward by University of Wisconsin-Madison neuroscientist and psychiatrist Giulio Tononi, argues that consciousness resides in an as-yet-unknown space in the universe. Integrated information theory measures consciousness by a metric, called phi, which essentially translates to how much power over itself a being or object has. "If a system has causal power upon itself, like the brain does, then it feels like something. If you have a lot of causal power upon yourself, then it feels like a lot to be you," Koch said. The new theory implies a radical disconnect between intelligence and consciousness, Koch said. AI, which may already be intelligent enough to beat the best human player of the Go board game, may nevertheless be basically subconscious because it is not able to act upon itself. One critic in the audience noted that there is currently no way to test this theory, and that integrated information theory fails some common-sense tests when trying to deduce what things are conscious. (A thermostat, for instance, may have some low-level consciousness by this metric.) But Koch said he was not troubled by this notion. Many objects people think of as conscious may not be, while some that are considered inanimate may in fact have much greater consciousness than previously thought, Koch said. If Koch and others are correct that strict materialism can't explain consciousness, it has implications for how sentient a computer might be: A supercomputer that re-creates the connectome, or all the myriad connections between neurons in the human brain, may be able to simulate all the behaviors of a human, but wouldn't be conscious. "You can simulate the mass of the black hole at the center of our galaxy, but space-time will never twist around the computer itself," Koch said.
"The supercomputer can simulate the effect of consciousness, but it isn't consciousness." Such simulated consciousness may be a kind of AI zombie, retaining all of the outward appearance of consciousness, but with no one home inside, Kuhn said. That implies that uploading one's mind to a computer in order to achieve virtual immortality may not work the way that many people anticipate, Kuhn added. To create truly conscious AI, researchers may need to develop technologies that can act upon themselves, perhaps more akin to neuromorphic computers, Koch said. (Such computers would operate without any pre-programmed code, instead somehow sensing and reacting to changes in their own physical states.) If humans do somehow succeed in creating superintelligent AI, how can they ensure the technology matures in a way that betters humanity, rather than leading to its demise? David Brin, a computer scientist and science fiction author, suggested that humans may need to look at their own lives to make sure AI doesn't make human existence worse, rather than better. For instance, humans have evolved a lengthy life span in part so that they can nurture children through their unprecedentedly long childhoods, Brin suggested. So perhaps the safest way to raise our AI children is to take a blank-slate "proto AI and put it in a helpless body, and then let it experience the world under guidance," Brin said. "If that's the method by which we get AI, then perhaps we'll get a soft landing, because we know how to do that." Original article on Live Science. Copyright 2016 LiveScience, a Purch company. All rights reserved.
News Article | August 23, 2016
Brain scientists Eric Jonas and Konrad Kording had grown skeptical. They weren’t convinced that the sophisticated, big data experiments of neuroscience were actually accomplishing anything. So they devised a devilish experiment. Instead of studying the brain of a person, or a mouse, or even a lowly worm, the two used advanced neuroscience methods to scrutinize the inner workings of another information processor — a computer chip. The unorthodox experimental subject, the MOS 6502, is the same chip that dazzled early tech junkies and kids alike in the 1980s by powering Donkey Kong, Space Invaders and Pitfall, as well as the Apple I and II computers. Of course, these experiments were rigged. The scientists already knew everything about how the 6502 works. “The beauty of the microprocessor is that unlike anything in biology, we understand it on every level,” says Jonas, of the University of California, Berkeley. Using a simulation of MOS 6502, Jonas and Kording, of Northwestern University in Chicago, studied the behavior of electricity-moving transistors, along with aspects of the chip’s connections and its output, to reveal how it handles information. Since they already knew what the outcomes should be, they were actually testing the methods. By the end of their experiments, Jonas and Kording had discovered almost nothing. Their results — or lack thereof — hit a nerve among neuroscientists. When Jonas presented the work last year at a Kavli Foundation workshop held at MIT, the response from the crowd was split. “A bunch of people said, ‘That’s awesome. I had that idea 10 years ago and never got around to doing it,’ ” Jonas says. “And a bunch of people were like, ‘That’s bullshit. You’re taking the analogy way too far. You’re attacking a straw man.’ ” On May 26, Jonas and Kording shared their results with a wider audience by posting a manuscript on the website bioRxiv.org. 
Bottom line of their report: Some of the best tools used by neuroscientists turned up plenty of data but failed to reveal anything meaningful about a relatively simple machine. The implications are profound — and discouraging. Current neuroscience methods might not be up for the job when it comes to truly understanding the brain. The paper "does a great job of articulating something that most thoughtful people believe but haven't said out loud," says neuroscientist Anthony Zador of Cold Spring Harbor Laboratory in New York. "Their point is that it's not clear that the current methods would ever allow us to understand how the brain computes in [a] fundamental way," he says. "And I don't necessarily disagree." Critics, however, contend that the analogy of the brain as a computer is flawed. Terrence Sejnowski of the Salk Institute for Biological Studies in La Jolla, Calif., for instance, calls the comparison "provocative, but misleading." The brain and the microprocessor are distinct in a huge number of ways. The brain can behave differently in different situations, a variability that adds an element of randomness to its machinations; computers aim to serve up the same response to the same situation every time. And compared with a microprocessor, the brain has an incredible amount of redundancy, with multiple circuits able to step in and compensate when others malfunction. In microprocessors, the software is distinct from the hardware — any number of programs can run on the same machine. "This is not the case in the brain, where the software is the hardware," Sejnowski says. And this hardware changes from minute to minute. Unlike the microprocessor's connections, brain circuits morph every time you learn something new. Synapses grow and connect nerve cells, storing new knowledge. Brains and microprocessors have very different origins, Sejnowski points out.
The human brain has been sculpted over millions of years of evolution to be incredibly specialized, able to spot an angry face at a glance, for instance, or remember a childhood song for years. The 6502, which debuted in 1975, was designed by a small team of humans, who engineered the chip to their exact specifications. The methods for understanding one shouldn't be expected to work for the other, Sejnowski says. Yet there are some undeniable similarities. Brains and microprocessors are both built from many small units: 86 billion neurons and 3,510 transistors, respectively. These units can be organized into specialized modules that allow both "organs" to flexibly move information around and hold memories. Those shared traits make the 6502 a legitimate and informative model organism, Jonas and Kording argue. In one experiment, they tested what would happen if they tried to break the 6502 bit by bit. Using a simulation to run their experiments, the researchers systematically knocked out every single transistor one at a time. They wanted to know which transistors were mission-critical to three important "behaviors": Donkey Kong, Space Invaders and Pitfall. The effort was akin to what neuroscientists call "lesion studies," which probe how the brain behaves when a certain area is damaged. The experiment netted 1,565 transistors that could be eliminated without any consequences to the games. But other transistors proved essential. Losing any one of 1,560 transistors made it impossible for the microprocessor to load any of the games. Those results are hard to parse into something meaningful. This type of experiment, like those in human and animal brains, is informative in some ways. But it doesn't constitute understanding, Jonas argues. The gulf between knowing that a particular broken transistor can stymie a game and actually understanding how that transistor helps compute is "incredibly vast," he says.
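The exhaustive knockout procedure described above can be sketched in a few lines. The "chip" below is a made-up boolean circuit, not the authors' actual 6502 simulator, and the dependency table is invented for illustration; what matters is the sweep itself: remove one component at a time, rerun every behavior, and classify the component as essential or dispensable.

```python
# Toy sketch of an exhaustive "lesion" sweep, in the spirit of Jonas and
# Kording's experiment. The dependencies are fictional stand-ins.

def chip_runs(behavior, lesioned):
    """Does a (fictional) behavior still work with one component removed?
    Each behavior depends on a hard-coded subset of components."""
    dependencies = {
        "donkey_kong":    {0, 1, 2, 5},
        "space_invaders": {0, 1, 3, 6},
        "pitfall":        {0, 1, 4, 7},
    }
    return lesioned not in dependencies[behavior]

n_components = 10
behaviors = ["donkey_kong", "space_invaders", "pitfall"]

essential, dispensable = set(), set()
for comp in range(n_components):          # knock out one component at a time
    if all(chip_runs(b, comp) for b in behaviors):
        dispensable.add(comp)             # no behavior was affected
    else:
        essential.add(comp)               # at least one "game" broke

print(sorted(essential))     # -> [0, 1, 2, 3, 4, 5, 6, 7]
print(sorted(dispensable))   # -> [8, 9]
```

The sweep tells you *which* components matter to which behavior, but, as Jonas argues, nothing about *how* they compute: here components 0 and 1 are shared infrastructure and 2 through 7 are game-specific, yet the output alone cannot distinguish those roles.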
The transistor “lesion” experiment “gets at the core problem that we are struggling with in neuroscience,” Zador says. “Although we can attribute different brain functions to different brain areas, we don’t actually understand how the brain computes.” Other experiments reported in the study turned up red herrings — results that looked similar to potentially useful brain data, but were ultimately meaningless. Jonas and Kording looked at the average activity of groups of nearby transistors to assess patterns about how the microprocessor works. Neuroscientists do something similar when they analyze electrical patterns of groups of neurons. In this task, the microprocessor delivered some good-looking data. Oscillations of activity rippled over the microprocessor in patterns that seemed similar to those of the brain. Unfortunately, those signals are irrelevant to how the computer chip actually operates. Data from other experiments revealed a few finds, including that the microprocessor contains a clock signal and that it switches between reading and writing memory. Yet these are not key insights into how the chip actually handles information, Jonas and Kording write in their paper. It’s not that analogous experiments on the brain are useless, Jonas says. But he hopes that these examples reveal how big a challenge it will be to move from experimental results to a true understanding. “We really need to be honest about what we’re going to pull out here.” Jonas says the results should caution against collecting big datasets in the absence of theories that can help guide experiments and that can be verified or refuted. For the microprocessor, the researchers had a lot of data, yet still couldn’t separate the informative wheat from the distracting chaff. The results “suggest that we need to try and push a little bit more toward testable theories,” he says. That’s not to say that big datasets are useless, he is quick to point out. Zador agrees.
Some giant collections of neural information will probably turn out to be wastes of time. But “the right dataset will be useful,” he says. And the right bit of data might hold the key that propels neuroscientists forward. Despite the pessimistic overtones in the paper, Christof Koch of the Allen Institute for Brain Science in Seattle is a fan. “You got to love it,” Koch says. At its heart, the experiment on the 6502 “sends a good message of humility,” he adds. “It will take a lot of hard work by a lot of very clever people for many years to understand the brain.” But he says that tenacity, especially in the face of such a formidable challenge, will eventually lead to clarity. Zador recently opened a fortune cookie that read, “If the brain were so simple that we could understand it, we would be so simple that we couldn’t.” That quote, from IBM researcher Emerson Pugh, throws down the challenge, Zador says. “The alternative is that we will never understand it,” he says. “I just can’t believe that.” This story appears in the September 3, 2016, issue of Science News with the headline, "Gaming the brain: Computer experiment throws shade on neuroscience methods."
Mudrik L.,California Institute of Technology |
Faivre N.,California Institute of Technology |
Koch C.,California Institute of Technology |
Koch C.,Allen Institute for Brain Science
Trends in Cognitive Sciences | Year: 2014
Information integration and consciousness are closely related, if not interdependent. But, what exactly is the nature of their relation? Which forms of integration require consciousness? Here, we examine the recent experimental literature with respect to perceptual and cognitive integration of spatiotemporal, multisensory, semantic, and novel information. We suggest that, whereas some integrative processes can occur without awareness, their scope is limited to smaller integration windows, to simpler associations, or to ones that were previously acquired consciously. This challenges previous claims that consciousness of some content is necessary for its integration; yet it also suggests that consciousness holds an enabling role in establishing integrative mechanisms that can later operate unconsciously, and in allowing wider-range integration, over bigger semantic, spatiotemporal, and sensory integration windows. © 2014 Elsevier Ltd.
Strange B.A.,Center for Biomedical Technology |
Strange B.A.,Alzheimer's Disease Research Center |
Witter M.P.,Norwegian University of Science and Technology |
Lein E.S.,Allen Institute for Brain Science |
Moser E.I.,Norwegian University of Science and Technology
Nature Reviews Neuroscience | Year: 2014
The precise functional role of the hippocampus remains a topic of much debate. The dominant view is that the dorsal (or posterior) hippocampus is implicated in memory and spatial navigation and the ventral (or anterior) hippocampus mediates anxiety-related behaviours. However, this 'dichotomy view' may need revision. Gene expression studies demonstrate multiple functional domains along the hippocampal long axis, which often exhibit sharply demarcated borders. By contrast, anatomical studies and electrophysiological recordings in rodents suggest that the long axis is organized along a gradient. Together, these observations suggest a model in which functional long-axis gradients are superimposed on discrete functional domains. This model provides a potential framework to explain and test the multiple functions ascribed to the hippocampus. © 2014 Macmillan Publishers Limited. All rights reserved.
Goyal M.S.,University of Washington |
Hawrylycz M.,Allen Institute for Brain Science |
Miller J.A.,Allen Institute for Brain Science |
Snyder A.Z.,University of Washington |
Raichle M.E.,University of Washington
Cell Metabolism | Year: 2014
Aerobic glycolysis (AG; i.e., nonoxidative metabolism of glucose despite the presence of abundant oxygen) accounts for 10%-12% of glucose used by the adult human brain. AG varies regionally in the resting state. Brain AG may support synaptic growth and remodeling; however, data supporting this hypothesis are sparse. Here, we report on investigations on the role of AG in the human brain. Meta-analysis of prior brain glucose and oxygen metabolism studies demonstrates that AG increases during childhood, precisely when synaptic growth rates are highest. In resting adult humans, AG correlates with the persistence of gene expression typical of infancy (transcriptional neoteny). In brain regions with the highest AG, we find increased gene expression related to synapse formation and growth. In contrast, regions high in oxidative glucose metabolism express genes related to mitochondria and synaptic transmission. Our results suggest that brain AG supports developmental processes, particularly those required for synapse formation and growth. © 2014 Elsevier Inc.
Anastassiou C.A.,Allen Institute for Brain Science |
Koch C.,Allen Institute for Brain Science
Current Opinion in Neurobiology | Year: 2015
There has been a revived interest in the impact of electric fields on neurons and networks. Here, we discuss recent advances in our understanding of how endogenous and externally imposed electric fields impact brain function at different spatial (from synapses to single neurons and neural networks) and temporal scales (from milliseconds to seconds). How such ephaptic effects are mediated and manifested in the brain remains a mystery. We argue that it is both possible (based on available technologies) and worthwhile to vigorously pursue such research as it has significant implications on our understanding of brain processing and for translational neuroscience. © 2014 Elsevier Ltd.
Hunnicutt B.J.,Allen Institute for Brain Science
Nature neuroscience | Year: 2014
The thalamus relays sensori-motor information to the cortex and is an integral part of cortical executive functions. The precise distribution of thalamic projections to the cortex is poorly characterized, particularly in mouse. We employed a systematic, high-throughput viral approach to visualize thalamocortical axons with high sensitivity. We then developed algorithms to directly compare injection and projection information across animals. By tiling the mouse thalamus with 254 overlapping injections, we constructed a comprehensive map of thalamocortical projections. We determined the projection origins of specific cortical subregions and verified that the characterized projections formed functional synapses using optogenetic approaches. As an important application, we determined the optimal stereotaxic coordinates for targeting specific cortical subregions and expanded these analyses to localize cortical layer-preferential projections. This data set will serve as a foundation for functional investigations of thalamocortical circuits. Our approach and algorithms also provide an example for analyzing the projection patterns of other brain regions.
Mizuseki K.,New York University |
Mizuseki K.,Allen Institute for Brain Science |
Buzsaki G.,New York University
Cell Reports | Year: 2013
Despite the importance of the discharge frequency in neuronal communication, little is known about the firing-rate patterns of cortical populations. Using large-scale recordings from multiple layers of the entorhinal-hippocampal loop, we found that the firing rates of principal neurons showed a lognormal-like distribution in all brain states. Mean and peak rates within place fields of hippocampal neurons were also strongly skewed. Importantly, firing rates of the same neurons showed reliable correlations in different brain states and testing situations, as well as across familiar and novel environments. The fraction of neurons that participated in population oscillations displayed a lognormal pattern. Such skewed firing rates of individual neurons may be due to a skewed distribution of synaptic weights, which is supported by our observation of a lognormal distribution of the efficacy of spike transfer from principal neurons to interneurons. The persistent skewed distribution of firing rates implies that a preconfigured, highly active minority dominates information transmission in cortical networks.
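The hallmark of the lognormal firing-rate distribution described in this abstract is easy to see numerically. The sketch below uses synthetic draws (the parameters are arbitrary, not fitted to the paper's recordings): in a right-skewed lognormal population the mean sits well above the median, and a small fraction of highly active units carries a disproportionate share of the total spiking.

```python
import random

# Illustrative synthetic example (not the paper's data): draw "firing rates"
# from a lognormal distribution and measure the skew that lets a highly
# active minority dominate activity.
random.seed(42)
rates = [random.lognormvariate(0.0, 1.5) for _ in range(10_000)]

rates.sort()
median = rates[len(rates) // 2]
mean = sum(rates) / len(rates)

top_1pct = rates[-100:]                  # the most active 1% of "neurons"
share = sum(top_1pct) / sum(rates)       # their share of all spikes

print(f"mean/median ratio: {mean / median:.1f}")   # well above 1 => right-skewed
print(f"activity carried by top 1%: {share:.0%}")
```

For a lognormal with these parameters the mean exceeds the median several-fold, and the top percentile of units accounts for a large slice of total activity, which is the quantitative sense in which a "preconfigured, highly active minority" can dominate transmission.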