News Article | May 31, 2017
Site: www.sciencedaily.com

Scanning your brain to decode the contents of your mind has been a subject of intense research interest for some time. As studies have progressed, scientists have gradually been able to interpret what test subjects see, remember, imagine, and even dream. There have been significant limitations, however, beginning with the necessity to extensively catalog each subject's unique brain patterns, which are then matched with a small number of pre-programmed images. These procedures require that subjects undergo lengthy and expensive fMRI testing. Now a team of researchers in Kyoto has used neural network-based artificial intelligence to decode and predict what a person is seeing or imagining, referring to a significantly larger catalog of images. Their results are reported in Nature Communications.

"When we gaze at an object, our brains process these patterns hierarchically, starting with the simplest and progressing to more complex features," explains team leader Yukiyasu Kamitani of Kyoto University. "The AI we used works on the same principle. Named 'Deep Neural Network', or DNN, it was trained by a group now at Google."

The team from Kyoto University and ATR (Advanced Telecommunications Research) Computational Neuroscience Laboratories discovered that brain activity patterns can be decoded, or translated, into signal patterns of simulated neurons in the DNN when both are shown the same image. Additionally, the researchers found that lower and higher visual areas in the brain were better at decoding the corresponding lower and higher layers of the DNN, revealing a homology between the human brain and the neural network.

"We tested whether a DNN signal pattern decoded from brain activity can be used to identify seen or imagined objects from arbitrary categories," explains Kamitani. "The decoder takes neural network patterns and compares these with image data from a large database. Sure enough, the decoder could identify target objects with high probability."
As brain decoding and AI development advance, Kamitani hopes to improve the image identification accuracy of their technique. He concludes, "Bringing AI research and brain science closer together could open the door to new brain-machine interfaces, perhaps even bringing us closer to understanding consciousness itself."
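The identification scheme Kamitani describes — decode a DNN feature pattern from brain activity, then match it against feature vectors computed for a large image catalog — can be sketched in a few lines. In this toy NumPy sketch, all dimensions, the linear "brain", and the ridge-regression decoder are invented stand-ins, not the study's pipeline; it only illustrates the decode-then-correlate idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the study's): 200 voxels, a 50-unit DNN
# feature layer, a catalog of 40 candidate images.
n_vox, n_feat, n_img, n_train = 200, 50, 40, 200

# Each candidate image is represented by its DNN-layer feature vector.
image_feats = rng.standard_normal((n_img, n_feat))

# Toy generative assumption: voxel responses are a noisy linear mixture
# of the DNN features evoked by the viewed image.
W_brain = rng.standard_normal((n_feat, n_vox))
train_ids = rng.integers(0, n_img, n_train)
X = image_feats[train_ids] @ W_brain + 0.1 * rng.standard_normal((n_train, n_vox))
Y = image_feats[train_ids]

# Ridge-regression decoder mapping voxel patterns to DNN features.
lam = 1.0
B = np.linalg.solve(X.T @ X + lam * np.eye(n_vox), X.T @ Y)

# Identification: decode the features for a new scan, then pick the
# catalog image whose feature vector correlates best with the decoding.
test_id = 7
x_new = image_feats[test_id] @ W_brain + 0.1 * rng.standard_normal(n_vox)
decoded = x_new @ B
sims = [np.corrcoef(decoded, f)[0, 1] for f in image_feats]
print("identified image:", int(np.argmax(sims)), "| true image:", test_id)
```

Because the decoder outputs a point in feature space rather than a label, the catalog can contain categories never used in training — which is the property the quote highlights.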


News Article | May 31, 2017
Site: www.eurekalert.org

The paper "Generic decoding of seen and imagined objects using hierarchical visual features" appeared 22 May 2017 in Nature Communications, doi: 10.1038/ncomms15037.


Tee K.P.,National University of Singapore | Tee K.P.,Institute for Infocomm Research | Franklin D.W.,University of Cambridge | Franklin D.W.,Japan National Institute of Information and Communications Technology | And 5 more authors.
Biological Cybernetics | Year: 2010

This article examines the validity of a model to explain how humans learn to perform movements in environments with novel dynamics, including unstable dynamics typical of tool use. In this model, a simple rule specifies how the activation of each muscle is adapted from one movement to the next. Simulations of multijoint arm movements with a neuromuscular plant that incorporates neural delays, reflexes, and signal-dependent noise demonstrate that the controller is able to compensate for changing internal or environmental dynamics and noise properties. The computational model adapts by learning both the appropriate forces and required limb impedance to compensate precisely for forces and instabilities in arbitrary directions with patterns similar to those observed in motor learning experiments. It learns to regulate reciprocal activation and co-activation in a redundant muscle system during repeated movements without requiring any explicit transformation from hand to muscle space. Independent error-driven change in the activation of each muscle results in a coordinated control of the redundant muscle system and in a behavior that reduces instability, systematic error, and energy. © 2009 Springer-Verlag.
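The abstract's core idea — each muscle independently adjusts its activation from one movement to the next based on its own error, which jointly produces net-force compensation and co-activation — can be illustrated with a toy antagonist pair. The sketch below uses an invented V-shaped update rule (asymmetric gains plus a small decay) on a static one-dimensional "plant"; it illustrates the error-driven principle only, not the paper's neuromuscular model or equations:

```python
# Toy antagonist pair controlling one degree of freedom: displacement is
# proportional to net muscle force plus a constant external bias force.
f_ext = -0.5                   # external force pushing in -x
a, b, g = 0.5, 0.25, 0.02      # gains: a > b yields co-activation; g decays it
u1, u2 = 0.0, 0.0              # feedforward activations of the two muscles

def relu(x):
    return max(x, 0.0)

for trial in range(200):
    x_end = (u1 - u2) + f_ext  # end-point displacement (target is 0)
    e = -x_end                 # movement error observed after the trial
    # V-shaped rule, applied independently per muscle: strong increase when
    # the error is in the muscle's pulling direction, weak increase
    # otherwise, plus a small decay that prunes surplus co-activation.
    u1 = max(u1 + a * relu(e) + b * relu(-e) - g * u1, 0.0)
    u2 = max(u2 + a * relu(-e) + b * relu(e) - g * u2, 0.0)

print(f"u1={u1:.2f}  u2={u2:.2f}  residual error={-((u1 - u2) + f_ext):.3f}")
```

After repeated trials the difference u1 − u2 cancels the external force while both activations stay positive, i.e. the pair co-contracts — the same qualitative outcome the abstract reports for the full model.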


News Article | December 19, 2016
Site: news.yahoo.com

There may be a way to tap into people's brain activity to boost their confidence, a new study suggests. In the study, the researchers used a technique called decoded neurofeedback, which involves scanning people's brains to monitor their brain activity, and using artificial intelligence to detect activity patterns that are linked with feelings of confidence. Then, whenever these patterns are detected, people are given a reward — in this case, participants were given a small amount of money. The researchers found that by doing this, they could boost participants' confidence when they were doing a task in a laboratory, regardless of how well they actually performed the task. What's more, the same technique could be used to decrease confidence, if people were rewarded when their brain activity showed a pattern that was linked to low confidence, according to the researchers. The finding "adds to the growing body of evidence on how confidence is generated in the brain," the researchers wrote in the Dec. 15 issue of the journal Nature Communications. The same group of researchers recently published a study suggesting that decoded neurofeedback could be used to erase frightening memories. In the new study, 17 participants had their brains scanned while they performed a perceptual task — they had to determine whether dots on a screen appeared to be moving to the left or right. Then they were asked to rate how confident they were in their choice. The researchers used their technique to "decode" brain activity patterns, to find patterns linked with the times that participants said that they were very confident in their choice. The researchers wanted to know, "How is confidence represented in the brain?" study researcher Mitsuo Kawato, director of the Computational Neuroscience Laboratories at ATR, a research institute in Kyoto, Japan, said in a statement.
To find out, the researchers looked for specific patterns in the brain "that could reliably tell us when a participant was in a high or low confidence state," he said. And then they used this information "to make the occurrence of a confident state more likely to happen in the future," Kawato said. To do this, the researchers had participants undergo a "training" session. During the session, the participants lay in a brain scanner while staring at an image of a white disc. They were instructed to "regulate" their brain activity in order to make the disc image grow larger, but they weren't given any tips for how this might be accomplished. When asked later what they were thinking about in the scanner, participants said things like "I was counting," "I was focusing on the disc itself," or "I was thinking about food." Unbeknownst to the participants, the disc image grew bigger whenever their brain activity patterns corresponded to those that were observed when they were in their high-confidence states. After the training session, they were given a small, monetary reward that was based on how large they said the disc grew. "By continuously pairing the occurrence of the highly confident state with a reward — a small amount of money — in real time," we were able to make participants more confident on the perceptual task, said study researcher Aurelio Cortese, also of ATR. The researchers saw these changes in confidence even though participants' level of accuracy on the task didn't change. The researchers noted that their study was small, and that they aren't exactly sure how the technique may be working, on a psychological level, to boost people's confidence. But the researchers hope that the technique might one day be used to treat people with certain psychiatric conditions that are associated with changes in confidence.
For example, a person's depression can be made worse if the individual thinks negatively about his or her capacities, the researchers said. Charan Ranganath, a professor at the Center for Neuroscience at the University of California, Davis, said that the findings were interesting, but they do not necessarily mean that the study participants would experience a confidence boost in other areas of their life. "There's a limit to what you can infer from this study," said Ranganath, who was not involved in the study. "You can train people to be more confident in making left-right decisions, but that doesn't generalize, necessarily, to any other decision in life," Ranganath said. So much more research would be needed to see whether the technique could help people in real-world situations, such as public speaking or social events, Ranganath said. Ranganath noted that even talk therapy treatments that appear to work in a clinic may not work in the real world, because of the way our brains are wired to learn — that is, we learn things in a way that is specific to a certain situation. "It's hard to work against that and get people to learn something that fundamentally generalizes [to other situations], because our brains don't want to do that," Ranganath said. But the study does show that a person's confidence level is not necessarily related to anything objective. "It's kind of another piece of evidence to say, when people are confident, it doesn't necessarily mean they're right or wrong," Ranganath said. "It's important for people to understand that somebody who's confidently saying something doesn't necessarily know more than somebody who's not confidently saying it."
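The training loop described above — classify each scan as a high- or low-confidence pattern, then feed the likelihood back to the participant as a disc size — can be sketched with simulated data. Everything here (voxel count, the two-cluster "brain states", the logistic decoder fit by gradient descent) is invented for illustration; the study's actual decoder and fMRI preprocessing are far more involved:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox = 50

# Toy multivoxel patterns: high- and low-confidence brain states simulated
# as two noisy clusters around invented "template" patterns.
mu_hi = rng.standard_normal(n_vox)
mu_lo = rng.standard_normal(n_vox)
X = np.vstack([mu_hi + 0.8 * rng.standard_normal((100, n_vox)),
               mu_lo + 0.8 * rng.standard_normal((100, n_vox))])
y = np.array([1] * 100 + [0] * 100)

# Logistic-regression decoder for P(high confidence), fit by gradient descent.
w, b = np.zeros(n_vox), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

def disc_size(pattern):
    """Feedback shown to the participant: the decoder's likelihood of the
    high-confidence state, mapped directly to disc radius (0 to 1)."""
    return float(1.0 / (1.0 + np.exp(-(pattern @ w + b))))

# A scan resembling the high-confidence template yields a large disc, and
# hence a larger monetary reward; the participant never sees the decoder.
print(f"high-state disc: {disc_size(mu_hi):.2f}  low-state disc: {disc_size(mu_lo):.2f}")
```

The key design point the article emphasizes survives even in this sketch: the participant only ever sees the disc, so the pairing of brain state and reward is implicit.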


News Article | December 15, 2016
Site: www.chromatographytechniques.com

Is it possible to directly boost one's own confidence by training the brain? Researchers have discovered a way to implicitly amplify confidence in the brain by combining the use of artificial intelligence and brain imaging technology. The study, published in the journal Nature Communications, could lead to important applications in clinical, medical and social settings. Self-confidence is an essential quality to succeed in such milieus as business environments, politics and many others. Furthermore, confidence is an important consideration in mental illnesses such as depression and Alzheimer's disease, in which the underlying condition is often complicated by patients thinking negatively of their own capacities. Recent advances in neuroscience have highlighted the plasticity of the brain, indicating it is malleable even later in life. The international team developed a state-of-the-art method to read and then amplify a high-confidence state using a new technique called decoded neurofeedback. This technique used brain scanning to monitor and detect the occurrence of specific complex patterns of activity corresponding to high confidence states while participants performed a simple perceptual task. In the training sessions, whenever the pattern of high confidence was detected, participants received a small monetary reward. This experiment allowed researchers to directly boost subject confidence unconsciously—participants were unaware that such manipulation took place. Importantly, the effect could be reversed, as confidence could also be decreased. "How is confidence represented in the brain? Although this is a very complex question, we used approaches drawn from artificial intelligence (AI) to find specific patterns in the brain that could reliably tell us when a participant was in a high- or low-confidence state. 
The core challenge was then to use this information in real time to make the occurrence of a confident state more likely to happen in the future," explained Mitsuo Kawato, director of the Computational Neuroscience Laboratories at ATR, Kyoto, and one of the authors of the study. "Surprisingly, by continuously pairing the occurrence of the highly confident state with a reward—a small amount of money—in real time, we were able to do just that. When participants had to rate their confidence in the perceptual task at the end of the training, they were consistently more confident," said Aurelio Cortese, of the Advanced Telecommunications Research Institute International, Kyoto, who led the research. "Crucially, in this study, confidence was measured quantitatively via rigorous psychophysics, making sure the effects were not just a change of mood or simple reporting strategy. Such changes in confidence took place even though the participants performed the relevant task at the same performance level," added Hakwan Lau, associate professor in the UCLA Psychology Department, and senior author on the study. The sample size was relatively small (17 people), but is in line with basic science investigations of similar kinds. The team is currently working on the development of potential new clinical treatments for patients with various psychiatric conditions. The team, in a different study led by co-author Ai Koizumi, has indeed already discovered a new way to unconsciously erase fear memories, reprogramming the brain to overcome fear. The study was published in the inaugural edition of Nature Human Behaviour on November 21st, and opens the potential for radical new treatments of conditions such as post-traumatic stress disorder (PTSD) and phobias.


News Article | December 15, 2016
Site: www.eurekalert.org

New breakthrough in neuroscience: Self-confidence can be directly amplified in the brain

Self-confidence is an essential quality to succeed in the world, such as in business environments, politics or many other aspects of our everyday life. Furthermore, confidence is an important aspect in mental illnesses such as depression and Alzheimer's disease, where the condition is often further complicated by patients thinking negatively of their own capacities. Recent advances in neuroscience have highlighted the plasticity of the brain, indicating it is malleable even later in life. The international team developed a state-of-the-art method to read and then amplify a high confidence state using a new technique called 'Decoded Neurofeedback'. This technique used brain scanning to monitor and detect the occurrence of specific complex patterns of activity corresponding to high confidence states, while participants performed a simple perceptual task. In the training sessions, whenever the pattern of high confidence was detected, participants received a small monetary reward. This experiment allowed researchers to directly boost one's own confidence unconsciously, i.e., participants were unaware that such manipulation took place. Importantly, the effect could be reversed, as confidence could also be decreased. Dr. Mitsuo Kawato, Director of the Computational Neuroscience Laboratories at ATR, Kyoto, and one of the authors on the study, has pioneered this state-of-the-art technology. He explained the process: "How is confidence represented in the brain? Although this is a very complex question, we used approaches drawn from artificial intelligence (AI) to find specific patterns in the brain that could reliably tell us when a participant was in a high or low confidence state. The core challenge was then to use this information in real time to make the occurrence of a confident state more likely to happen in the future". Dr. 
Aurelio Cortese, of the Advanced Telecommunications Research Institute International, Kyoto, led the research: "Surprisingly, by continuously pairing the occurrence of the highly confident state with a reward - a small amount of money - in real time, we were able to do just that: when participants had to rate their confidence in the perceptual task at the end of the training, they were consistently more confident". Dr. Hakwan Lau, Associate Professor in the UCLA Psychology Department, was the senior author on the study and an expert in confidence and metacognition: "Crucially, in this study confidence was measured quantitatively via rigorous psychophysics, making sure the effects were not just a change of mood or simple reporting strategy. Such changes in confidence took place even though the participants performed the relevant task at the same performance level". The sample size was relatively small (17 people), but is in line with basic science investigations of similar kinds. The team is currently working on the development of potential new clinical treatments for patients with various psychiatric conditions. NOTE: The team, in a different study led by co-author Dr. Ai Koizumi, has indeed already discovered a new way to unconsciously erase fear memories, reprogramming the brain to overcome fear. The study was recently published in the inaugural edition of Nature Human Behaviour on November 21st, and opens the potential for radical new treatments of conditions such as post-traumatic stress disorder (PTSD) and phobias.


Callan D.E.,Computational Neuroscience Laboratories | Schweighofer N.,University of Southern California
Human Brain Mapping | Year: 2010

Spaced presentations of to-be-learned items during encoding lead to superior long-term retention over massed presentations. Despite over a century of research, the psychological and neural basis of this spacing effect is still under investigation. To test the hypotheses that the spacing effect results either from reduction in encoding-related verbal maintenance rehearsal in massed relative to spaced presentations (deficient processing hypothesis) or from greater encoding-related elaborative rehearsal of relational information in spaced relative to massed presentations (encoding variability hypothesis), we designed a vocabulary learning experiment in which subjects encoded paired-associates, each composed of a known word paired with a novel word, in both spaced and massed conditions during functional magnetic resonance imaging. As expected, recall performance in delayed cued-recall tests was significantly better for spaced over massed conditions. Analysis of brain activity during encoding revealed that the left frontal operculum, known to be involved in encoding via verbal maintenance rehearsal, was associated with greater performance-related increased activity in the spaced relative to massed condition. Consistent with the deficient processing hypothesis, a significant decrease in activity with subsequent episodes of presentation was found in the frontal operculum for the massed but not the spaced condition. Our results suggest that the spacing effect is mediated by activity in the frontal operculum, presumably by encoding-related increased verbal maintenance rehearsal, which facilitates binding of phonological and word level verbal information for transfer into long-term memory. © 2009 Wiley-Liss, Inc.


Hosoya H.,Computational Neuroscience Laboratories | Hosoya H.,Japan Science and Technology Agency | Hyvarinen A.,Aalto University
Journal of Neuroscience | Year: 2015

Previous theoretical and experimental studies have demonstrated tight relationships between natural image statistics and neural representations in V1. In particular, receptive field properties similar to simple and complex cells have been shown to be inferable from sparse coding of natural images. However, whether such a relationship exists in higher areas has not been clarified. To address this question for V2, we trained a sparse coding model that took as input the output of a fixed V1-like model, which was in its turn fed a large variety of natural image patches as input. After the training, the model exhibited response properties that were qualitatively and quantitatively compatible with three major neurophysiological results on macaque V2, as follows: (1) homogeneous and heterogeneous integration of local orientations (Anzai et al., 2007); (2) a wide range of angle selectivities with biased sensitivities to one component orientation (Ito and Komatsu, 2004); and (3) exclusive length and width suppression (Schmid et al., 2014). The reproducibility was stable across variations in several model parameters. Further, a formal classification of the internal representations of the model units offered detailed interpretations of the experimental data, emphasizing that a novel type of model cell that could detect a combination of local orientations converging toward a single spatial point (potentially related to corner-like features) played an important role in reproducing tuning properties compatible with V2. These results are consistent with the idea that V2 uses a sparse code of natural images. © 2015, the authors.
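The two-stage architecture the abstract describes — a fixed V1-like front end whose outputs feed a sparse coding stage — can be sketched with stand-in data. In this illustrative NumPy sketch the "V1" stage is just rectified random filters, sparse codes are inferred by ISTA (iterative soft-thresholding), and the dictionary is updated by a simple gradient step; the paper's actual model, training images, and parameters differ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stage 1, the fixed "V1-like" front end: rectified responses of random
# oriented filters stand in for simple/complex-cell outputs (a real model
# would apply Gabor-like filters to natural image patches).
n_patches, patch_dim, n_v1, n_v2 = 500, 64, 32, 48
patches = rng.standard_normal((n_patches, patch_dim))
v1_filters = rng.standard_normal((n_v1, patch_dim))
v1_out = np.maximum(patches @ v1_filters.T, 0.0)          # rectification
v1_out /= np.linalg.norm(v1_out, axis=1, keepdims=True) + 1e-9

# Stage 2: sparse coding of the V1 outputs. Codes are inferred by ISTA;
# the dictionary takes a gradient step on the reconstruction error and
# is renormalized after every update.
D = rng.standard_normal((n_v2, n_v1))
D /= np.linalg.norm(D, axis=1, keepdims=True)

def ista(x, D, lam=0.2, n_iter=100, step=0.1):
    a = np.zeros(D.shape[0])
    for _ in range(n_iter):
        a = a - step * (D @ (D.T @ a - x))                # descend recon error
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # shrink
    return a

lr = 0.05
for epoch in range(3):
    for x in v1_out[:200]:
        a = ista(x, D)
        D += lr * np.outer(a, x - D.T @ a)                # reduce residual
        D /= np.linalg.norm(D, axis=1, keepdims=True)

codes = np.array([ista(x, D) for x in v1_out[:50]])
print(f"zero coefficients: {np.mean(codes == 0):.0%}")
```

The sparsity penalty forces most coefficients to exactly zero, so each "V2 unit" responds to a specific combination of V1 outputs — the property the paper analyzes when comparing model units to macaque V2 tuning.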


Kadiallah A.,Imperial College London | Liaw G.,National Institute of Information and Communication Technology | Kawato M.,Computational Neuroscience Laboratories | Franklin D.W.,National Institute of Information and Communication Technology | And 3 more authors.
Journal of Neurophysiology | Year: 2011

Humans are able to learn tool-handling tasks, such as carving, demonstrating their competency to make movements in unstable environments with varied directions. When faced with a single direction of instability, humans learn to selectively co-contract their arm muscles tuning the mechanical stiffness of the limb end point to stabilize movements. This study examines, for the first time, subjects simultaneously adapting to two distinct directions of instability, a situation that may typically occur when using tools. Subjects learned to perform reaching movements in two directions, each of which had lateral instability requiring control of impedance. The subjects were able to adapt to these unstable interactions and switch between movements in the two directions; they did so by learning to selectively control the end-point stiffness counteracting the environmental instability without superfluous stiffness in other directions. This finding demonstrates that the central nervous system can simultaneously tune the mechanical impedance of the limbs to multiple movements by learning movement-specific solutions. Furthermore, it suggests that the impedance controller learns as a function of the state of the arm rather than a general strategy. © 2011 the American Physiological Society.


Kodl J.,Imperial College London | Ganesh G.,Japan National Institute of Information and Communications Technology | Ganesh G.,Computational Neuroscience Laboratories | Burdet E.,Imperial College London
PLoS ONE | Year: 2011

Traditionally, motor studies have assumed that motor tasks are executed according to a single plan characterized by regular patterns, which corresponds to the minimum of a cost function in extrinsic or intrinsic coordinates. However, the novel via-point task examined in this paper shows distinct planning and execution stages in motion production and demonstrates that subjects randomly select from several available motor plans to perform a task. Examination of the effect of pre-training and via-point orientation on subject behavior reveals that the selection of a plan depends on previous movements and is affected by constraints both intrinsic and extrinsic to the body. These results provide new insights into the hierarchical structure of motion planning in humans, which can only be explained if the current models of motor control integrate an explicit plan selection stage. © 2011 Kodl et al.
