MIT Media Laboratory
News Article | October 23, 2015
How is the mind formed, and what does it mean to be human? These are the questions that intrigue Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. To answer them, we will need a much deeper understanding of how the brain works, according to Boyden, who leads the synthetic neurobiology research group at the MIT Media Laboratory and the McGovern Institute for Brain Research at MIT. “If we understand how the circuitry of the brain computes things like thoughts and emotions, it could help us to know more about what it means to be human,” Boyden says.

To this end, he and his group are developing techniques that allow neuroscience researchers to study the brain, and how it operates, at a more fundamental level of detail. “Perhaps someday in the future, if we understand how the brain works, we would understand the nature of irrationality and strife, and other aspects of the human condition,” Boyden says. “It could help humanity change for the better.”

Boyden has pioneered the development of technologies such as optogenetic tools, in which light-sensitive proteins from algae and bacteria are added to neurons, allowing the neurons to be activated or silenced with pulses of visible light. The tools are now widely used by researchers attempting to understand which specific kinds of neurons in the brain are responsible for different aspects of behavior and emotion, such as aggression. Last year his team announced it had uncovered optogenetic molecules that allow neurons to be controlled by red light. “Red light is very useful because it can go very deep into the brain or body, so it can activate or silence neurons distributed throughout large regions,” Boyden says. Working with researchers at the University of Vienna, his group has also recently developed a system that can generate three-dimensional movies of the brains of small animals at very high speeds.
The system is based on a technology known as light-field imaging, which creates 3-D images by measuring the angles of light rays emitted by a sample. It can be used to image the activity of every neuron in the brain at once, giving researchers a much clearer picture of how different neurons work together to process and act on information.

To help researchers take a closer look at the individual molecules, wiring, and connections in the brain, Boyden this year unveiled a new imaging technique based on the polymers used in disposable diapers. When a section of brain tissue is embedded in this dense polymer and water is added, the material swells, moving the biomolecules away from each other. This technology, called “expansion microscopy,” allows brain tissue to be studied in very fine detail using conventional microscopes, which are widely available and can operate at very high speeds, Boyden says. Expanding the brain in this way also gives researchers the space to attach tags to different biomolecules. These tags could then be used to “read out” the chemical composition of the brain, he says, much as barcodes are added to packaging to mark and identify individual products.

But it is perhaps when all of these techniques are combined that they could offer the greatest insights into how the brain works. They may ultimately help us find new methods to treat the 1 billion people around the world who suffer from brain disorders such as epilepsy, Parkinson’s disease, and schizophrenia, Boyden says. “If we can really map the brain with molecular precision, and then probe neurons in the map with optical readout and control tools,” he says, “we could hunt down the control knobs in the brain that will allow us to fix these brain disorders.”

Boyden graduated from MIT in 1999 with dual bachelor’s degrees in physics and electrical engineering and computer science, as well as a master’s degree in electrical engineering and computer science.
Eager to tackle one of research’s remaining “big unknowns,” he developed an interest in the brain and moved to Stanford University to undertake a PhD in neuroscience. While at Stanford, he worked in the laboratories of Richard Tsien and Jennifer Raymond, studying motor learning, and also began working on his optogenetics research as a side project.

In 2006 he rejoined MIT, establishing the neuroengineering group at the Media Lab to combine his understanding of neuroscience with his background in engineering. Unlike some university departments and institutions Boyden approached, which did not believe the field of neuroengineering was ready to become a discipline, the MIT Media Lab was willing to take a chance on a “misfit,” he says. “I wanted to be at MIT because I love the open, free-spirited, collaborative place that it is. Also, they were open-minded enough to hire me,” he adds wryly.

But the field has gradually gained acceptance as the neuroscience community has begun to appreciate how useful the new tools could be to their research. Then in 2013, President Barack Obama launched the BRAIN Initiative, a national research effort designed to improve our understanding of the mind and of diseases such as Alzheimer’s. “I was one of four people from MIT who were invited to the White House for the announcement that neurotechnology should be a national priority, and the field has really exploded since then,” Boyden says. “So it is really gratifying for the Media Lab to bet on such a crazy idea, and to see it succeed as it has.”
Batrinca L.M.,University of Trento |
Mana N.,FBK irst |
Lepri B.,MIT Media Laboratory |
Pianesi F.,FBK irst |
Sebe N.,University of Trento
ICMI'11 - Proceedings of the 2011 ACM International Conference on Multimodal Interaction | Year: 2011
Personality plays an important role in the way people manage the images they convey in self-presentations and employment interviews, trying to affect the other's first impressions and increase effectiveness. This paper addresses the automatic detection of the Big Five personality traits from short (30-120 second) self-presentations by investigating the effectiveness of 29 simple acoustic and visual non-verbal features. Our results show that Conscientiousness and Emotional Stability/Neuroticism are the most recognizable traits. The lower accuracy levels for Extraversion and Agreeableness are explained through the interaction between situational characteristics and the differential activation of the behavioral dispositions underlying those traits. © 2011 ACM.
Batrinca L.M.,University of Trento |
Lepri B.,MIT Media Laboratory |
Mana N.,FBK |
ICMI'12 - Proceedings of the ACM International Conference on Multimodal Interaction | Year: 2012
The user's personality plays an important role in the overall success of Human-Computer Interaction (HCI). The present study focuses on automatically recognizing the Big Five personality traits from 2-5 minute long videos in which the computer interacts with the user at different levels of collaboration, in order to elicit the manifestation of these personality traits. Emotional Stability and Extraversion are the easiest traits to detect automatically under the different collaborative settings: all settings for Emotional Stability, and the intermediate and fully non-collaborative settings for Extraversion. Interestingly, Agreeableness and Conscientiousness can be detected only under a moderately non-collaborative setting. Finally, our task does not seem to activate the full range of dispositions for Creativity. Copyright 2012 ACM.
News Article | September 14, 2016
In the mid-1960s, when few people had even seen a computer, Seymour Papert was making it possible for children to use and program them. He spent his career inventing the tools, toys, software and projects that popularized the view of computers as incubators of knowledge. Papert wrote three seminal books on using the computer to supercharge learning, aimed at academics, teachers and parents: Mindstorms: Computers, Children, and Powerful Ideas (1980), The Children's Machine: Rethinking School in the Age of the Computer (1993) and The Connected Family: Bridging the Digital Generation Gap (1996). Few academics of Papert's stature have spent as much time as he did working in real schools. He delighted in the theories, ingenuity and playfulness of children. Tinkering or programming with them was the cause of many missed meetings.

Papert, who died on 31 July, was born on leap day 1928 in Pretoria, South Africa. His father was an entomologist. Before he was two, he became enamoured of the automotive gears lying around his home, which became the basis for early maths and science experiences. As an educator, he sought to help each learner find his or her 'gears': objects or experiences they could mess about with, intuiting powerful ideas along the way. Papert believed that what gears could not do for all, the computer, the Proteus of machines, might.

Papert was repelled by apartheid. While still in school, he ran afoul of the authorities by organizing classes for local black servants. His anti-apartheid activities as a young adult branded him a dissident, and he was prohibited from travelling outside South Africa. He earned a bachelor's degree in philosophy (1949) and a PhD (1952) in mathematics at the University of the Witwatersrand in Johannesburg. Without a passport, in 1954 he made his way to the University of Cambridge, UK, where he earned a second doctorate, in 1959, for work on the lattices of logic and topology.
From 1959 to 1963, Papert worked at the University of Geneva with the Swiss philosopher and psychologist Jean Piaget. Their collaboration led to great insights into how children learn to think mathematically. Papert built on Piaget's theory of constructivism with a learning theory of his own: constructionism. It proposed that the best way to ensure that knowledge is built in the learner is through the active construction of something shareable: a poem, program, model or idea.

In 1963, artificial-intelligence (AI) pioneer Marvin Minsky invited Papert to join him at the Massachusetts Institute of Technology (MIT) in Cambridge. Papert was soon promoted to co-direct Minsky's Artificial Intelligence Laboratory. The pair co-authored the 1969 book Perceptrons; their mathematical analyses of how neuron-like networks composed of individual agents could model the brain had a great impact on AI research. In 1985, Papert became a founding faculty member of the MIT Media Laboratory, where he led research groups on epistemology and learning and the future of learning. Thinking about thinking and the freedom to achieve one's potential were the leitmotifs of his life. He wanted to create “a mathematics children can love rather than inventing tricks to teach them a mathematics they hate”.

In the late 1960s, Papert was among the creators of Logo, the first programming language for children. One element that made Logo accessible was the turtle, which acted as the programmer's avatar. As mathematical instructions were given to the turtle to move about in space, the creature dragged a pen to draw a trail. Such drawings created turtle geometry, a context in which linear measurement, arithmetic, integers, angle measure, motion and foundational concepts from algebra, geometry and even calculus were made concrete and understandable. Mathematics became playful, personal, expressive, relevant and purposeful.
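The turtle's appeal is easy to demonstrate. Below is a minimal from-scratch Python simulation of turtle geometry (an illustration of the idea, not actual Logo): a turtle holds only a position and a heading, and the classic Logo square, REPEAT 4 [FORWARD 100 RIGHT 90], brings it back to its starting point.

```python
import math

class Turtle:
    """Minimal turtle-geometry sketch: a position, a heading, and a pen trail."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0   # position on the plane
        self.heading = 0.0          # degrees; 0 = facing east
        self.trail = [(0.0, 0.0)]   # points visited by the pen

    def forward(self, distance):
        rad = math.radians(self.heading)
        self.x += distance * math.cos(rad)
        self.y += distance * math.sin(rad)
        self.trail.append((self.x, self.y))

    def right(self, angle):
        self.heading -= angle  # turning right decreases the heading angle

# The classic square: REPEAT 4 [FORWARD 100 RIGHT 90]
t = Turtle()
for _ in range(4):
    t.forward(100)
    t.right(90)

# After four sides and four right angles the turtle is back at the origin,
# having turned through a total of 360 degrees.
print(round(t.x, 6), round(t.y, 6))
```

The "total turtle trip" theorem that children discover this way, that the turns in any closed path sum to a multiple of 360 degrees, is exactly the kind of powerful idea turtle geometry made concrete.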
In 1968, Alan Kay, now known as the designer of what became the Macintosh graphical user interface, was so impressed by the mathematics that he saw children spontaneously engaged in at Papert's Logo lab at MIT that on his flight home he sketched the Dynabook, the prototype for what became the personal computer. In 1989, Australian schools seeking to realize Papert's ideas began providing a laptop to every student. In 2000, Maine governor Angus King proposed providing a laptop for every 7th and 8th grader (typically 12–14-year-olds). Papert spent two years making the case across the state, causing popular opinion to override legislative resistance. The programme remains in place today. Papert was also an inspiration behind the One Laptop per Child initiative that has reached millions of children in the developing world.

A 1971 paper co-authored by Papert, 'Twenty Things to Do with a Computer' (see go.nature.com/2buuwe), marks the birth of the modern 'maker movement'. It describes a world in which children would create inventions and experiments by programming, beyond the computer itself. In the mid-1980s, Papert and his colleagues made that world a reality with the first programmable robotics system for children, LEGO TC Logo. The name of LEGO's current line of robotics sets, Mindstorms, is a hat-tip to Papert. In 1989, LEGO endowed a permanent chair at the MIT Media Lab in his name.

Although critical of institutional schooling, Papert did his research in schools, often with under-served populations of students. In 1986, he was invited to help Costa Rica reinvent its educational system, and from 1999 to 2002 he led an alternative, high-tech, project-based learning environment inside a prison for teenagers. Papert dared educators to grow, invent and lead in a system prone to compliance and standardization. He argued that education is a natural process that blossoms in the absence of coercion. In Papert's eyes, the computer was an object to think with.
He built a bridge between progressive educational traditions and the Internet age to maintain the viability of schooling, and to ensure the democratization of powerful ideas.
Ferscha A.,Johannes Kepler University |
Paradiso J.,MIT Media Laboratory |
Whitaker R.,University of Cardiff
IEEE Pervasive Computing | Year: 2014
As new technologies continuously wash floods of information over individuals via both personal devices and public ICT systems, it's becoming increasingly difficult for individuals to allocate their attention to the right things at the right times. Given this overabundance of information, attention management is of great interest to the pervasive computing community. The articles in this issue focus on understanding how attention is allocated and how information is perceived and shared so it can lead to informed decisions and behavioral change. © 2002-2012 IEEE.
News Article | March 14, 2016
Goren Gordon, an artificial intelligence researcher from Tel Aviv University who runs the Curiosity Lab there, is no different. He and his wife spend as much time as they can with their children, but there are still times when their kids are alone or unsupervised. At those times, they'd like their children to have a companion to learn and play with, Gordon says. That's the case even if that companion is a robot.

Working in the Personal Robots Group at MIT, led by Cynthia Breazeal, Gordon was part of a team that developed a socially assistive robot called Tega, designed to serve as a one-on-one peer learner in or outside of the classroom. Socially assistive robots for education aren't new, but what makes Tega unique is that it can interpret the emotional response of the student it is working with and, based on those cues, create a personalized motivational strategy. Testing the setup in a preschool classroom, the researchers showed that the system can learn and improve itself in response to the unique characteristics of the students it worked with. It proved more effective at increasing students' positive attitude towards the robot and activity than a non-personalized robot assistant. The team reported its results at the 30th Association for the Advancement of Artificial Intelligence (AAAI) Conference in Phoenix, Arizona, in February.

Tega is the latest in a line of smartphone-based, socially assistive robots developed in the MIT Media Lab. The work is supported by a five-year, $10 million Expeditions in Computing award from the National Science Foundation (NSF), which supports long-term, multi-institutional research in areas with the potential for disruptive impact. The researchers piloted the system with 38 students aged three to five in a Boston-area school last year. Each student worked individually with Tega for 15 minutes per session over the course of eight weeks.
A furry, brightly colored robot, Tega was developed specifically to enable long-term interactions with children. It uses an Android device to process movement, perception and thinking, and can respond appropriately to children's behaviors. Unlike previous iterations, Tega is equipped with a second Android phone containing custom software developed by Affectiva Inc., an NSF-supported spin-off co-founded by MIT's Rosalind Picard. The software can interpret the emotional content of facial expressions, a method known as "affective computing."

The students in the trial learned Spanish vocabulary from a tablet computer loaded with a custom-made learning game. Tega served not as a teacher but as a peer learner, encouraging students, providing hints when necessary and even sharing in students' annoyance or boredom when appropriate.

The system began by mirroring the emotional response of students (getting excited when they were excited, and distracted when the students lost focus), which educational theory suggests is a successful approach. It then went further, tracking the impact of each of these cues on the student. Over time, it learned how the cues influenced a student's engagement, happiness and learning successes. As the sessions continued, it ceased simply to mirror the child's mood and began to personalize its responses in a way that would optimize each student's experience and achievement. "We started with a very high-quality approach, and what is amazing is that we were able to show that we could do even better," Gordon says.

Over the eight weeks, the personalization continued to increase. Compared with a control group that received only the mirroring reaction, students who received the personalized responses were more engaged by the activity, the researchers found. In addition to tracking the long-term impact of personalization, the researchers also studied the immediate changes that a response from Tega elicited in the student.
From these before-and-after responses, they learned that some reactions, like a yawn or a sad face, had the effect of lowering the engagement or happiness of the student, something they had suspected but that had never been studied.

"We know that learning from peers is an important way that children learn not only skills and knowledge, but also attitudes and approaches to learning such as curiosity and resilience to challenge," says Breazeal, associate professor of Media Arts and director of the Personal Robots Group at the MIT Media Laboratory. "What is so fascinating is that children appear to interact with Tega as a peer-like companion in a way that opens up new opportunities to develop next-generation learning technologies that not only address the cognitive aspects of learning, like learning vocabulary, but the social and affective aspects of learning as well."

The experiment served as a proof of concept both for the idea of personalized educational assistive robots and for the feasibility of using such robots in a real classroom. The system, which is almost entirely wireless and easy to set up and operate behind a divider in an active classroom, caused very little disruption and was thoroughly embraced by the student participants and teachers. "It was amazing to see," Gordon reports. "After a while the students started hugging it, touching it, making the expression it was making and playing independently with almost no intervention or encouragement."

Though the experiment ran for a full eight weeks, the personalization process was still improving at the end, suggesting that more time would be needed to arrive at an optimal interaction style. The researchers plan to improve upon and test the system in a variety of settings, including with students with learning disabilities, for whom one-on-one interaction and assistance are particularly critical and hard to come by.
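The article does not describe Tega's learning algorithm in detail, but one common way to frame "learn which motivational cue works for this student" is as a multi-armed bandit problem, sketched below. The cue names, the hidden reward probabilities, and the epsilon-greedy strategy are all illustrative assumptions for this sketch, not the team's actual method.

```python
import random

random.seed(7)

# Hypothetical motivational cues the robot could choose between.
CUES = ["excited", "calm", "hint", "mirror"]

# Simulated (hidden) probability that each cue raises this child's
# engagement; in reality this would come from the affect-reading camera.
TRUE_EFFECT = {"excited": 0.7, "calm": 0.4, "hint": 0.55, "mirror": 0.5}

estimates = {c: 0.0 for c in CUES}  # running average reward per cue
counts = {c: 0 for c in CUES}       # times each cue was tried

def choose_cue(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the best-looking cue, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(CUES)
    return max(CUES, key=lambda c: estimates[c])

for session in range(2000):
    cue = choose_cue()
    # Observe a binary engagement "reward" for the chosen cue.
    reward = 1.0 if random.random() < TRUE_EFFECT[cue] else 0.0
    counts[cue] += 1
    # Incremental update of the average observed engagement for this cue.
    estimates[cue] += (reward - estimates[cue]) / counts[cue]

best = max(CUES, key=lambda c: estimates[c])
print(best, {c: round(v, 2) for c, v in estimates.items()})
```

The point of the sketch is the shape of the loop, choose a cue, observe the child's affective response, update per-cue estimates, which matches the article's description of starting from mirroring and gradually shifting toward whatever responses work best for the individual student.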
"A child who is more curious is able to persevere through frustration, can learn with others and will be a more successful lifelong learner," Breazeal says. "The development of next-generation learning technologies that can support the cognitive, social and emotive aspects of learning in a highly personalized way is thrilling."
Bletsas A.,Technical University of Crete |
Lippman A.,MIT Media Laboratory |
IEEE Journal on Selected Areas in Communications | Year: 2010
This work studies zero-feedback distributed beamforming; we are motivated by scenarios where the links between destination and all distributed transmitters are weak, so that no reliable communication in the form of pilot signals or feedback messages can be assumed. Furthermore, we make the problem even more challenging by assuming no specialized software/hardware for distributed carrier synchronization; we are motivated by ultra-low complexity transceivers. It is found that zero-feedback (i.e. blind), constructive, distributed signal alignment at the destination is possible; the proposed scheme exploits lack of carrier synchronization among M distributed transmitters and provides beamforming gains. Possible applications include reachback communication in low-cost sensor networks with simple (i.e. conventional, no carrier frequency/phase adjustment capability) radio transceivers. © 2010 IEEE.
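The intuition the abstract describes can be illustrated with a toy simulation (all parameters below are made-up assumptions, not the paper's setup): M unsynchronized carriers drift in relative phase precisely because their frequencies differ, so at some sample times they happen to align and the received amplitude approaches the fully coherent sum M, with no pilot signals or feedback.

```python
import cmath
import math
import random

random.seed(1)

M = 5           # number of distributed transmitters
N = 10_000      # receiver samples simulated

# Each low-cost radio has its own small carrier frequency offset (radians
# per sample) and an arbitrary initial phase; nothing is synchronized.
offsets = [random.uniform(-0.01, 0.01) for _ in range(M)]
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(M)]

best = 0.0
for n in range(N):
    # Superposition of the M unmodulated carriers at the destination.
    s = sum(cmath.exp(1j * (phases[k] + offsets[k] * n)) for k in range(M))
    best = max(best, abs(s))

# Because the carriers drift relative to one another, their phases
# occasionally line up and the peak received amplitude exceeds the
# incoherent level sqrt(M), approaching the coherent value M.
print(round(best, 2))
```

The peak amplitude found this way sits between the incoherent level sqrt(M) and the coherent bound M; how often and how closely it approaches M is the kind of question the paper analyzes.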
Mukaigawa Y.,Osaka University |
Mukaigawa Y.,MIT Media Laboratory |
Yagi Y.,Osaka University |
Raskar R.,MIT Media Laboratory
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition | Year: 2010
We propose a new method to analyze light transport in homogeneous scattering media. The incident light undergoes multiple bounces in translucent objects, and produces a complex light field. Our method analyzes the light transport in two steps. First, single and multiple scattering are separated by projecting high-frequency stripe patterns. Then, multiple scattering is decomposed into each bounce component based on the light transport equation. The light field for each bounce is recursively estimated. Experimental results show that light transport in scattering media can be decomposed and visualized for each bounce. ©2010 IEEE.
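The recursive bounce estimation can be sketched with a toy transport matrix (the three-patch scene and matrix values below are invented for illustration; the paper estimates light fields from real measurements): each bounce component is one more application of the transport operator to the single-scattering term, and the bounce components sum to the total light field.

```python
# Toy 3-"patch" scene: A[i][j] is the fraction of light leaving patch j
# that scatters onto patch i in one bounce (illustrative values only).
A = [[0.0, 0.2, 0.1],
     [0.2, 0.0, 0.3],
     [0.1, 0.3, 0.0]]

L1 = [1.0, 0.5, 0.2]   # single-scattering (first-bounce) light field

def bounce(A, L):
    """One application of the transport operator: L_{n+1} = A @ L_n."""
    return [sum(A[i][j] * L[j] for j in range(len(L))) for i in range(len(L))]

# Recursively estimate each bounce component, as in the decomposition
# the abstract describes; the terms shrink because energy is lost.
bounces = [L1]
for _ in range(20):
    bounces.append(bounce(A, bounces[-1]))

# The total multiple-scattering light field is the sum over bounces.
total = [sum(b[i] for b in bounces) for i in range(3)]
print([round(v, 4) for v in total])
```

Since the per-bounce losses make the series converge, the summed field satisfies the fixed-point relation total = L1 + A(total) up to a tiny truncation error, which is a handy self-check on the decomposition.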
Mistry P.,MIT Media Laboratory |
Maes P.,MIT Media Laboratory
UIST 2010 - 23rd ACM Symposium on User Interface Software and Technology, Adjunct Proceedings | Year: 2010
In this short paper we present Mouseless, a novel input device that provides the familiar interaction of a physical computer mouse without requiring an actual hardware mouse. The paper also briefly describes the hardware and software implementation of the prototype system and discusses the interactions supported.
Tseng T.,MIT Media Laboratory
Proceedings of IDC 2015: The 14th International Conference on Interaction Design and Children | Year: 2015
While project documentation can help young Makers showcase their learning, prototyping skills, and creativity, motivating documentation practices has remained a challenge. Current tools for photographing projects are often disruptive to the flow of creating a design project, and compiling documentation into a readable and sharable format can be time consuming. To address these issues, I introduce Spin, a photography turntable system for creating animated documentation. Spin consists of a motorized turntable that pairs with a mobile device to capture 360-degree views of a DIY project at a particular point in time. These photographs are compiled into an animation of the project called a spin. As a project is developed over time, spin animations are compiled into a set animation showcasing the evolution of the project. This paper describes the motivation for creating the Spin system, its current implementation, and a planned pilot study involving introducing the system to several Makerspaces across the United States for extended use. Copyright is held by the owner/author(s).