To do this, computer systems are programmed to find predictive relationships calculated from the massive amounts of data we supply to them. Machine learning systems use advanced algorithms—sets of rules for solving mathematical problems—to identify these predictive relationships using "training data." This data is then used to construct the models and features within a system that enable it to correctly predict your desire to read the latest best-seller, or the likelihood of rain next week. This intricate learning process means that a piece of raw data often goes through a series of computations in a given system. The data, the computations, and the information the system derives from that data together form a complex propagation network called the data's "lineage."

The term was coined by researchers Yinzhi Cao of Lehigh University and Junfeng Yang of Columbia University, who are pioneering a novel approach toward making such learning systems forget. Considering how important this capability is to improving security and protecting privacy, Cao and Yang believe that forgetting systems will be increasingly in demand, and the pair has developed a way to forget data faster and more effectively than current approaches allow. Their concept, called "machine unlearning," is so promising that the duo have been awarded a four-year, $1.2 million National Science Foundation grant—split between Lehigh and Columbia—to develop the approach.

"Effective forgetting systems must be able to let users specify the data to forget with different levels of granularity," said Yinzhi Cao, Assistant Professor of Computer Science and Engineering at Lehigh University's P.C. Rossin College of Engineering & Applied Science and a Principal Investigator on the project. "These systems must remove the data and undo its effects so that all future operations run as if the data never existed."

There are a number of reasons why an individual user or service provider might want a system to forget data and its complete lineage. Privacy is one. After Facebook changed its privacy policy, many users deleted their accounts and the associated data. The iCloud photo hacking incident in 2014—in which hundreds of celebrities' private photos were accessed via Apple's cloud services suite—led to online articles teaching users how to completely delete iOS photos, including the backups. New research has revealed that machine learning models for personalized medicine dosing can leak patients' genetic markers: a small set of statistics on genetics and diseases is enough for hackers to identify specific individuals, despite cloaking mechanisms. Naturally, users unhappy with these newfound risks want their data and its influence on the models and statistics to be completely forgotten.

Security is another reason. Consider the anomaly-based intrusion detection systems used to detect malicious software. In order to positively identify an attack, the system must be taught to recognize normal system activity, so the security of these systems hinges on the model of normal behaviors extracted from the training data. By polluting the training data, attackers pollute the model and compromise security. Once the polluted data is identified, the system must completely forget that data and its lineage in order to regain security.

Widely used learning systems such as Google Search are, for the most part, only able to forget a user's raw data upon request, and not that data's lineage.
While this is obviously problematic for users who wish to ensure that any trace of unwanted data is removed completely, the limitation is also a major challenge for service providers, who have strong incentives to fulfill data removal requests, including the retention of customer trust. Service providers will increasingly need to be able to remove data and its lineage completely in order to comply with laws governing user data privacy, such as the "right to be forgotten" ruling issued in 2014 by the European Union's top court. In October 2014, Google removed more than 170,000 links to comply with the ruling, which affirmed an individual's right to control what appears when their name is searched online. By July 2015, Google said it had received more than a quarter-million such requests.

Building on previous work first presented at a 2015 IEEE symposium and later published, Cao and Yang's "machine unlearning" method is based on the fact that most learning systems can be converted into a form that can be updated incrementally, without costly retraining from scratch. Their approach introduces a layer of a small number of summations between the learning algorithm and the training data to break the dependency between the two: the learning algorithm depends only on the summations, not on individual data items. Using this method, unlearning a piece of data and its lineage no longer requires rebuilding the models and features that predict relationships between pieces of data. Simply recomputing a small number of summations removes the data and its lineage completely—and much faster than retraining the system from scratch. Cao says he believes they are the first to establish the connection between unlearning and the summation form.

And it works. Cao and Yang evaluated their unlearning approach by testing it on four real-world systems, a diverse set of programs that serves as a representative benchmark for the method: LensKit, an open-source recommendation system; Zozzle, a closed-source JavaScript malware detector; an open-source OSN (online social network) spam filter; and PJScan, an open-source PDF malware detector. The success of these initial evaluations has set the stage for the next phases of the project, which include adapting the technique to other systems and creating verifiable machine unlearning to statistically test whether unlearning has indeed repaired a system or completely wiped out unwanted data.

In their paper's introduction, Cao and Yang look ahead to what's next for Big Data and argue that machine unlearning could play a key role in enhancing security and privacy and in our economic future: "We foresee easy adoption of forgetting systems because they benefit both users and service providers. With the flexibility to request that systems forget data, users have more control over their data, so they are more willing to share data with the systems. More data also benefit the service providers, because they have more profit opportunities and fewer legal risks." They add: "...we envision forgetting systems playing a crucial role in emerging data markets where users trade data for money, services, or other data because the mechanism of forgetting enables a user to cleanly cancel a data transaction or rent out the use rights of her data without giving up the ownership."
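To make the summation idea concrete, here is a minimal Python sketch. It is not Cao and Yang's implementation; it simply uses least-squares regression as a stand-in for a learning system whose model depends on the training data only through a few running sums, so that forgetting a training point amounts to subtracting that point's contribution from the sums and re-solving, rather than retraining on all remaining data.

    import numpy as np

    class SummationFormRegression:
        """Toy least-squares model kept in 'summation form': the fitted
        weights depend on the training data only through the running sums
        XtX = sum(x x^T) and Xty = sum(y x), never on individual points."""

        def __init__(self, n_features):
            self.XtX = np.zeros((n_features, n_features))
            self.Xty = np.zeros(n_features)

        def learn(self, x, y):
            # Add one training point's contribution to the summations.
            self.XtX += np.outer(x, x)
            self.Xty += y * x

        def unlearn(self, x, y):
            # Forgetting a point subtracts its contribution; no pass over
            # the remaining data is needed.
            self.XtX -= np.outer(x, x)
            self.Xty -= y * x

        def weights(self, ridge=1e-8):
            # A tiny ridge term keeps the solve stable when data are scarce.
            n = self.XtX.shape[0]
            return np.linalg.solve(self.XtX + ridge * np.eye(n), self.Xty)

    rng = np.random.default_rng(0)
    model = SummationFormRegression(3)
    data = [(rng.normal(size=3), rng.normal()) for _ in range(100)]
    for x, y in data:
        model.learn(x, y)

    w_before = model.weights()
    model.unlearn(*data[0])      # forget one point and its effect on the model
    w_after = model.weights()
    print("weights before forgetting:", w_before)
    print("weights after forgetting: ", w_after)

Any model whose training reduces to accumulating a small, fixed set of summations can, in principle, forget a point the same way, which is the property the unlearning approach relies on.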


Associate Professor Julian Togelius works at the intersection of artificial intelligence (AI) and games — a largely unexplored juncture that he has shown can be the site of visionary and mind-expanding research. Could games provide a better AI test bed than robots, which — despite the way they excite the public imagination — can be slow, unwieldy and expensive? According to him, the answer is resoundingly yes. Could an artificially intelligent operating system exhibit more originality than a human game designer? Togelius thinks so. "I'm teaching computers to be more creative than humans," he says.

Togelius, a member of the NYU Tandon School of Engineering's Department of Computer Science and Engineering, is at the forefront of the study of procedural content generation (PCG) — the process of creating game content (such as levels, maps, rules and environments) with algorithms rather than direct user input. Specifically, he has pioneered the use of evolutionary algorithms for such tasks. (Evolutionary algorithms are inspired by biological processes such as reproduction, mutation and natural selection.) These algorithms can work in tandem with other algorithms that recognize a player's skill and preferences to change the game on the fly. But in addition to creating customized games that contain virtually limitless levels to conquer, Togelius has shown that PCG can generate entirely new games from scratch. He thus foresees game development becoming less costly (by speeding up the process and eliminating the need for human designers) and orders of magnitude more creative. People have a tendency to copy one another without even realizing it, as he tells the students in his undergraduate course AI for Games, and that's a problem that could be avoided if human designers were taken out of the equation.

AI — machines or software that can think and act independently in a wide variety of situations — obviously has applicability far outside of gaming, but Togelius believes that games provide fair and reliable benchmarks for AI systems under development. He is widely known for inventing and running competitions based on digital action games — such as car racing and Super Mario Bros. — that allow developers to test how truly intelligent their AI is. Such an approach has marked advantages over using robotics as an AI test bed, Togelius points out, explaining that as a doctoral candidate he had hoped to build robot software that would learn evolutionarily from its mistakes in order to develop increasingly complex and general intelligence. He soon realized, however, that for the robots to learn from their experiences, they would have to attempt each task thousands of times. This meant that even a simple experiment could take several days — and that was assuming the robot didn't break down (an unfortunately common occurrence) or behave inconsistently as its batteries depleted or its motors warmed up. A potential solution might have been to build an advanced robot with more sophisticated sensors and actuators and to develop complex environments in which it could learn; that proposition would be costly, however, and would present increased chances for malfunction. "This all adds up, and quickly becomes unmanageable," he says. "I was too ambitious and impatient for that. I wanted to create complex intelligence that could learn from experience. So, I turned to video games."

Professor Andy Nealen, who co-directs Tandon's Game Innovation Lab, stresses the far-ranging implications of that change in approach. "Togelius's work is now leading to algorithms capable of better-than-human decision making — a result with implications for games and beyond," he says. Togelius, who mentors several BS/MS students at the Game Innovation Lab, is providing ample proof that games are among the most relevant and important domains for anyone hoping to work seriously with AI.
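For readers unfamiliar with how an evolutionary algorithm can generate content, the Python sketch below shows the general shape of such a loop. It is an illustration only, not Togelius's system: the tile encoding, the target gap ratio, and the playability penalty are invented stand-ins for the content representation and evaluation functions a real generator would use.

    import random

    TILES = "_G"                 # '_' = ground tile, 'G' = gap (invented for this toy)
    LEVEL_LEN = 40
    TARGET_GAP_RATIO = 0.25      # hypothetical difficulty target

    def random_level():
        return [random.choice(TILES) for _ in range(LEVEL_LEN)]

    def fitness(level):
        # Reward levels whose gap density is near the target and penalize any
        # gap wider than 3 tiles (assumed unjumpable). Both criteria are
        # invented stand-ins for a real playability/difficulty evaluator.
        gap_ratio = level.count("G") / LEVEL_LEN
        widest_gap = max((len(run) for run in "".join(level).split("_") if run), default=0)
        return -abs(gap_ratio - TARGET_GAP_RATIO) - max(0, widest_gap - 3)

    def mutate(level, rate=0.05):
        return [random.choice(TILES) if random.random() < rate else tile for tile in level]

    def crossover(a, b):
        cut = random.randrange(1, LEVEL_LEN)
        return a[:cut] + b[cut:]

    def evolve(pop_size=50, generations=200):
        population = [random_level() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]          # truncation selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    print("".join(evolve()))

In practice most of the design effort goes into the evaluation step; a real PCG system might replace the simple gap-ratio score with a simulation-based playability test or a model of the current player's skill and preferences, which is how generators can adapt content on the fly.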


Apps allow you to link your smartphone to anything from your shoes to your jewelry to your doorbell — and soon, you may be able to add your contact lenses to that list. Engineers at the University of Washington have developed an innovative way of communicating that would allow medical aids such as contact lenses and brain implants to send signals to smartphones. The new technology, called "interscatter communication," works by converting Bluetooth signals into Wi-Fi signals, the engineers wrote in a paper to be presented Aug. 22 at the Association for Computing Machinery's Special Interest Group on Data Communication (SIGCOMM) conference in Brazil.

"Instead of generating Wi-Fi signals on your own, our technology creates Wi-Fi by using Bluetooth transmissions from nearby mobile devices such as smartwatches," study co-author Vamsi Talla, a research associate in the Department of Computer Science and Engineering at the University of Washington, said in a statement.

Interscatter communication is based on an existing method of communication called backscatter, which lets devices exchange information by reflecting existing signals back. "Interscatter" works essentially the same way, but with one difference: it allows for inter-technology communication — in other words, it lets Bluetooth signals and Wi-Fi signals talk to each other. Interscatter communication would therefore allow devices such as contact lenses to send data to other devices, according to the researchers. Until now, such communication had not been possible, because sending data over Wi-Fi requires too much power for a device like a contact lens.

To demonstrate interscatter communication, the engineers designed a contact lens equipped with a tiny antenna. The Bluetooth signal, in this case, came from a smartwatch. The antenna on the contact lens was able to manipulate that Bluetooth signal, encode data from the contact lens and convert the result into a Wi-Fi signal that could be read by another device.

Though the concept of "smart" contact lenses may seem a bit gimmicky, they could in fact provide valuable medical information to patients. For example, it is possible to monitor blood sugar levels from a person's tears, so a connected contact lens could track blood sugar and send a notification to the person's phone when levels dropped, study co-author Vikram Iyer, a doctoral student in electrical engineering, also at the University of Washington, said in a statement. (Monitoring blood sugar levels is important for people with diabetes.) The researchers also said interscatter communication could be used to transmit data from brain implants that could one day help people with paralysis regain movement.

Not all of the potential applications are related to medical devices, however. Interscatter communication could also exchange information between credit cards, the researchers wrote, which would allow people to transfer money between cards simply by holding them near a smartphone.
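The core trick behind backscatter is that a device which merely toggles how its antenna reflects an incoming signal can shift that signal's energy into a different frequency band, and the toggling itself can carry data. The Python/NumPy sketch below is a heavily idealized toy, not the interscatter system itself: it models the incoming Bluetooth transmission as a single tone, uses an ideal +/-1 reflection switch, and picks arbitrary frequencies rather than real Bluetooth or Wi-Fi channels, just to show the frequency shift that makes the conversion possible.

    import numpy as np

    fs = 100e6                        # simulation sample rate (Hz)
    n = 100_000                       # 1 ms of samples
    t = np.arange(n) / fs

    f_in = 10e6                       # stand-in for the incoming Bluetooth tone
    incoming = np.cos(2 * np.pi * f_in * t)

    f_switch = 3.2e6                  # rate at which the tag toggles its antenna
    switch = np.sign(np.cos(2 * np.pi * f_switch * t))   # idealized +/-1 reflection state

    # Toggling the reflection at f_switch moves the tone to f_in +/- f_switch,
    # i.e., into bands the low-power tag could never transmit in on its own.
    reflected = incoming * switch

    spectrum = np.abs(np.fft.rfft(reflected))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    top_two = np.sort(freqs[np.argsort(spectrum)[-2:]])
    print(top_two / 1e6)              # ~[6.8, 13.2] MHz: energy shifted out of the original band

In the actual interscatter design the reflected signal also has to be shaped into a valid Wi-Fi transmission, and one of the two mirrored sidebands visible in this toy has to be suppressed; the sketch only shows why reflecting at an offset is enough to move data from one radio technology's band into another's.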


Researcher David Silver and colleagues designed a computer program capable of beating a top-level Go player — a marvelous technological feat and an important threshold in the development of artificial intelligence, or AI. It stresses once more that humans aren't at the center of the universe, and that human cognition isn't the pinnacle of intelligence.

I remember well when IBM's computer Deep Blue beat chess master Garry Kasparov. While I'd played — and lost to — chess-playing computers myself, the Kasparov defeat solidified my personal belief that artificial intelligence will become reality, probably even in my lifetime. I might one day be able to talk to things similar to my childhood heroes C-3PO and R2-D2. My future house could be controlled by a program like HAL from Kubrick's "2001" movie.

[Image caption: Not the best automated-home controller: HAL.]

As a researcher in artificial intelligence, I realize how impressive it is to have a computer beat a top Go player, a much tougher technical challenge than winning at chess. Yet it's still not a big step toward the type of artificial intelligence used by the thinking machines we see in the movies. For that, we need new approaches to developing AI.

To understand the limitations of the Go milestone, we need to think about what artificial intelligence is — and how the research community makes progress in the field. Typically, AI is part of the domain of engineering and computer science, a field in which progress is measured not by how much we learn about nature or humans, but by achieving a well-defined goal: if the bridge can carry a 120-ton truck, it succeeds. Beating a human at Go falls into exactly that category.

I take a different approach. When I talk about AI, I typically don't talk about a well-defined problem. Rather, I describe the AI that I would like to have as "a machine that has cognitive abilities comparable to that of a human." Admittedly, that is a very fuzzy goal, but that is the whole point. We can't engineer what we can't define, which is why I think the engineering approach to "human-level cognition" — that is, writing smart algorithms to solve a particularly well-defined problem — isn't going to get us where we want to go.

But then what is? We can't wait for cognitive science, neuroscience, behavioral biology or psychology to figure out what the brain does and how it works. Even if we wait, these sciences will not come up with a simple algorithm explaining the human brain. What we do know is that the brain wasn't engineered with a simple modular building plan in mind. It was cobbled together by Darwinian evolution — an opportunistic mechanism governed by the simple rule that whoever makes more viable offspring wins the race. This explains why I work on the evolution of artificial intelligence and try to understand the evolution of natural intelligence. I make a living out of evolving digital brains.

[Figure caption: Divergent evolution: two maps of differently evolved connections between digital brain parts, 49,000 generations after both began from the same starting point. Arend Hintze, CC BY]

To return to the Go algorithm: in the context of computer games, improving skill is possible only by playing against a better competitor. The Go victory shows that we can make better algorithms for more complex problems than before. That, in turn, suggests that in the future we could see more computer games with complex rules providing better opponent AI against human players.
Chess computers have changed how modern chess is played, and we can expect a similar effect for Go and its players. The new algorithm provides a way to define optimal play, which is probably good if you want to learn Go or improve your skills. However, since it is pretty much the best Go player on Earth, playing against it all but guarantees you'll lose. That's no fun.

Fortunately, continuous loss doesn't have to happen. The computer's controllers can make the algorithm play less well, either by reducing the number of moves it thinks ahead or — and this is really new — by using a less-developed deep neural net to evaluate the Go board. But does this make the algorithm play more like a human, and is that what we want in a Go player?

Let us turn to other games that have fewer fixed rules and instead require the player to improvise more. Imagine a first-person shooter, a multiplayer battle game, or a typical role-playing adventure game. These games became popular not because people could play them against better AI, but because they can be played against, or together with, other human beings. It seems we are not necessarily looking for strength and skill in the opponents we play, but for human characteristics: the ability to surprise us, to share our humor, maybe even to empathize with us.

For example, I recently played Journey, a game in which the only way online players can interact with one another is by singing a particular tune that each can hear and see. It is a creative and emotional way for a player to take in the beautiful art of that game and share important moments of its story with someone else.

[Image caption: Playing with your emotions: In the video game Journey, intercharacter connection is a key feature. Journey/That Game Company, CC BY-ND]

It is the emotional connection that makes this experience remarkable, not the skill of the other player. If the AI that controls other players were evolved, it might go through the same steps that made our brains work. That could include sensing emotional equivalents of fear, warning about undetermined threats, and probably also empathy for understanding other organisms and their needs. It is this, together with the AI's ability to do many different things instead of being a specialist in just one realm, that I am looking for in AI. We might, therefore, need to incorporate the process of how we became us into the process of how we make our digital counterparts.

Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University. This article was originally published on The Conversation.

