Pasadena, CA, United States


News Article | December 1, 2016

If you think operating a robot in space is hard, try doing it in the ocean. Saltwater can corrode your robot and block its radio signals. Kelp forests can tangle it up, and you might not get it back. Sharks will even try to take bites out of its wings. The ocean is basically a big obstacle course of robot death. Despite this, robotic submersibles have become critical tools for ocean research. While satellites can study the ocean surface, their signals can't penetrate the water. A better way to study what's below is to look beneath yourself -- or send a robot in your place. That's why a team of researchers from NASA and other institutions recently visited choppy waters in Monterey Bay, California. Their ongoing research is developing artificial intelligence for submersibles, helping them track signs of life below the waves. Doing so won't just benefit our understanding of Earth's marine environments; the team hopes this artificial intelligence will someday be used to explore the icy oceans believed to exist on moons like Europa. If confirmed, these oceans are thought to be some of the most likely places to host life in the outer solar system. A fleet of six coordinated drones was used to study Monterey Bay. The fleet roved for miles seeking out changes in temperature and salinity. To plot their routes, forecasts of these ocean features were sent to the drones from shore. The drones also sensed how the ocean actively changed around them. A major goal for the research team is to develop artificial intelligence that seamlessly integrates both kinds of data. "Autonomous drones are important for ocean research, but today's drones don't make decisions on the fly," said Steve Chien, one of the research team's members. Chien leads the Artificial Intelligence Group at NASA's Jet Propulsion Laboratory, Pasadena, California. "In order to study unpredictable ocean phenomena, we need to develop submersibles that can navigate and make decisions on their own, and in real-time. 
Doing so would help us understand our own oceans -- and maybe those on other planets." Other research members hail from Caltech in Pasadena; the Monterey Bay Aquarium Research Institute, Moss Landing, California; Woods Hole Oceanographic Institution, Woods Hole, Massachusetts; and Remote Sensing Solutions, Barnstable, Massachusetts. If successful, this project could lead to submersibles that can plot their own course as they go, based on what they detect in the water around them. That could change how we collect data, while also developing the kind of autonomy needed for planetary exploration, said Andrew Thompson, assistant professor of environmental science and engineering at Caltech. "Our goal is to remove the human effort from the day-to-day piloting of these robots and focus that time on analyzing the data collected," Thompson said. "We want to give these submersibles the freedom and ability to collect useful information without putting a hand in to correct them." At the smallest levels, marine life exists as "biocommunities." Nutrients in the water are needed to support plankton; small fish follow the plankton; big fish follow them. Find the nutrients, and you can follow the breadcrumb trail to other marine life. But that's easier said than done. Those nutrients are swept around by ocean currents, and can change direction suddenly. Life under the sea is constantly shifting in every direction, and at varying scales of size. "It's all three dimensions plus time," Chien said about the challenges of tracking ocean features. "Phenomena like algal blooms are hundreds of kilometers across. But small things like dinoflagellate clouds are just dozens of meters across." It might be easy for a fish to track these features, but it's nearly impossible for an unintelligent robot. "Truly autonomous fleets of robots have been a holy grail in oceanography for decades," Thompson said. 
"Bringing JPL's exploration and AI experience to this problem should allow us to lay the groundwork for carrying out similar activities in more challenging regions, like Earth's polar regions and even oceans on other planets." The recent field work at Monterey Bay was funded by JPL and Caltech's Keck Institute for Space Studies (KISS). Additional research is planned in the spring of 2017. Caltech in Pasadena, California, manages JPL for NASA. For more information about this research, visit:

Hayden D.S.,Jet Propulsion Laboratory | Chien S.,Artificial Intelligence Group | Thompson D.R.,Machine Learning and Instrument Autonomy Group | Castano R.,Instrument Software and Science Data Systems Section
IEEE Intelligent Systems | Year: 2010

Many current and future NASA missions are capable of collecting enormous amounts of data, of which only a small portion can be transmitted to Earth. Communications are limited due to distance, visibility constraints, and competing mission downlinks. Long missions and high-resolution, multispectral imaging devices easily produce data exceeding the available bandwidth. As an example, the HiRISE camera aboard the Mars Reconnaissance Orbiter produces images of up to 16.4 Gbits in data volume, but downlink bandwidth is limited to 6 Mbits per second (Mbps). © 2010 IEEE.
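The scale of the bottleneck the abstract describes is easy to check with back-of-the-envelope arithmetic, using only the two figures quoted above (16.4 Gbits per image, 6 Mbps downlink):

```python
# Rough illustration of the downlink bottleneck described in the abstract:
# transmitting one full-size HiRISE image over a 6 Mbps link.
IMAGE_BITS = 16.4e9      # 16.4 Gbits per image (from the abstract)
DOWNLINK_BPS = 6e6       # 6 Mbits per second

seconds = IMAGE_BITS / DOWNLINK_BPS
print(f"One image needs {seconds:.0f} s ({seconds / 60:.1f} min) of downlink")
```

Roughly three quarters of an hour of dedicated downlink for a single image, which is why onboard data selection matters.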

Rabideau G.,Jet Propulsion Laboratory | Rabideau G.,Artificial Intelligence Group | Chien S.,Jet Propulsion Laboratory | Chien S.,Artificial Intelligence Group | And 2 more authors.
Journal of Aerospace Computing, Information and Communication | Year: 2011

We describe an efficient, online goal selection algorithm and its use for selecting goals at runtime. Our focus is on the replanning that must be performed in a timely manner on the embedded system where computational resources are limited (as in many aerospace systems). In particular, our algorithm generates near-optimal solutions to problems with fully specified goal requests that can oversubscribe available resources but have no temporal flexibility. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute. This enables shorter response cycles and greater autonomy for the system under control. We show that the average-case complexity for updating the goal set is O(N lg N) and runtime execution is O(N). We perform an empirical analysis on both synthetic data and space operations mission-like scenarios that confirms these performance characteristics. Finally, we show that scaling these performance figures to existing, very limited onboard spacecraft embedded environments (a Mars Reconnaissance Orbiter-like environment) appears feasible. Copyright © 2011 by the American Institute of Aeronautics and Astronautics, Inc.
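To make the oversubscription setting concrete, here is a minimal sketch (not the authors' algorithm) of greedy goal selection under a resource budget, using a heap so that ordering goals costs O(N lg N); the goal names, priorities, and costs are invented:

```python
import heapq

def select_goals(goals, budget):
    """Greedily admit goals by priority until the resource budget is spent.

    goals: list of (priority, cost, name) tuples; returns admitted names.
    A max-heap (via negated priorities) orders the requests; heapify is
    O(N) and each pop is O(lg N), so a full pass is O(N lg N).
    """
    heap = [(-priority, cost, name) for priority, cost, name in goals]
    heapq.heapify(heap)
    selected, used = [], 0.0
    while heap:
        _, cost, name = heapq.heappop(heap)   # highest priority first
        if used + cost <= budget:             # skip goals that don't fit
            selected.append(name)
            used += cost
    return selected

goals = [(5, 3.0, "image_A"), (9, 4.0, "image_B"),
         (2, 2.0, "cal"), (7, 5.0, "image_C")]
print(select_goals(goals, budget=9.0))   # ['image_B', 'image_C']
```

Because the heap can absorb a late-arriving request in O(lg N), selection can be deferred in the "just-in-time" spirit the abstract describes.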

Poschmann P.,Artificial Intelligence Group | Hellbach S.,Artificial Intelligence Group | Bohme H.-J.,Artificial Intelligence Group
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

We propose an enhancement to a human-robot interaction system. The goal is to draw surrounding persons into a conversation with the robot by means of establishing or keeping eye contact. For this, a simple and hence computationally efficient people-tracking algorithm has been developed, which is itself an enhancement of existing approaches. As the foundation of our approach we use a vanilla Kalman filter that explicitly models velocity in the state space, while the observations are transformed into a unified global coordinate system. We evaluate different integration strategies for multiple sensor cues. Furthermore, we extend an algorithm for group detection to be able to recognize individuals. © Springer-Verlag Berlin Heidelberg 2012.
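A minimal sketch of the kind of constant-velocity Kalman filter the abstract describes, with velocity explicitly in the state vector; the noise values and motion model below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)   # state [x, y, vx, vy], constant velocity
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)    # we only observe (x, y) position
Q = 0.01 * np.eye(4)                   # process noise (assumed)
R = 0.25 * np.eye(2)                   # measurement noise (assumed)

def kalman_step(x, P, z):
    x_pred = F @ x                      # predict state forward one step
    P_pred = F @ P @ F.T + Q
    y = z - H @ x_pred                  # innovation: measurement residual
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S) # Kalman gain
    x_new = x_pred + K @ y              # correct prediction with measurement
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)
for t in range(1, 6):                   # person walking at ~1 m/s along x
    x, P = kalman_step(x, P, np.array([t * 1.0, 0.0]))
print(np.round(x, 2))                   # position and velocity estimate
```

After a few position-only observations, the filter infers a positive x-velocity even though velocity is never measured directly, which is what makes the velocity-in-state-space modeling useful for tracking people between frames.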

Lyras D.P.,Artificial Intelligence Group | Kotinas I.,Artificial Intelligence Group | Sgarbas K.,Artificial Intelligence Group | Fakotakis N.,Artificial Intelligence Group
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

Greek-to-Greeklish transcription does not appear to be a difficult task, since it can be achieved by directly mapping each Greek character to a corresponding symbol of the Latin alphabet. Nevertheless, such transliteration systems do not efficiently simulate the human way of writing Greeklish, since Greeklish users do not follow a standardized transliteration scheme. In this paper a stochastic Greek-to-Greeklish transcriber modeled on real user data is presented. The proposed transcriber employs knowledge derived from the analytical processing of 9,288 Greek-Greeklish word pairs annotated by real users and achieves the automatic transcription of any Greek word into a valid Greeklish form in a stochastic way (i.e., each Greek symbol set corresponds to a variety of Latin symbols according to the processed data), thus simulating human-like behavior. This transcriber could be used as a real-time Greek-to-Greeklish transcriber and/or as a data-generator engine for the performance evaluation of Greeklish-to-Greek transliteration systems. © Springer-Verlag Berlin Heidelberg 2010.
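A toy sketch of the stochastic mapping idea: each Greek symbol maps to several Latin renderings with probabilities that, in the paper, would be estimated from the annotated word pairs. The mappings and probabilities below are invented for illustration:

```python
import random

# Hypothetical per-symbol rendering probabilities (made up, not from the paper).
MAPPINGS = {
    "θ": [("th", 0.7), ("8", 0.2), ("u", 0.1)],
    "ξ": [("ks", 0.5), ("x", 0.4), ("3", 0.1)],
    "α": [("a", 1.0)],
}

def transcribe(word, rng=random):
    """Transcribe a Greek word to Greeklish by sampling a Latin rendering
    for each character according to its (assumed) usage probabilities."""
    out = []
    for ch in word:
        options = MAPPINGS.get(ch, [(ch, 1.0)])   # unknown chars pass through
        renderings, weights = zip(*options)
        out.append(rng.choices(renderings, weights=weights)[0])
    return "".join(out)

print(transcribe("θα"))   # e.g. "tha", "8a", or "ua", varying between calls
```

Repeated calls yield the variety of valid Greeklish forms that a deterministic one-to-one mapping cannot produce, which is the behavior the evaluation-data-generator use case relies on.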

Ganchev T.,Artificial Intelligence Group | Mporas I.,Artificial Intelligence Group | Fakotakis N.,Artificial Intelligence Group
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

Aiming at the automatic estimation of a person's height from speech, we investigate the applicability of various subsets of speech features, which were formed on the basis of ranking the relevance and the individual quality of numerous audio features. Specifically, based on the relevance ranking of the large set of openSMILE audio descriptors, we performed selection of subsets of different sizes and evaluated them on the height estimation task. In brief, during the speech parameterization process, every input utterance is converted to a single feature vector consisting of 6,552 parameters. Next, a subset of this feature vector is fed to a support vector machine (SVM)-based regression model, which aims at the direct estimation of the height of an unknown speaker. The experimental evaluation performed on the TIMIT database demonstrated that: (i) the feature vector composed of the top-50 ranked parameters provides a good trade-off between computational demands and accuracy, and (ii) the best accuracy, in terms of mean absolute error and root mean square error, is observed for the top-200 subset. © Springer-Verlag Berlin Heidelberg 2010.
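The rank-then-regress pipeline can be sketched on synthetic data. Correlation ranking and ordinary least squares stand in here for the paper's relevance ranking and SVM regression; all data and dimensions below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 50, 5                        # utterances, features, subset size
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -1.0, 1.0]               # only 3 features carry signal
y = X @ true_w + 0.1 * rng.normal(size=n)   # synthetic "height" target

# Rank features by absolute correlation with the target, keep the top-k
# (a simple stand-in for the paper's relevance ranking of descriptors).
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(d)])
top = np.argsort(corr)[::-1][:k]

# Fit a regressor on the selected subset (least squares instead of SVR).
w, *_ = np.linalg.lstsq(X[:, top], y, rcond=None)
mae = np.mean(np.abs(X[:, top] @ w - y))
print(f"top-{k} features: {sorted(top.tolist())}, MAE = {mae:.3f}")
```

The ranking recovers the informative features, and a small subset already regresses the target accurately, mirroring the abstract's finding that a top-50 subset of 6,552 descriptors is a good accuracy/cost trade-off.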

Breuing A.,Artificial Intelligence Group | Waltinger U.,Artificial Intelligence Group | Wachsmuth I.,Artificial Intelligence Group
Proceedings - 2011 IEEE/WIC/ACM International Conference on Web Intelligence, WI 2011 | Year: 2011

This paper introduces a model harvesting the crowdsourced encyclopedic knowledge provided by Wikipedia to improve the conversational abilities of an artificial agent. More precisely, we present a model for automatic topic identification in ongoing natural language dialogs. On the basis of a graph-based representation of the Wikipedia category system, our model implements six tasks essential for detecting the topical overlap of coherent dialog contributions. Thereby the identification process operates online to handle dialog streams of constantly changing topical threads in real time. The realization of the model and its application to our conversational agent aims to improve human-agent conversations by transferring human-like topic awareness to the artificial interlocutor. © 2011 IEEE.
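A toy sketch of the topical-overlap idea: represent each dialog contribution by the Wikipedia categories of the concepts it mentions and score overlap by shared categories. The mini category table and Jaccard scoring below are illustrative assumptions, not the paper's six tasks:

```python
# Invented fragment of a concept-to-category mapping (in the paper this
# comes from the Wikipedia category graph).
CATEGORIES = {
    "python": {"Programming languages", "Software"},
    "java": {"Programming languages", "Software", "Sun Microsystems"},
    "espresso": {"Coffee drinks", "Italian cuisine"},
}

def topic_overlap(utterance_a, utterance_b):
    """Jaccard similarity between the category sets of two utterances."""
    def cats(words):
        return set().union(*(CATEGORIES.get(w, set()) for w in words))
    a, b = cats(utterance_a), cats(utterance_b)
    return len(a & b) / len(a | b) if a | b else 0.0

print(topic_overlap(["python"], ["java"]))       # same topical thread
print(topic_overlap(["python"], ["espresso"]))   # topic shift: no overlap
```

A high score suggests the new contribution continues the current topical thread; a score near zero signals a topic shift, which is the decision an online dialog-tracking agent has to make in real time.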

Mporas I.,Artificial Intelligence Group | Ganchev T.,Artificial Intelligence Group | Fakotakis N.,Artificial Intelligence Group
Computer Speech and Language | Year: 2010

In the present work we study the appropriateness of a number of linear and non-linear regression methods, employed on the task of speech segmentation, for combining multiple phonetic boundary predictions obtained through various segmentation engines. The proposed fusion schemes are independent of the implementation of the individual segmentation engines as well as of their number. To illustrate the practical significance of the proposed approach, we employ 112 speech segmentation engines based on hidden Markov models (HMMs), which differ in the setup of the HMMs and in the speech parameterization techniques they employ. Specifically, we relied on sixteen different HMM setups and on seven speech parameterization techniques, four of which are recent and whose performance on the speech segmentation task has not yet been evaluated. In the evaluation experiments we contrast the performance of the proposed fusion schemes for phonetic boundary predictions against some recently reported methods. Throughout this comparison, on the TIMIT database, which is well established for the phonetic segmentation task, we demonstrate that the support vector regression scheme achieves more accurate predictions than the other fusion schemes reported so far. © 2009 Elsevier Ltd. All rights reserved.
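Why fusing several engines' boundary predictions helps can be shown with a tiny example. A robust median fusion stands in here for the paper's regression-based schemes, and all numbers are invented:

```python
import statistics

# Five (invented) engines predict the same phonetic boundary, in ms;
# one engine is badly off, as individual segmenters sometimes are.
predictions_ms = [102.0, 98.5, 101.0, 140.0, 99.5]
true_boundary_ms = 100.0

fused = statistics.median(predictions_ms)   # simple robust fusion
errors = [abs(p - true_boundary_ms) for p in predictions_ms]
print(f"fused error: {abs(fused - true_boundary_ms):.1f} ms, "
      f"mean single-engine error: {sum(errors) / len(errors):.1f} ms")
```

The fused estimate ignores the outlier engine, while the average single-engine error is dominated by it; a learned regression fusion, as in the paper, can additionally weight engines by their reliability rather than treating them uniformly.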

Mporas I.,Artificial Intelligence Group | Kocsis O.,Artificial Intelligence Group | Ganchev T.,Artificial Intelligence Group | Fakotakis N.,Artificial Intelligence Group
Expert Systems with Applications | Year: 2010

Aiming at robust spoken dialogue interaction in a motorcycle environment, we investigate various configurations for a speech front-end consisting of speech pre-processing, speech enhancement and speech recognition components. These components are implemented as agents in the Olympus/RavenClaw framework, which is the core of a multimodal dialogue interaction interface of a wearable solution for information support of the motorcycle police force on the move. In the present effort, aiming at optimizing speech recognition performance, different experimental setups are considered for the speech front-end. The practical value of various speech enhancement techniques is assessed and, after analysis of their performance, a collaborative scheme is proposed. In this collaborative scheme, independent speech enhancement channels operate in parallel on a common input and their outputs are fed to the multithread speech recognition component. The outcome of the speech recognition process is post-processed by an appropriate fusion technique, which contributes to a more accurate interpretation of the input. Investigating various fusion algorithms, we identified AdaBoost.M1 as the best-performing one. Utilizing the collaborative fusion scheme based on the AdaBoost.M1 algorithm, significant improvement of the overall speech recognition performance was achieved: accuracy gains of 8.0% in word recognition rate and 5.48% in correctly recognized words, compared to the performance of the best speech enhancement channel alone. The advance offered in the present work reaches beyond the specifics of the present application and can be beneficial to spoken interfaces operating in non-stationary noise environments. © 2009 Elsevier Ltd. All rights reserved.

News Article | January 26, 2016

BOSTON (Reuters) - Marvin Minsky, the artificial intelligence pioneer who helped make machines think, leading to computers that understand spoken commands and beat grandmasters at chess, has died at the age of 88, the Massachusetts Institute of Technology said. Minsky, who died on Sunday, suffered a cerebral hemorrhage, the school said. Minsky had "a monster brain," MIT colleague Patrick Winston, a professor of artificial intelligence and computer science, said in a 2012 interview. He could be intimidating without meaning to be because he was "such a genius," Winston said. Minsky's greatest contribution to computers and artificial intelligence was the notion that neither human nor machine intelligence is a single process. Instead, he argued, intelligence arises from the interaction of numerous processes in a "society of mind" - a phrase Minsky used for the title of his 1985 book. "Marvin basically figured out that thinking isn't a thing but an embarrassing mess of dumb things that work together, as in a society," said Danny Hillis, a former Minsky student and now co-chairman of the Applied Minds technology company. Minsky's insight led to the development of smart machines packed with individual modules that give them specific capabilities, such as computers that play grandmaster-level chess, robots that build cars, programs that analyze DNA and software that creates lifelike dinosaurs, explosions and extraterrestrial worlds for movies. Artificial intelligence is also essential to almost every computer function, from web search to video games, and tasks such as filtering spam email, focusing cameras, translating documents and giving voice commands to smartphones. Minsky was co-founder in 1959 of the now-legendary Artificial Intelligence Group at MIT. He also built the first computer capable of learning through connections that mimic human neurons. 
Minsky lent his expertise to one of culture's most notorious thinking machines - the HAL 9000 computer from the book and film "2001: A Space Odyssey" that turned against its astronaut masters. Minsky served as an adviser for the movie, which he called "the most awesome film I'd ever seen." Minsky, who was born in New York City in 1927, was drawn to science and engineering as a child, enthralled by the works of Jules Verne and H.G. Wells. He also composed music in the style of Bach - an interest he pursued into his later years. Minsky graduated from Harvard in 1950 with a degree in mathematics and in 1954 earned a Ph.D. in math from Princeton University.
