For the last decade, scientists have deployed increasingly capable underwater robots to map and monitor pockets of the ocean, track the health of fisheries, and survey marine habitats and species. In general, such robots are effective at carrying out the low-level tasks specifically assigned to them by human engineers, but assigning those tasks is a tedious and time-consuming process. When deploying autonomous underwater vehicles (AUVs), much of an engineer’s time is spent writing scripts, or low-level commands, to direct a robot through a mission plan.

Now a new programming approach developed by MIT engineers gives robots more “cognitive” capabilities, enabling humans to specify high-level goals while a robot performs its own high-level decision-making to figure out how to achieve them. For example, an engineer may give a robot a list of goal locations to explore, along with any time constraints, as well as physical directives, such as staying a certain distance above the seafloor. Using the system devised by the MIT team, the robot can then plan out a mission, choosing which locations to explore, in what order, within a given timeframe. If an unforeseen event prevents the robot from completing a task, it can choose to drop that task, or reconfigure its hardware to recover from a failure, on the fly.

In March, the team tested the autonomous mission-planning system during a research cruise off the western coast of Australia. Over three weeks, the MIT engineers, along with groups from the Woods Hole Oceanographic Institution, the Australian Center for Field Robotics, the University of Rhode Island, and elsewhere, tested several classes of AUVs and their ability to work cooperatively to map the ocean environment. The MIT researchers tested their system on an autonomous underwater glider and demonstrated that the robot could operate safely among a number of other autonomous vehicles while receiving higher-level commands.
The glider, using the system, was able to adapt its mission plan to avoid getting in the way of other vehicles, while still achieving its most important scientific objectives. If another vehicle was taking longer than expected to explore a particular area, the glider would reshuffle its priorities, and choose to stay in its current location longer, in order to avoid potential collisions.

“We wanted to show that these vehicles could plan their own missions, and execute, adapt, and re-plan them alone, without human support,” says Brian Williams, a professor of aeronautics and astronautics at MIT, and principal developer of the mission-planning system. “With this system, we were showing we could safely zigzag all the way around the reef, like an obstacle course.”

Williams and his colleagues will present the mission-planning system in June at the International Conference on Automated Planning and Scheduling, in Israel.

When developing the autonomous mission-planning system, Williams’ group took inspiration from the “Star Trek” franchise and the top-down command center of the fictional starship Enterprise, after which Williams modeled and named the system. Just as a hierarchical crew runs the fictional starship, Williams’ Enterprise system incorporates levels of decision-makers. For instance, one component of the system acts as a “captain,” making higher-level decisions to plan out the overall mission and deciding where and when to explore. Another component functions as a “navigator,” planning out a route to meet mission goals. The last component works as a “doctor,” or “engineer,” diagnosing and repairing problems autonomously.

“We can give the system choices, like, ‘Go to either this or that science location and map it out,’ or ‘Communicate via an acoustic modem, or a satellite link,’” Williams explains.
“What the system does is, it makes those choices, but makes sure it satisfies all the timing constraints and doesn’t collide with anything along the way. So it has the ability to adapt to its environment.”

The system is similar to one that Williams developed for NASA following the loss of the Mars Observer, a spacecraft that, days before its scheduled insertion into Mars’ orbit in 1993, lost contact with NASA. “There were human operators on Earth who were experts in diagnosis and repair, and were ready to save the spacecraft, but couldn’t communicate with it,” Williams recalls. “Subsequently, NASA realized they needed systems that could reason at the cognitive level like engineers, but that were onboard the spacecraft.” Williams, who at the time was working at NASA’s Ames Research Center, was tasked with developing an autonomous system that would enable spacecraft to diagnose and repair problems without human assistance. The system was successfully tested on NASA’s Deep Space 1 probe, which performed an asteroid flyby in 1999. “That was the first chance to demonstrate goal-directed autonomy in deep space,” Williams says. “This was a chance to do the same thing under the sea.”

By giving robots control of higher-level decision-making, Williams says, such a system would free engineers to think about overall strategy, while AUVs determine for themselves a specific mission plan. Such a system could also reduce the size of the operational team needed on research cruises. And, most significantly from a scientific standpoint, an autonomous planning system could enable robots to explore places that otherwise would not be traversable. For instance, with an autonomous system, robots may not have to be in continuous contact with engineers, freeing the vehicles to explore more remote recesses of the sea.

“If you look at the ocean right now, we can use Earth-orbiting satellites, but they don’t penetrate much below the surface,” Williams says.
“You could send sea vessels which send one autonomous vehicle, but that doesn’t show you a lot. This technology can offer a whole new way to observe the ocean, which is exciting.”

This research was funded in part by the Schmidt Ocean Institute. The underlying technology was supported in part by Boeing Co., the Keck Institute for Space Studies, the Defense Advanced Research Projects Agency, and NASA.
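The captain/navigator/engineer hierarchy described above can be sketched in miniature. The Python below is a toy illustration only, not the Enterprise system's actual design or API; every name (`Goal`, `captain_select`, `navigator_order`, `engineer_replan`) and every number is a hypothetical stand-in:

```python
# Toy sketch of a layered mission planner in the spirit of the
# "captain / navigator / engineer" hierarchy described in the article.
# All names and numbers are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    cost_hours: float  # estimated time to survey this site
    priority: int      # higher = more scientifically important

def captain_select(goals, time_budget):
    """Captain: choose which sites to visit within the time budget,
    greedily preferring high-priority goals."""
    chosen, used = [], 0.0
    for g in sorted(goals, key=lambda g: -g.priority):
        if used + g.cost_hours <= time_budget:
            chosen.append(g)
            used += g.cost_hours
    return chosen

def navigator_order(goals):
    """Navigator: order the chosen sites into a route (here simply
    cheapest-first; a real planner would minimize travel)."""
    return sorted(goals, key=lambda g: g.cost_hours)

def engineer_replan(route, failed_names, time_budget):
    """Engineer: after a fault, drop the affected sites and re-plan
    the remainder within the remaining time budget."""
    surviving = [g for g in route if g.name not in failed_names]
    return navigator_order(captain_select(surviving, time_budget))

goals = [Goal("reef_a", 3, 5), Goal("vent_b", 4, 9), Goal("slope_c", 2, 3)]
plan = navigator_order(captain_select(goals, time_budget=7.0))
```

Dropping a task on the fly then reduces to a single call, e.g. `engineer_replan(plan, {"reef_a"}, 7.0)`, which mirrors the article's point that the vehicle can shed or reshuffle goals without human support.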
Manchester I.R., Australian Center for Field Robotics
Umenberger J., Australian Center for Field Robotics
Proceedings - IEEE International Conference on Robotics and Automation | Year: 2014
We present an algorithm for receding-horizon motion planning using a finite family of motion primitives for underactuated dynamic walking over uneven terrain. The motion primitives are defined as virtual holonomic constraints, and the special structure of underactuated mechanical systems operating subject to virtual constraints is used to construct closed-form solutions and a special binary search tree that dramatically speed up motion planning. We propose a greedy depth-first search and discuss improvements using energy-based heuristics. The resulting algorithm can plan several footsteps ahead in a fraction of a second for both the compass-gait walker and a planar seven-degree-of-freedom, five-link walker. © 2014 IEEE.
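The greedy depth-first search over a finite primitive family can be illustrated with a deliberately simplified one-dimensional model. The footholds, primitives, and costs below are invented for illustration; the paper's actual planner uses virtual-constraint primitives and closed-form dynamics, none of which appears here:

```python
# Schematic greedy depth-first search over a finite family of motion
# primitives, as in receding-horizon footstep planning. Each primitive
# is (step_length, energy_cost); terrain is a list of allowed footholds.
# All values are toy stand-ins for illustration.
PRIMITIVES = [(0.4, 1.0), (0.6, 1.4), (0.8, 2.1)]

def greedy_dfs(x, footholds, depth, tol=0.05):
    """Plan `depth` steps ahead from stance position x, trying
    cheaper (lower-energy) primitives first and backtracking on
    dead ends. Returns the primitive sequence, or None."""
    if depth == 0:
        return []
    for step, cost in sorted(PRIMITIVES, key=lambda p: p[1]):
        x_next = x + step
        if any(abs(x_next - f) <= tol for f in footholds):
            rest = greedy_dfs(x_next, footholds, depth - 1, tol)
            if rest is not None:
                return [(step, cost)] + rest
    return None  # no primitive reaches a valid foothold: backtrack

footholds = [0.4, 0.8, 1.2, 1.8, 2.4]
plan = greedy_dfs(0.0, footholds, depth=3)
```

Trying primitives in cost order is a crude stand-in for the energy-based heuristics the abstract mentions; replacing the linear scan with the paper's precomputed binary search tree is what makes the real planner fast enough to run in a fraction of a second.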
Kinsey J.C., Woods Hole Oceanographic Institution
Yoerger D.R., Woods Hole Oceanographic Institution
Jakuba M.V., Australian Center for Field Robotics
Camilli R., Woods Hole Oceanographic Institution
And 2 more authors.
IEEE International Conference on Intelligent Robots and Systems | Year: 2011
This paper reports on the Sentry autonomous underwater vehicle and its deployment on two cruises in response to the Deepwater Horizon oil spill. The first cruise, in June 2010, coupled Sentry with the TETHYS mass spectrometer to track and localize a subsea hydrocarbon plume at a depth of approximately 1,100 m, extending at least 30 km from the oil spill site. In December 2010, Sentry mapped and photographed deep-sea biological communities for follow-up observation and sampling with the Alvin manned submersible. These cruises demonstrate how robots and novel sensing technologies contributed to the oil spill assessment, and the broader impact of technologies developed for basic research. © 2011 IEEE.
Lawrance N.R.J., University of Sydney; Australian Center for Field Robotics
Sukkarieh S., University of Sydney; Australian Center for Field Robotics
Journal of Guidance, Control, and Dynamics | Year: 2011
Soaring is the process of exploiting favorable wind conditions to extend flight duration. This paper presents an approach for simultaneously mapping and using a wind field for soaring with an unpowered aircraft. Previous research by the authors and others has addressed soaring in known wind fields. However, an adequate estimate of the wind field is required in order to generate energy-gain paths. Conversely, the exploration required to generate a useful map estimate requires energy. This work aims to address these problems simultaneously by attempting to maintain and improve a model-free wind map based on observations collected during the flight and to use the currently available map to generate energy-gain paths. Wind estimation is performed using Gaussian process regression. A path planner generates and selects paths based on energy efficiency and field exploration. The method is tested in simulation with wind fields consisting of single and multiple stationary thermal bubbles. The use of soaring flight with consistent improvement in map quality and accuracy is demonstrated in a number of scenarios. Copyright © 2011 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
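Gaussian process regression, the estimation tool named in the abstract, can be shown in a minimal one-dimensional form. The kernel choice, hyperparameters, and sample values below are illustrative assumptions; the paper's wind maps are higher-dimensional and feed a planner that also rewards exploration:

```python
# Minimal 1-D Gaussian process regression with a squared-exponential
# kernel, illustrating how a wind map could be estimated from samples.
# Hyperparameters and data are illustrative, not from the paper.
import math

def k(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance between positions a and b."""
    return sf * math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, y):
    """Solve A x = y by Gauss-Jordan elimination with partial
    pivoting (A is small and well-conditioned here)."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gp_predict(xs, ws, x_star, noise=1e-6):
    """Posterior mean and variance of the wind at x_star, given noisy
    wind observations ws at positions xs (zero-mean GP prior)."""
    K = [[k(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, ws)                      # K^{-1} w
    ks = [k(a, x_star) for a in xs]
    mean = sum(c * w for c, w in zip(ks, alpha))
    var = k(x_star, x_star) - sum(c * v for c, v in zip(ks, solve(K, ks)))
    return mean, var
```

The posterior variance is what lets a planner trade exploration against energy gain: high variance marks regions where the wind map is still uninformative and a sampling detour may pay off.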
Matheson E., Australian Center for Field Robotics
Brooker G., Australian Center for Field Robotics
Acta Astronautica | Year: 2012
During extravehicular activities (EVAs), pressurised space suits can lead to difficulties in performing hand manoeuvres and fatigue. This is often the cause of EVAs being terminated early, or taking longer to complete. Assistive robotic gloves can be used to augment the natural motion of a human hand, meaning work can be carried out more efficiently with less stress to the astronaut. Lightweight and low profile solutions must be found in order for the assistive robotic glove to be easily integrated with a space suit pressure garment. Pneumatic muscle actuators combined with force sensors are one such solution. These actuators are extremely light, yet can output high forces using pressurised gases as the actuation drive. Their movement is omnidirectional, so when combined with a flexible exoskeleton that itself provides a degree of freedom of movement, individual fingers can be controlled during flexion and extension. This setup allows actuators and other hardware to be stored remotely on the user's body, resulting in the least possible mass being supported by the hand. Two prototype gloves have been developed at the University of Sydney; prototype I using a fibreglass exoskeleton to provide flexion force, and prototype II using torsion springs to achieve the same result. The gloves have been designed to increase the ease of human movements, rather than to add unnatural ability to the hand. A state space control algorithm has been developed to ensure that human initiated movements are recognised, and calibration methods have been implemented to accommodate the different characteristics of each wearer's hands. For this calibration technique, it was necessary to take into account the natural tremors of the human hand which may have otherwise initiated unexpected control signals. 
Prototype I was able to actuate the user's hand in 1 degree of freedom (DOF) from full flexion to partial extension, and prototype II actuated a user's finger in 2 DOF with forces comparable to those of a natural, healthy hand. The minimum mass held by the user on the hand was 240 g, with remote hardware, including a compressed air bottle, having a further mass of 1.6 kg. These results indicate that the design is able to augment human motion in a low-profile, low-mass package, and could be a valuable addition to a space suit during an EVA. © 2012 Elsevier Ltd. All rights reserved.
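The intent-recognition and tremor-calibration step described above can be sketched as a simple threshold-plus-persistence test. The margin, window length, and sensor readings below are invented for illustration and are not the glove's actual control law:

```python
# Toy sketch of movement-intent detection with a tremor deadband,
# loosely in the spirit of the calibration described in the abstract.
# All thresholds and sample values are illustrative assumptions.

def calibrate_tremor(rest_samples, margin=3.0):
    """Set the activation threshold above the wearer's resting tremor,
    measured while the hand is relaxed."""
    mean = sum(rest_samples) / len(rest_samples)
    dev = max(abs(s - mean) for s in rest_samples)
    return mean + margin * dev

def detect_intent(force_stream, threshold, window=3):
    """Flag a movement only when force exceeds the threshold for
    `window` consecutive samples, so brief tremor spikes are ignored.
    Returns the sample index where intent is confirmed, or None."""
    run = 0
    for t, f in enumerate(force_stream):
        run = run + 1 if f > threshold else 0
        if run >= window:
            return t
    return None
```

Requiring a sustained exceedance rather than a single spike is one simple way to keep natural hand tremor from triggering unexpected control signals, which is the failure mode the calibration in the abstract guards against.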