News Article | May 15, 2017
Georgia Institute of Technology researchers have created a team of free-flying robots that obeys the two rules of the air: don't collide or undercut each other. They've also built autonomous blimps that recognize hand gestures and detect faces. Both projects will be presented at the 2017 IEEE International Conference on Robotics and Automation (ICRA) May 29 - June 3 in Singapore. In the first, five swarm quadcopters zip back and forth in formation, then change their behaviors based on user commands. The trick is to maneuver without smacking into each other or flying underneath another machine. If a robot cuts into the airstream of a higher-flying quadcopter, the lower machine must quickly recover from the turbulent air or risk falling out of the sky. "Ground robots have had built-in safety 'bubbles' around them for a long time to avoid crashing," said Magnus Egerstedt, the Georgia Tech School of Electrical and Computer Engineering professor who oversees the project. "Our quadcopters must also include a cylindrical 'do not touch' area to avoid messing up the airflow for each other. They're basically wearing virtual top hats." As long as the Georgia Tech machines avoid flying in the two-foot space below their neighbor, they can swarm freely without a problem. That typically means they dart around each other rather than going low. Ph.D. student Li Wang figured out the size of the "top hat" one afternoon by hovering one copter in the air and sending others back and forth underneath it. Any closer than 0.6 meters (or five times the rotor-to-rotor diameter) and the machines were blasted to the ground. Then he created algorithms to allow them to change formation midflight. "We figured out the smallest amount of modifications a quadcopter must make to its planned path to achieve the new formation," said Wang. "Mathematically, that's what a programmer wants -- the smallest deviations from an original flight plan."
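The cylindrical "top hat" keep-out zone amounts to a simple geometric check between pairs of drones. Here is a minimal sketch in Python; the 0.6 m vertical clearance comes from the article, while the horizontal cylinder radius is an assumed placeholder, not a figure from the paper:

```python
import math

DOWNWASH_CLEARANCE_M = 0.6   # vertical "top hat" height (from the article)
HORIZONTAL_RADIUS_M = 0.3    # assumed cylinder radius, for illustration only

def violates_top_hat(pos_a, pos_b):
    """Return True if drone B sits inside drone A's downwash cylinder.

    pos_a, pos_b: (x, y, z) positions in meters, z pointing up.
    The cylinder hangs *below* drone A: same horizontal footprint,
    extending DOWNWASH_CLEARANCE_M downward.
    """
    dx = pos_a[0] - pos_b[0]
    dy = pos_a[1] - pos_b[1]
    horizontal = math.hypot(dx, dy)
    below = pos_a[2] - pos_b[2]  # positive if B is lower than A
    return horizontal < HORIZONTAL_RADIUS_M and 0 < below < DOWNWASH_CLEARANCE_M

# A drone 0.4 m almost directly below another is inside the keep-out zone:
print(violates_top_hat((0, 0, 1.0), (0.05, 0, 0.6)))  # True
```

A swarm planner would run this check over every pair each control step and treat a violation as a constraint, which is why the drones dart around rather than under each other.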
The project is part of Egerstedt and Wang's overall research, which focuses on easily controlling and interacting with large teams of robots. "Our skies will become more congested with autonomous machines, whether they're used for deliveries, agriculture or search and rescue," said Egerstedt, who directs Georgia Tech's Institute for Robotics and Intelligent Machines. "It's not possible for one person to control dozens or hundreds of robots at a time. That's why we need machines to figure it out themselves." The researchers overseeing the second project, the blimps, 3D-printed a gondola frame that carries sensors and a mini camera. It attaches to either an 18- or 36-inch diameter balloon. The smaller blimp can carry a five-gram payload; the larger one supports 20 grams. The autonomous blimps detect faces and hands, allowing people to direct the flyers with movements. All the while, the machine gathers information about its human operator, identifying everything from hesitant glares to eager smiles. The goal is to better understand how people interact with flying robots. "Roboticists and psychologists have learned many things about how humans relate to robots on the ground, but we haven't created techniques to study how we react to flying machines," said Fumin Zhang, the Georgia Tech associate professor leading the blimp project. "Flying a regular drone close to people presents a host of issues. But people are much more likely to approach and interact with a slow-moving blimp that looks like a toy." The blimps' circular shape makes them harder to steer with manual controllers, but allows them to turn and quickly change direction. This is unlike the more popular zeppelin-shaped blimps commonly used by other researchers. Zhang has filed a request with Guinness World Records for the smallest autonomous blimp. 
He sees a future where blimps can play a role in people's lives, but only if roboticists can determine what people want and how they'll react to a flying companion. "Imagine a blimp greeting you at the front of the hardware store, ready to offer assistance," Zhang said. "People are good at reading people's faces and sensing if they need help or not. Robots could do the same. And if you needed help, the blimp could ask, then lead you to the correct aisle, flying above the crowds and out of the way."
News Article | January 19, 2016
In a pair of projects announced this week, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) demonstrated software that allows drones to stop on a dime to make hairpin movements over, under, and around some 26 distinct obstacles in a simulated "forest." One team's video shows a small quadrotor doing donuts and figure-eights through an obstacle course of strings and PVC pipes. Weighing just over an ounce and clocking in at 3 and a half inches from rotor to rotor, the drone can fly through the 10-square-foot space at speeds upwards of 1 meter per second. The team's algorithms, which are available online and were previously used to plan footsteps for CSAIL's Atlas robot at last year's DARPA Robotics Challenge, segment space into "obstacle-free regions" and then link them together to find a single collision-free route. "Rather than plan paths based on the number of obstacles in the environment, it's much more manageable to look at the inverse: the segments of space that are 'free' for the drone to travel through," says recent graduate Benoit Landry '14 MNG '15, who was first author on a related paper just accepted to the IEEE International Conference on Robotics and Automation (ICRA). "Using free-space segments is a more 'glass-half-full' approach that works far better for drones in small, cluttered spaces." In a second CSAIL project, PhD student Anirudha Majumdar showed off a fixed-wing plane that is guaranteed to avoid obstacles without any advanced knowledge of the space, and even in the face of wind gusts and other dynamics. His approach was to pre-program a library of dozens of distinct "funnels" that represent the worst-case behavior of the system, calculated via a rigorous verification algorithm. "As the drone flies, it continuously searches through the library to stitch together a series of paths that are computationally guaranteed to avoid obstacles," says Majumdar, who was lead author on a related technical report.
"Many of the individual funnels will not be collision-free, but with a large-enough library you can be certain that your route will be clear." Both papers were co-authored by MIT professor Russ Tedrake; the ICRA paper, which will be presented in May in Sweden, was also co-written by PhD students Robin Deits and Peter R. Florence. A bird might make it seem simple, but flight is a highly complicated endeavor. A flying object can change position in six distinct directions—forward/backward ("surge"), up/down ("heave"), left/right ("sway"), and by rotating front-to-back ("pitch"), side-to-side ("roll"), and horizontally ("yaw"). "At every moment in time there are 12 distinct numbers needed to describe where the system is and how quickly it is moving, on top of simultaneously tracking other objects in the space that could get in your way," says Majumdar. "Most techniques typically can't handle this sort of complexity in real-time." One common motion-planning approach is to sample the whole space through algorithms like the "rapidly-exploring random tree." Although often effective, sampling-based approaches are generally less efficient and have trouble navigating small gaps between obstacles. Landry's team opted to use Deits' new free-space-based technique, which he calls the "Iterative Regional Inflation by semidefinite programming" algorithm (IRIS). They then coupled IRIS with a "mixed-integer semidefinite program" (MISDP) that assigns specific flight movements to each "free-space region" and then executes the full plan. To sense its surroundings, the drone used motion-capture optical sensors and an on-board inertial measurement unit (IMU) that help estimate the precise positioning of obstacles. "I'm most impressed by the team's ingenious technique of combining on- and off-board sensors to determine the drone's location," says Jingjin Yu, an assistant professor of computer science at Rutgers University.
"This is key to the system's ability to create unique routes for each set of obstacles." In its current form, the MISDP approach is not optimized for real-time planning; it takes an average of 10 minutes to create a route for the obstacle course. But Landry says that making certain sacrifices would let them generate plans much more quickly. "For example, you could define 'free-space regions' more broadly as links between areas where two or more free-space regions overlap," says Landry. "That would let you solve for a general motion-plan through those links, and then fill in the details with specific paths inside of the chosen regions. Currently we solve both problems at the same time to lower energy consumption, but if we wanted to run plans faster that would be a good option." Majumdar's software, meanwhile, generates more conservative plans, but can do so in real-time. He first developed a library of 40 to 50 trajectories that are each given an outer bound that the drone is guaranteed to remain within. These bounds can be visualized as "funnels" that the planning algorithm chooses between to stitch together a sequence of steps that allow the drone to plan its flying on the fly. A flexible approach like this comes with a high level of guarantees that the software will work, even in the face of uncertainties with both the surroundings and the hardware itself. The algorithm can easily be extended to drones of different sizes and payloads, as well as ground vehicles and walking robots. As for the environment, imagine the drone choosing between making a forceful roll maneuver that will avoid a tree by a large margin, versus flying straight and avoiding a tree by a small amount. "A traditional approach might prefer the first since avoiding obstacles by a significant amount seems 'safer,'" Majumdar says. "But a move like that actually may be riskier because it's more susceptible to wind gusts.
Our method makes these decisions in real-time, which is critical if we want drones to move out of the labs and operate in real-world scenarios." CSAIL researchers have been working on this problem for many years. Professor Nick Roy has been honing algorithms for drones to develop maps and avoid objects in real-time; in November a team led by PhD student Andrew Barry published a video demonstrating algorithms that allow a drone to dart between trees at speeds of 30 miles per hour. While these two drones cannot travel quite as fast as Barry's, their maneuvers are generally more complex, meaning that they can navigate in smaller, denser environments. "Enabling dynamic flight of small, off-the-shelf quadcopters is a marvelous achievement, and one that has many potential applications," Yu says. "With additional development, I can imagine these machines being used as probes in hard-to-reach places, from exploring caves to doing search-and-rescue in collapsed buildings." Landry, who now works for 3D Robotics in California, is hopeful that other academics will build on and refine the researchers' work, which is all open-source and available on GitHub. "A big challenge for industry is determining which technologies are actually mature enough to use in real products," Landry says. "The best way to do that is to conduct experiments that focus on all of the corner cases and can demonstrate that algorithms like these will actually work 99.999 percent of the time." More information: Aggressive Quadrotor Flight through Cluttered Environments Using Mixed Integer Programming. groups.csail.mit.edu/robotics-center/public_papers/Landry15b.pdf
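Landry's idea of planning over the links between overlapping free-space regions can be pictured with a toy 2D sketch: treat each obstacle-free region as an axis-aligned box and search for a chain of overlapping boxes from start to goal. This is only a stand-in for the actual pipeline (IRIS produces general convex regions, and the mixed-integer program assigns trajectory segments to them); the boxes and coordinates below are invented for illustration:

```python
from collections import deque

def boxes_overlap(a, b):
    """Boxes are (xmin, ymin, xmax, ymax); True if they share area."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def region_route(boxes, start_idx, goal_idx):
    """Breadth-first search for a sequence of overlapping free-space boxes
    linking the start region to the goal region; returns indices or None."""
    adj = {i: [j for j in range(len(boxes))
               if j != i and boxes_overlap(boxes[i], boxes[j])]
           for i in range(len(boxes))}
    queue, parent = deque([start_idx]), {start_idx: None}
    while queue:
        cur = queue.popleft()
        if cur == goal_idx:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for nxt in adj[cur]:
            if nxt not in parent:
                parent[nxt] = cur
                queue.append(nxt)
    return None

free_boxes = [(0, 0, 2, 2), (1, 1, 3, 3), (2, 2, 4, 4)]  # an overlapping chain
print(region_route(free_boxes, 0, 2))  # [0, 1, 2]
```

Once a region sequence is fixed, a second, cheaper solve can fill in the detailed path inside each chosen region, which is exactly the speed-for-optimality trade-off Landry describes.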
News Article | January 13, 2016
Indian credit rating agency ICRA expects wind energy capacity addition during the current fiscal year to grow 20% over the last year, to about 2,800 MW, driven by both the IPP and non-IPP segments. In the rating agency’s view, the demand drivers for the wind energy sector remain favourable in the long run. This is mainly aided by strong policy support in place at the Centre and in key states which have wind potential, a favourable regulatory framework in the form of renewable purchase obligation (RPO) regulations, as well as the cost competitiveness of wind-based energy vis-à-vis conventional energy sources. The National Institute of Wind Energy (NIWE), Chennai, India, has launched two online maps, one each for wind and solar radiation. The Wind Energy Resources Map of India has been launched at 100 meters above the ground, while the solar radiation map has been set up at ground level on the online Geographic Information System platform.
News Article | February 20, 2017
The Praemium Erasmianum Foundation has awarded the 2017 Erasmus Prize to the Canadian cultural sociologist Michèle Lamont (1957). She is a professor of sociology at Harvard University, a professor of American and African-American studies, and holds the Robert I. Goldman Chair of European Studies. She received the prize for her sustained contribution to social science research into the relationship between knowledge, power and diversity. Lamont has devoted her academic career to studying how cultural conditions shape inequality and social exclusion, and how stigmatized groups find ways to preserve their dignity and self-worth. Her academic interests focus on how class and ethnicity determine the way people perceive reality, and on how the well-being of minorities affects the well-being of society as a whole. Through innovative international comparative research, she shows that disadvantaged groups can attain new forms of self-worth and respect. In searching for recipes for success, she examines the cultural factors and institutional structures that can create more resilient societies. She also shows that diversity often leads to stronger and more productive relationships, both in society and in academia. Lamont turns her critical gaze on her own field as well, analyzing the ideas about value and quality that underpin the formation of judgment in the social sciences. Her research into the motives underlying this discussion is of particular importance at a time when the authority of intellectuals and their claim to truth are increasingly contested.
With her interdisciplinary approach, critical perspective and international outlook, Lamont exemplifies excellence in diversity, in both research and society. As such, she embodies the Erasmian values that the Foundation cherishes and defends. Michèle Lamont was born in Toronto and grew up in Quebec. After studying in Ottawa and Paris, she began her academic career at Stanford and Princeton in the United States, before moving to Harvard University in 2003. Lamont has written dozens of books and articles on subjects such as culture, social inequality and exclusion, racism and ethnicity, institutions and science. In her most recent book, "Getting Respect" (2016), she describes how various stigmatized groups respond to the everyday experience of discrimination. Her previous book, "How Professors Think" (2009), examines how academia determines what counts as valuable knowledge. An internationally renowned sociologist, Lamont has played a leading role in connecting the European and American research communities within the social sciences. In 2002, she co-founded the Successful Societies Program at the Canadian Institute for Advanced Research (ICRA, from the French "Institut canadien de recherches avancées"). In 2016, she received an honorary doctorate from the University of Amsterdam. The Erasmus Prize is awarded annually to a person or institution that has made an exceptional contribution to the humanities, the social sciences or the arts. His Majesty the King is the patron of the Foundation. The prize consists of €150,000 in cash and will be presented in November 2017. Alongside the presentation of the Erasmus Prize, a varied program of activities will be organized around Michèle Lamont and the theme "Knowledge, Power and Diversity".
News Article | December 27, 2016
Consumption of seafood is regarded as healthy since it contains high-quality proteins, vitamins and omega-3 polyunsaturated fatty acids. But it might also put us at risk of exposure to environmental pollutants. How much do we know about our eating choices? One answer could come from a personal fish calculator designed by European researchers to understand how much of our diet is healthy. It is very simple: you select your age range and the amount and species of seafood consumed per week. Have you eaten spaghetti with clams and fried sardines? The calculator will quickly work out your exposure to methylmercury and other pollutants. The calculator is the brainchild of the ECSafeSEAFOOD project, which has analysed the prevalence of marine toxins, microplastics and other chemical contaminants of growing concern found in seafood sold in supermarkets across Europe. Contaminants of emerging concern are substances for which no maximum levels have been laid down in EU legislation, or for which the existing levels require revision. "Sensitive, rapid and cost-effective screening methods were validated in a large set of seafood samples. Overall, the levels of contaminants in seafood were low, and there are no risks for consumers. The only pollutants that may represent a concern for those who consume a lot of seafood were methylmercury and PBDE99 (industrial contaminants)," says António Marques from the Portuguese Marine and Atmospheric Institute (IPMA) in Lisbon, Portugal. "The exposure to these contaminants through seafood needs to be more finely assessed. Such information is crucial for the European food safety authorities to adjust the legislation," he adds. For example, no limits have been established for methylmercury in food. In the Po estuary in Italy, which is one of the top sites for mollusc farming in Europe, the scientists also found the highest levels of pharmaceuticals such as the psychiatric drugs venlafaxine and citalopram, and the antibiotic azithromycin.
Other contaminants raising concern are endocrine disrupters (EDCs), chemicals that may interfere with the body's hormonal system and cause various adverse effects. "Spanish consumers had the highest intake of endocrine disrupting compounds from seafood consumption, though the assessed intake was still below the tolerable weekly intake," explains Sara Rodriguez-Mozaz, a researcher at the Catalan Institute of Water Research (ICRA) in Girona, Spain. "Methylparaben, triclosan and bisphenol A were the most frequently detected EDCs." Other substances investigated by the researchers are microplastics (plastic particles smaller than 5 mm), which may act as a vector for chemical contaminants. The research revealed that up to 36.5% of the fish examined and 83% of crustaceans contained microplastics. The main challenges in the project were related to finding the right analytical methods. "Pharmaceuticals and EDCs are found at very low levels in seafood, close to the current limits of detection of conventional analytical methodologies," says Rodriguez-Mozaz. The scientists collated their results in a database focusing "on unregulated contaminants that give rise to concern from an environmental and public health point of view". They invite "policy makers" to use their study "to help inform policy and advisory guidelines" and "authorities to highlight the deficits in seafood contaminant research". However, there is a happy ending. Despite the increase of chemicals in the marine environment, the low levels in seafood so far mean that we can still enjoy seafood during our Christmas holidays without worrying too much.
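The arithmetic behind a calculator like the one described above is straightforward: weekly contaminant intake per kilogram of body weight, compared against a tolerable weekly intake (TWI). A minimal sketch follows; the concentration values and the TWI are placeholders invented for illustration, not ECSafeSEAFOOD figures:

```python
# Assumed µg of methylmercury per gram of seafood (illustrative values only)
METHYLMERCURY_UG_PER_G = {
    "sardine": 0.02,
    "clam": 0.01,
    "tuna": 0.30,
}
TWI_UG_PER_KG = 1.3  # assumed tolerable weekly intake, µg per kg body weight

def weekly_exposure(consumption_g, body_weight_kg):
    """consumption_g: {species: grams eaten per week}.
    Returns weekly exposure in µg per kg of body weight."""
    total_ug = sum(grams * METHYLMERCURY_UG_PER_G[species]
                   for species, grams in consumption_g.items())
    return total_ug / body_weight_kg

diet = {"sardine": 150, "clam": 100, "tuna": 100}  # grams per week
exposure = weekly_exposure(diet, body_weight_kg=70)
print(f"{exposure:.3f} µg/kg bw ({'within' if exposure <= TWI_UG_PER_KG else 'above'} TWI)")
```

A real assessment would use per-species concentrations measured in market samples and age-specific reference intakes, which is precisely the data the project's database collects.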
News Article | November 11, 2015
A favourable regulatory environment and eased lending are expected to deliver better than expected capacity addition in India’s wind energy sector. Indian rating agency ICRA expects that 2.8 GW of wind energy capacity will be added in the current financial year, between April 2015 and March 2016. This will be 20% higher than the capacity added during the previous financial year, and 16.7% more than the targeted capacity addition of 2.4 GW. During the first six months of the financial year, 933 MW of wind energy capacity has already been added. The capacity addition forecast by ICRA is lower than the earlier estimates of the Indian Wind Turbine Manufacturers Association, which had predicted 3.5 GW of capacity addition this financial year. The driving forces behind this enhanced capacity are the same policies we have discussed several times already on CleanTechnica. Apart from the renewable purchase obligation, attractive feed-in tariffs, financial benefits like generation-based incentives, and accelerated depreciation, the factors that have changed in favour of the project developers are falling lending rates and capital costs. The most recent of the positive measures implemented by the Government was the introduction of tax incentives for the manufacture of wind turbine equipment. Wind energy has been the leading technology in terms of installed capacity for several years now as costs continued to fall due to increased competition among turbine manufacturers, which eventually also started providing end-to-end services, including project development, operation, and maintenance. The Indian wind energy sector is expected to reach new heights as the Government recently approved the national offshore wind energy policy. The policy has opened up potentially the entire 13,000 kilometres of the Indian coastline for wind energy projects, with the first auctions of offshore blocks for the projects to take place in January 2016.
By March 2022 the Government aims to have an operational wind energy capacity of 60 GW. Several leading project developers have set ambitious goals to set up wind energy projects over the next 5-7 years. The total capacity addition commitment received by the Indian Government from such developers is around 48 GW.
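The capacity figures quoted in this article are mutually consistent, which a couple of lines of arithmetic confirm (all inputs are the article's own numbers, in GW):

```python
target = 2.4                 # financial-year capacity-addition target
forecast = target * 1.167    # ICRA forecast: 16.7% above the target
previous = forecast / 1.20   # forecast is 20% higher than last year's addition

print(round(forecast, 2))    # ~2.8 GW, matching ICRA's 2.8 GW forecast
print(round(previous, 2))    # ~2.33 GW added in the previous financial year
```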
News Article | February 11, 2016
A team of researchers led by Prof. Davide Scaramuzza has developed a way to train drones to follow forest trails in an effort to assist search and rescue missions for lost hikers. According to the research, Prof. Scaramuzza's team figured out a machine learning method based on Deep Neural Networks (DNNs) that enables an unsupervised drone to determine the direction of a path using an on-board camera. The system was created by first setting up a hiker with three cameras that cover about 180 degrees of visual information: one positioned straight ahead, one placed 30 degrees to the left and the other 30 degrees to the right, so that there is a slight overlap in the captured video. The hiker was instructed to always look ahead in the direction of the path, since the front camera provides the information for the trail. The raw data used was eight hours' worth of footage covering approximately 7 kilometers of hiking trail at altitudes between 300 and 1,200 meters. The footage was taken at different times of the day and under different weather conditions. The results were surprising: when tested, the autonomous quadcopter was able to navigate a completely new trail and stay on course as well as, and sometimes even better than, humans. The same path and test were run with two humans against the drone to determine how effective the DNN-based machine learning was, and on one test the quadcopter was successful 85.2 percent of the time as opposed to the two people, who were accurate 86.5 and 82 percent of the time. A second test with different conditions resulted in the quadcopter being accurate 95 percent of the time, while the two people were 91 and 88 percent accurate. "Now that our drones have learned to recognize and follow forest trails, we must teach them to recognize humans," Prof. Scaramuzza said.
A drone that could recognize proper trails and humans will certainly be of great assistance to rescue operations, more so if it can also detect vital signs like the Lynx 6-A. The research, titled "A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots," appeared in IEEE Robotics and Automation Letters (RA-L) and will be presented during the IEEE International Conference on Robotics and Automation (ICRA'16) in May.
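The three-camera recording trick effectively labels every frame as "trail veers left", "trail ahead" or "trail veers right", and at flight time the trained network's three class probabilities must be turned into a control command. Here is a minimal illustrative sketch of one such mapping; the gain and the probability-to-yaw rule are invented for illustration, not taken from the paper:

```python
def steer_from_probabilities(p_left, p_straight, p_right, gain_deg=30.0):
    """Map the classifier's three class probabilities to a yaw command.

    p_left means "the trail is to the left", so the drone should yaw left.
    Returns a yaw command in degrees (positive = turn right).
    """
    return gain_deg * (p_right - p_left)

print(steer_from_probabilities(0.75, 0.0, 0.25))  # -15.0 (yaw left)
print(steer_from_probabilities(0.25, 0.5, 0.25))  # 0.0 (keep heading)
```

Weighting by probabilities rather than taking the argmax class gives smoother steering when the network is uncertain near bends.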
News Article | April 18, 2016
Until now, 3D printing has been limited to various types of solids; however, a new study has shown how to print highly complex hydraulic systems from both solids and liquids, making it easier to build labs on a chip for medical and pharmaceutical uses, and liquid channels for chemical testing and analysis. In what could be a significant move towards the rapid fabrication of functional machines, the resulting robots also have potential applications in areas such as facilitating disaster relief in dangerous situations. Scientists from the Computer Science and Artificial Intelligence Laboratory at MIT automatically produced 3D-printed dynamic robot bodies and parts that needed no previous assembly, using a commercially available multi-material 3D inkjet printer in a single-step process. Using a 3D printer to produce robots is a viable alternative to doing so by hand, which requires huge effort, or through automation, which has not yet reached the necessary level of sophistication. This “printable hydraulics” approach, which provides a design template that can be tailored for robots of different sizes, shapes and functions, was used to produce a small six-legged robot with a dozen hydraulic pumps embedded within it, requiring only the addition of electronics and a battery before being operational. As team leader Daniela Rus points out, “3D printing offers a way forward, allowing us to automatically produce complex, functional, hydraulically powered robots that can be put to immediate use”. Such printable robots could also be quickly and cheaply produced, and have fewer electronic components than standard robots. A paper on their research was recently accepted for the 2016 IEEE International Conference on Robotics and Automation (ICRA). In the technique, the printer deposited individual droplets of material only 20–30 microns in diameter, layer by layer from the bottom up, with different materials being deposited in different parts of each layer.
A high-intensity UV light then solidified the photopolymer while leaving the non-curing liquid intact. The printer can use many types of material, although each layer is made up of a photopolymer that is solid and a non-curing material that stays liquid. They showcased the technique by 3D printing linear bellows actuators, gear pumps and soft grippers, as well as the hexapod robot. The hexapod weighed about 1.5 pounds and was under six inches long, and moved using a single DC motor turning a crankshaft that pumps fluid to the robot’s legs. It took 22 hours to print, which is not long considering its complexity, but the team hopes this can be made faster by improving the engineering and resolution of the printers.
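The per-layer, per-droplet material decision can be pictured with a toy slicer: for every droplet site in a layer, choose curable photopolymer (solid after the UV pass) or non-curing liquid. The rectangular fluid-channel geometry below is invented purely for illustration:

```python
def material_at(x, y, z, channel=(2, 4, 1, 3)):
    """Return 'liquid' inside the channel bounds, else 'solid'.

    channel: (xmin, xmax, ymin, ymax) of an internal fluid channel that,
    in this toy part, runs vertically through every layer z.
    """
    xmin, xmax, ymin, ymax = channel
    if xmin <= x <= xmax and ymin <= y <= ymax:
        return "liquid"
    return "solid"

# Slice one 6x5 layer into a droplet map (S = solid, L = liquid):
layer = ["".join("L" if material_at(x, y, 0) == "liquid" else "S"
                 for x in range(6)) for y in range(5)]
print("\n".join(layer))
```

A real slicer works from the part's CAD geometry rather than hard-coded bounds, but the principle is the same: the liquid is a printed material like any other, deposited droplet by droplet.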
News Article | November 18, 2016
Yale roboticist Aaron Dollar sees robots as helpers in our daily lives, performing tasks like setting the table or assisting with the assembly of your new bookcase. But getting to the point where robots can work in the unstructured environment of our homes (as opposed to industrial settings) will take a major technological leap and a massive coordination of efforts from roboticists around the globe. The living room has been called the last frontier for robots, but first the robotics community needs some standards that everyone can agree on. Enter a suitcase-sized box containing 77 objects: hammers, a cordless drill, a can of Spam, a nine-hole peg test. As ordinary as they may seem, these carefully curated household items could be the future of a new kind of standardization for robotics. Known as the Yale-CMU-Berkeley (YCB) Object and Model Set, the collection is intended to provide universal benchmarks for labs specializing in robotic manipulation and prosthetics around the world. Dollar, an associate professor of mechanical engineering & materials science, came up with the idea about two years ago. He wants to bring a level of specificity and universality to manipulation tasks in robotics research. For instance, a research paper today might describe a particular task as "robotic hand grasps hammer." Are we talking about a big hammer or a little one? We don't know, and that's a problem if you work in a robotics lab looking to replicate the research. With the YCB Set, everyone's on the same page: in this hypothetical case, by working with the same 23.45-ounce Stanley hammer included in the set. In addition to the objects, the project also provides five examples of manipulation tasks (such as pouring water from a pitcher into a mug, or setting the table) and benchmarks for each. A website for the project also allows other laboratories to expand on these tasks by contributing their own protocols and benchmarks.
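The point about specificity ("which hammer?") is essentially about shared identifiers. A minimal sketch of that idea, with hypothetical object IDs and fields (only the hammer's 23.45-ounce figure comes from the article; the other values are placeholders, and none of this reflects the official YCB data files):

```python
# Hypothetical shared vocabulary: labs reference exact objects by ID
# with agreed metadata, instead of vague phrases like "a hammer".

OZ_TO_G = 28.3495  # ounces to grams

ycb_objects = {
    "stanley_hammer": {"category": "tool", "mass_g": round(23.45 * OZ_TO_G, 1)},
    "spam_can": {"category": "food", "mass_g": 340.0},      # placeholder mass
    "water_pitcher": {"category": "kitchen", "mass_g": 178.0},  # placeholder mass
}

def describe(obj_id):
    """Turn an object ID into an unambiguous benchmark description."""
    obj = ycb_objects[obj_id]
    return f"{obj_id} ({obj['category']}, {obj['mass_g']} g)"

print(describe("stanley_hammer"))  # stanley_hammer (tool, 664.8 g)
```

A paper that says "grasped `stanley_hammer`" can then be replicated exactly, which is the ambiguity the set was designed to remove.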
When laboratories work solely by their own standards and protocols, Dollar said, there's often an unconscious bias toward that lab's particular strengths. Universal standards would provide a more impartial way to evaluate results. The YCB Set arrives at a time when the robotics field has reached a critical point. Robots currently do well in structured environments, such as factory settings, where they perform and repeat a very limited number of tasks. "In a structured environment, a robot sees exactly the same object in exactly the same place," Dollar said. "It's a relatively straightforward thing to get robots to operate in those environments because you just have to program it to do one thing. And you can always program something to do one thing well." But Dollar and other roboticists have something more challenging in mind for their creations. "People in the robotics community today are thinking about robots that can work in daily environments, and in the home," he said. "That's sort of the flip side of assembly lines." Standards have long been a crucial part of the advancement of science. Until the 19th century, the schedules of individual communities were governed by municipal clocks. Today, thanks to globally coordinated time (and increasingly accurate atomic clocks), we have personal GPS systems and driverless cars. For centuries, people used their hands and feet to measure the lengths and heights of things. As measurement was standardized around the world, the International Committee for Weights and Measures stored metal artifacts in a climate-controlled vault in Paris, each serving as the standard bearer for a particular unit of measurement. More recently, those metal objects have been superseded by even more precise standards based on the speed of light (which brings us back to atomic clocks and the standardization of time).
In a sense, the 77-item box is the robotics equivalent of the Paris vault or the atomic clocks, and may usher in an era in which laboratories communicate better and advance the field at a faster pace. It's a critical step, since things get tricky as robots move away from the assembly line. Dollar specializes in robotic manipulation, or grasping. As humans, we often take for granted the complexity of something as seemingly simple as picking up a fork and using it. To build robots that can perform not just one of these tasks but many, individual labs can no longer work as isolated villages operating on their own measurements. They need a universal standard. That's where the box of 77 items comes into play. The objects are the sort of things you find around the house. Certainly, it's easy enough for a robotics lab to find its own objects to manipulate. But for the research to move forward, the results of that lab's work have to be comparable with those of other labs. "When we have a new idea for a new component or hand idea, we want to test it out and see how well it works," he said. "With quantitative evaluation, we can see how things stack up compared to other ideas." There have been other attempts to standardize manipulation tests, but Dollar said they don't capture the high-level functionality that the YCB Set demonstrates. It's only recently, roboticists say, that such standards would have much purpose. The field simply wasn't sophisticated enough until recent years to benefit from such standardization. It's a different story now, though, as integrated systems require the work of multiple disciplines to create a robot that can do something like put away dishes. "As robots move out of the lab and into the real world, it gets harder to understand their capabilities and limitations," said Robert Howe, the Abbott and James Lawrence Professor of Engineering at Harvard.
"In a factory where everything is carefully arranged, you can rigorously test how they work, but in my kitchen I have 20 kinds of coffee mugs. So it's a big puzzle how to characterize and compare robots. The approach that Aaron is taking is a promising one." Howe notes that even a seemingly simple grasping task requires very advanced engineering. You need to plan the hand and the arm so that it doesn't knock over other things, the contacts must be carefully controlled—and then you have to wrap up all these coordinated elements into a single system that works fast enough to be useful. His lab is concerned with tactile sensing, which is one piece of the puzzle, but the same task could also require the input of computer vision specialists. "That's why the YCB Set is clever," he said. Now a lab can score how well they do on a certain task, and other labs can try to match or beat that score. After Dollar had the idea for the standardized set, he brought on board two former colleagues in the robotics community, Dr. Siddhartha Srinivasa from Carnegie-Mellon University and Dr. Pieter Abbeel of UC Berkeley ("These are people I knew I could work well with and make something happen"). And he assigned Berk Calli, a postdoctoral associate in his lab, to take the lead on the project. Calli, who came to Yale in 2014, said the lack of reproducibility in robotics is a problem long recognized among researchers in the field. It's very rare, he said, for a paper to compare just two algorithms from other labs. "If you can get five or 10 groups using one single protocol to compare their algorithms, that would be a huge step," he said. "It will be a huge thing in terms of quantification and comparison in robotics, because this has never been done before." It's gotten to the point, Calli said, that the field doesn't have much choice but to take on the matter of standardization. "There's like a pool of algorithms and no one knows which performs the best. 
And we cannot proceed further without knowing what is working and what is not." Ideally, the YCB Set will take on a life of its own. The objects and example tasks provided are just a beginning. Manipulation research progresses quickly and covers a wide range of technical interests and research approaches, so the five manipulation tests Dollar and his team provide are only examples of protocols that labs can use with the objects. That's why on the YCB Object and Model Set website, the research team has also provided a framework for other labs to contribute their own manipulation tests and benchmarks. There, researchers can see protocols from other labs and have a forum for discussion. "The main thing is just getting other researchers to propose their own protocols and get people to utilize them," Dollar said. To pick the right objects, the researchers combed through numerous robotics papers to get a sense of what kinds of items were most commonly used in manipulation tests. They visited stores for additional ideas. "The nature of this project is to apply to and span a wide range of research interests," Dollar said. Preference was also given to objects that are durable and likely to remain in circulation without much change in the future. Standard consumer objects were chosen to keep the costs down. Each set costs about $350. The objects are divided into categories. The food group, for example, includes a cereal box, a cylinder of Pringles chips and a can of Spam. Tools range from small nails to wood blocks and a cordless drill. Dollar said he aimed for a wide variety of sizes (the smallest item is a washer, the largest a water pitcher). Some items have simple geometric shapes that are relatively easy to grasp, while the complex shapes of others pose a greater challenge for robotic hands. 
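The "score a task, then try to beat that score" workflow the article describes can be sketched simply. This is an assumed scoring rule (plain success rate over a fixed trial count), not the official YCB benchmark definition, and the lab names and numbers are invented for illustration:

```python
# Hedged sketch of cross-lab comparison under one shared protocol:
# each lab runs the same task, with the same objects, for the same
# number of trials, and reports a comparable score.

def benchmark_score(successes, trials):
    """Success rate over a fixed trial count, as a percentage."""
    if trials <= 0:
        raise ValueError("trials must be positive")
    if not 0 <= successes <= trials:
        raise ValueError("successes must be between 0 and trials")
    return 100.0 * successes / trials

# Two hypothetical labs running the same 20-trial pouring protocol.
lab_results = {"lab_a": (17, 20), "lab_b": (14, 20)}
for lab, (s, n) in lab_results.items():
    print(lab, benchmark_score(s, n))  # lab_a 85.0, lab_b 70.0
```

Because both labs used identical objects and trial counts, the two numbers are directly comparable, which is exactly what ad-hoc, per-lab test setups prevent.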
The items also include various task-based objects: a "box-and-blocks test" in which wooden cubes are to be placed in a box; a toy airplane that can be assembled and disassembled; and a variety of Lego pieces for building structures. The set also comes with a digital timer to measure how quickly certain tasks are performed. Finding all the right parts for the YCB Set is one thing, but for the project to succeed, Dollar needed to convince other labs to adopt it. He and his associates have been busy distributing the sets at international robotics conferences. The YCB Set debuted at the IEEE International Conference on Robotics and Automation (ICRA) in May of 2015. Dollar said the reaction was "very positive" and they received about 50 requests for the sets, which are packaged specifically for easy travel. Researchers can also order the sets and have them shipped to their labs. About 100 robotics labs around the world now have the YCB Set. "We want to get this into as many hands as possible, because that's the only way it's really going to stick," Dollar said. Yu Sun, an associate professor in the Department of Computer Science and Engineering at the University of South Florida, said his lab is "one of the lucky ones" to receive the set. He said the YCB Set was featured in a grasping competition that he organized for the International Conference on Intelligent Robots and Systems in South Korea this October. His own lab has already produced some manipulation data using the objects. "The good thing about using Aaron Dollar's object set is that other people will be able to use our data sets because they have the same objects and they can apply them to their own algorithms," he said. "Robotics deals with physical conditions, and if you can't replicate the physical environment, the data won't be useful."