
Fleischer L., Computer Science
Workshop on Analytic Algorithmics and Combinatorics 2010, ANALCO 2010 | Year: 2010

Inspired by problems in data center scheduling, we study the submodularity of certain scheduling problems as a function of the set of machine capacities and the corresponding implications. In particular, we:
• give a short proof that, as a function of the excess vector, maximum generalized flow is submodular and minimum cost generalized flow is supermodular;
• extend Wolsey's approximation guarantees for submodular covering problems to a new class of problems we call supermodular packing problems;
• use these results to get tighter approximation guarantees for several data center scheduling problems.
© Copyright (2010) by SIAM: Society for Industrial and Applied Mathematics. All rights reserved.
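
For reference, and not quoted from the paper: a function f defined on a lattice, for example vectors of machine capacities or excess values ordered componentwise, is called submodular if

\[
  f(x \vee y) + f(x \wedge y) \;\le\; f(x) + f(y) \qquad \text{for all } x, y,
\]

where \(\vee\) and \(\wedge\) denote the componentwise maximum and minimum; it is supermodular if the reverse inequality holds. These are the standard definitions behind the guarantees summarized above.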

News Article
Site: http://www.scientificcomputing.com/rss-feeds/all/rss.xml/all

MIT researchers have developed a low-power chip for processing 3-D camera data that could help visually impaired people navigate their environments. The chip consumes only one-thousandth as much power as a conventional computer processor executing the same algorithms. Using their chip, the researchers also built a prototype of a complete navigation system for the visually impaired. About the size of a binoculars case and similarly worn around the neck, the system uses an experimental 3-D camera from Texas Instruments. The user carries a mechanical Braille interface developed at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), which conveys information about the distance to the nearest obstacle in the direction the user is moving. The researchers reported the new chip and the prototype navigation system in a paper presented at the International Solid-State Circuits Conference in San Francisco.

“There was some prior work on this type of system, but the problem was that the systems were too bulky, because they require tons of different processing,” says Dongsuk Jeon, who was a postdoc at MIT’s Microsystems Technology Laboratories (MTL) at the time the work was done and who joined the faculty of Seoul National University in South Korea this year. “We wanted to miniaturize this system and realized that it is critical to make a very tiny chip that saves power but still provides enough computational power.” Jeon is the first author on the new paper, and he’s joined by Anantha Chandrakasan, the Vannevar Bush Professor of electrical engineering and computer science; Daniela Rus, the Andrew and Erna Viterbi Professor of electrical engineering and computer science; Priyanka Raina, a graduate student in electrical engineering and computer science; Nathan Ickes, a former research scientist at MTL who’s now at Apple Computer; and Hsueh-Cheng Wang, a postdoc at CSAIL when the work was done, who will join National Chiao Tung University in Taiwan as an assistant professor this month.

In work sponsored by the Andrea Bocelli Foundation, which was founded by the blind singer Andrea Bocelli, Rus’ group had developed an algorithm for converting 3-D camera data into useful navigation aids. The output of any 3-D camera can be converted into a 3-D representation called a “point cloud,” which depicts the spatial locations of individual points on the surfaces of objects. The Rus group’s algorithm clustered points together to identify flat surfaces in the scene, then measured the unobstructed walking distance in multiple directions.

For the new paper, the researchers modified this algorithm with power conservation in mind. The standard way to identify planes in point clouds, for instance, is to pick a point at random, then look at its immediate neighbors, and determine whether any of them lie in the same plane. If one of them does, the algorithm looks at its neighbors, determining whether any of them lie in the same plane, and so on, gradually expanding the surface. This is computationally efficient, but it requires frequent requests to a chip’s main memory bank. Because the algorithm doesn’t know in advance which direction it will move through the point cloud, it can’t reliably preload the data it will need into its small working-memory bank. Fetching data from main memory, however, is the biggest energy drain in today’s chips, so the MIT researchers modified the standard algorithm.
Their algorithm always begins in the upper left-hand corner of the point cloud and scans along the top row, comparing each point only to the neighbor on its left. Then it starts at the leftmost point in the next row down, comparing each point only to the neighbor on its left and to the one directly above it, and repeats this process until it has examined all the points. This enables the chip to load as many rows as will fit into its working memory, without having to go back to main memory. This and similar tricks drastically reduced the chip’s power consumption.

But the data-processing chip isn’t the component of the navigation system that consumes the most energy; the 3-D camera is. So, the chip also includes a circuit that quickly and coarsely compares each new frame of data captured by the camera with the one that immediately preceded it. If little changes over successive frames, that’s a good indication that the user is still; the chip sends a signal to the camera, which can lower its frame rate, saving power.

Although the prototype navigation system is less obtrusive than its predecessors, it should be possible to miniaturize it even further. Currently, one of its biggest components is a heat dissipation device atop a second chip that converts the camera’s output into a point cloud. Adding the conversion algorithm to the data-processing chip should have a negligible effect on its power consumption but would significantly reduce the size of the system’s electronics.

In addition to the Andrea Bocelli Foundation, the work was cosponsored by Texas Instruments, and the prototype chips were manufactured through the Taiwan Semiconductor Manufacturing Company’s University Shuttle Program.
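
To make the scan order concrete, here is a minimal sketch of the row-by-row comparison described above. It is not the MIT group's implementation: the organized H x W point-cloud layout, the coplanar() depth test, and the label bookkeeping are simplifying assumptions made only for illustration.

import numpy as np

def coplanar(p, q, tol=0.02):
    # Hypothetical stand-in for the chip's plane criterion: treat two
    # neighboring points as part of the same surface if their depths
    # are close. The real test is more careful than this.
    return abs(p[2] - q[2]) < tol

def label_surfaces(cloud):
    # cloud: H x W x 3 organized point cloud, stored in the same
    # row-major order as the camera image. Each point is compared only
    # with its left and upper neighbors, so whole rows can be streamed
    # through a small working memory instead of being fetched from
    # main memory in an unpredictable order.
    H, W, _ = cloud.shape
    labels = np.zeros((H, W), dtype=np.int32)
    next_label = 1
    for i in range(H):
        for j in range(W):
            p = cloud[i, j]
            left = labels[i, j - 1] if j > 0 and coplanar(p, cloud[i, j - 1]) else 0
            up = labels[i - 1, j] if i > 0 and coplanar(p, cloud[i - 1, j]) else 0
            if left or up:
                labels[i, j] = left or up   # extend an existing surface
            else:
                labels[i, j] = next_label   # start a new surface
                next_label += 1
    # A complete implementation would also merge the left and upper
    # labels when both match (union-find); that bookkeeping is omitted.
    return labels

The point is the fixed raster-scan order: because the chip always knows which rows it will need next, it can preload them, avoiding the random main-memory accesses of the usual region-growing approach.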

News Article
Site: http://phys.org/technology-news/

An African revolution in computer science

When he went to university in South Africa, Riaz Moola came face to face with the huge differences in educational opportunity in his country, particularly in his own subject, Computer Science. Instead of just getting on with his course, Riaz - now a Gates Cambridge Scholar at the University of Cambridge - devised a way of tackling the problem. Inspired by recent MOOC platforms such as Coursera, he created an online course platform adapted to Africa which paired tutors - typically Computer Science graduates - with students trying to learn programming through a low-bandwidth, text-based resource. The platform has now grown into the largest of its kind in Southern Africa, supporting over 8,500 students from six African countries. This platform is driven by Riaz's start-up - Hyperion Development.

In late 2015, Google selected Hyperion as the first South African organisation to lead a national initiative to revolutionise the fields of Computer Science and software development in South Africa. Hyperion will work with bodies such as Computing at School, the South African government and the Python Software Foundation to bring computing-related fields in South Africa up to international standards and to accelerate the training and development of a new generation of programmers in Africa.

There are many obstacles standing in the way of the growth of software development as a field in South Africa - a country where programming and Computer Science are still often labelled as 'IT'. High dropout rates at schools and universities across the country have contributed to a decline in programming and Computer Science knowledge at a national level. The skills gap continues to impact industry as employers struggle to fill software development roles and align their technologies with international industry standards.

It's something Riaz experienced personally when he went to the University of KwaZulu-Natal to study for a degree in Computer Science. The vast majority of students on the same degree dropped out after the first semester, as a significant number had never used a computer before but were expected to write code. In 2011 he transferred to the University of Edinburgh to study Machine Learning and Artificial Intelligence, a subject which is not available to undergraduates in South Africa. It made him think that people in his country should be able to take it too, and his experience of studying in the UK made the contrast between educational opportunities in the two countries clearer to him.

The foundation for Hyperion's online course platform is the simple idea of using Dropbox to link students and tutors. It allows them to exchange text and programming code files without incurring high data costs or requiring reliable internet connectivity. The tutor base was sustained and grown through models borrowed from online communities popularised through massive multiplayer online games such as World of Warcraft. Riaz says: "I thought if I could link Dropbox and the material I had learned at Edinburgh to a community that scales in the same way they do in these online games, we could build an unusual but effective way of tackling an endemic educational problem in the field." Since these modest beginnings, Hyperion has grown from a group of student volunteers to a team of 10 employees hosted in offices in South Africa and at The Social Incubator East programme in Cambridge.
Hyperion has moved on to launch services that tackle wider issues in the field, such as the Hyperion Careers platform which links software developers to jobs across the country and the Hyperion Hub which hosts software development-related articles from an African perspective. Riaz, who is doing an MPhil in Technology Policy, says: "Every aspiring programmer or computer scientist in South Africa - and Africa - should have access to internationally excellent educational opportunities. Our partnership with Google will allow us to do that and will help to establish the Computer Science Association of South Africa - the first professional body of its kind in the region. This will catalyse the improvement of Computer Science education on a national scale, from a primary school to industry level."

News Article | April 21, 2016
Site: http://www.rdmag.com/rss-feeds/all/rss.xml/all

Planning algorithms for teams of robots fall into two categories: centralized algorithms, in which a single computer makes decisions for the whole team, and decentralized algorithms, in which each robot makes its own decisions based on local observations. With centralized algorithms, if the central computer goes offline, the whole system falls apart. Decentralized algorithms handle erratic communication better, but they're harder to design, because each robot is essentially guessing what the others will do. Most research on decentralized algorithms has focused on making collective decision-making more reliable and has deferred the problem of avoiding obstacles in the robots' environment. At the International Conference on Robotics and Automation in May, MIT researchers will present a new, decentralized planning algorithm for teams of robots that factors in not only stationary obstacles, but also moving obstacles. The algorithm also requires significantly less communications bandwidth than existing decentralized algorithms, but preserves strong mathematical guarantees that the robots will avoid collisions. In simulations involving squadrons of minihelicopters, the decentralized algorithm came up with the same flight plans that a centralized version did. The drones generally preserved an approximation of their preferred formation, a square at a fixed altitude -- although to accommodate obstacles the square rotated and the distances between drones contracted. Occasionally, however, the drones would fly single file or assume a formation in which pairs of them flew at different altitudes. "It's a really exciting result because it combines so many challenging goals," says Daniela Rus, the Andrew and Erna Viterbi Professor in MIT's Department of Electrical Engineering and Computer Science and director of the Computer Science and Artificial Intelligence Laboratory, whose group developed the new algorithm. "Your group of robots has a local goal, which is to stay in formation, and a global goal, which is where they want to go or the trajectory along which you want them to move. And you allow them to operate in a world with static obstacles but also unexpected dynamic obstacles, and you have a guarantee that they are going to retain their local and global objectives. They will have to make some deviations, but those deviations are minimal." Rus is joined on the paper by first author Javier Alonso-Mora, a postdoc in Rus' group; Mac Schwager, an assistant professor of aeronautics and astronautics at Stanford University who worked with Rus as an MIT PhD student in mechanical engineering; and Eduardo Montijano, a professor at Centro Universitario de la Defensa in Zaragoza, Spain. In a typical decentralized group planning algorithm, each robot might broadcast its observations of the environment to its teammates, and all the robots would then execute the same planning algorithm, presumably on the basis of the same information. But Rus, Alonso-Mora, and their colleagues found a way to reduce both the computational and communication burdens imposed by consensual planning. The essential idea is that each robot, on the basis of its own observations, maps out an obstacle-free region in its immediate environment and passes that map only to its nearest neighbors. When a robot receives a map from a neighbor, it calculates the intersection of that map with its own and passes that on. 
This keeps down both the size of the robots' communications -- describing the intersection of 100 maps requires no more data than describing the intersection of two -- and their number, because each robot communicates only with its neighbors. Nonetheless, each robot ends up with a map that reflects all of the obstacles detected by all the team members. The maps have not three dimensions, however, but four -- the fourth being time. This is how the algorithm accounts for moving obstacles. The four-dimensional map describes how a three-dimensional map would have to change to accommodate the obstacle's change of location, over a span of a few seconds. But it does so in a mathematically compact manner. The algorithm does assume that moving obstacles have constant velocity, which will not always be the case in the real world. But each robot updates its map several times a second, a short enough span of time that the velocity of an accelerating object is unlikely to change dramatically. On the basis of its latest map, each robot calculates the trajectory that will maximize both its local goal -- staying in formation -- and its global goal. The researchers are also testing a version of their algorithm on wheeled robots whose goal is to collectively carry an object across a room where human beings are also moving around, as a simulation of an environment in which humans and robots work together.
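
The map-intersection step can be sketched with a toy model. The assumptions below are mine, not the authors': each robot's obstacle-free region is reduced to a single axis-aligned box in x, y, z and time, whereas the algorithm described above works with richer convex space-time regions, and FreeBox and gossip_round are illustrative names rather than the paper's API.

from dataclasses import dataclass

@dataclass
class FreeBox:
    # Axis-aligned obstacle-free region: lo/hi corners in (x, y, z, t).
    lo: tuple
    hi: tuple

    def intersect(self, other):
        # The intersection of two boxes is again a box (possibly empty),
        # so a merged map is never bigger to describe than a single map.
        lo = tuple(max(a, b) for a, b in zip(self.lo, other.lo))
        hi = tuple(min(a, b) for a, b in zip(self.hi, other.hi))
        return FreeBox(lo, hi)

def gossip_round(regions, neighbors):
    # One communication round: every robot intersects its current region
    # with the regions reported by its immediate neighbors and keeps only
    # the result. Message size stays flat as the team grows, yet after
    # enough rounds each robot's map reflects obstacles seen by everyone.
    updated = {}
    for robot, region in regions.items():
        merged = region
        for nb in neighbors[robot]:
            merged = merged.intersect(regions[nb])
        updated[robot] = merged
    return updated

# Three robots in a line: robot 0 and robot 2 never talk directly, but
# after two rounds robot 0's map already accounts for what robot 2 saw.
regions = {
    0: FreeBox((0, 0, 0, 0), (10, 10, 5, 4)),
    1: FreeBox((1, 0, 0, 0), (9, 10, 5, 4)),
    2: FreeBox((2, 2, 0, 0), (8, 9, 5, 4)),
}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(2):
    regions = gossip_round(regions, neighbors)
print(regions[0])   # FreeBox(lo=(2, 2, 0, 0), hi=(8, 9, 5, 4))

Each robot then plans its own trajectory inside the shared free region, which is what lets the team keep its collision-avoidance guarantee without a central computer.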

But that assumption breaks down in the age of big data, now that computer programs more frequently act on just a few data items scattered arbitrarily across huge data sets. Since fetching data from their main memory banks is the major performance bottleneck in today's chips, having to fetch it more frequently can dramatically slow program execution. This week, at the International Conference on Parallel Architectures and Compilation Techniques, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting a new programming language, called Milk, that lets application developers manage memory more efficiently in programs that deal with scattered data points in large data sets. In tests on several common algorithms, programs written in the new language were four times as fast as those written in existing languages. But the researchers believe that further work will yield even larger gains. The reason that today's big data sets pose problems for existing memory management techniques, explains Saman Amarasinghe, a professor of electrical engineering and computer science, is not so much that they are large as that they are what computer scientists call "sparse." That is, with big data, the scale of the solution does not necessarily increase proportionally with the scale of the problem. "In social settings, we used to look at smaller problems," Amarasinghe says. "If you look at the people in this [CSAIL] building, we're all connected. But if you look at the planet scale, I don't scale my number of friends. The planet has billions of people, but I still have only hundreds of friends. Suddenly you have a very sparse problem." Similarly, Amarasinghe says, an online bookseller with, say, 1,000 customers might like to provide its visitors with a list of its 20 most popular books. It doesn't follow, however, that an online bookseller with a million customers would want to provide its visitors with a list of its 20,000 most popular books. Today's computer chips are not optimized for sparse data—in fact, the reverse is true. Because fetching data from the chip's main memory bank is slow, every core, or processor, in a modern chip has its own "cache," a relatively small, local, high-speed memory bank. Rather than fetching a single data item at a time from main memory, a core will fetch an entire block of data. And that block is selected according to the principle of locality. It's easy to see how the principle of locality works with, say, image processing. If the purpose of a program is to apply a visual filter to an image, and it works on one block of the image at a time, then when a core requests a block, it should receive all the adjacent blocks its cache can hold, so that it can grind away on block after block without fetching any more data. But that approach doesn't work if the algorithm is interested in only 20 books out of the 2 million in an online retailer's database. If it requests the data associated with one book, it's likely that the data associated with the 100 adjacent books will be irrelevant. Going to main memory for a single data item at a time is woefully inefficient. "It's as if, every time you want a spoonful of cereal, you open the fridge, open the milk carton, pour a spoonful of milk, close the carton, and put it back in the fridge," says Vladimir Kiriansky, a PhD student in electrical engineering and computer science and first author on the new paper. 
He's joined by Amarasinghe and Yunming Zhang, also a PhD student in electrical engineering and computer science. Milk simply adds a few commands to OpenMP, an extension of languages such as C and Fortran that makes it easier to write code for multicore processors. With Milk, a programmer inserts a couple additional lines of code around any instruction that iterates through a large data collection looking for a comparatively small number of items. Milk's compiler—the program that converts high-level code into low-level instructions—then figures out how to manage memory accordingly. With a Milk program, when a core discovers that it needs a piece of data, it doesn't request it—and a cacheful of adjacent data—from main memory. Instead, it adds the data item's address to a list of locally stored addresses. When the list is long enough, all the chip's cores pool their lists, group together those addresses that are near each other, and redistribute them to the cores. That way, each core requests only data items that it knows it needs and that can be retrieved efficiently. That's the high-level description, but the details get more complicated. In fact, most modern computer chips have several different levels of caches, each one larger but also slightly less efficient than the last. The Milk compiler has to keep track of not only a list of memory addresses but also the data stored at those addresses, and it regularly shuffles both around between cache levels. It also has to decide which addresses should be retained because they might be accessed again, and which to discard. Improving the algorithm that choreographs this intricate data ballet is where the researchers see hope for further performance gains. "Many important applications today are data-intensive, but unfortunately, the growing gap in performance between memory and CPU means they do not fully utilize current hardware," says Matei Zaharia, an assistant professor of computer science at Stanford University. "Milk helps to address this gap by optimizing memory access in common programming constructs. The work combines detailed knowledge about the design of memory controllers with knowledge about compilers to implement good optimizations for current hardware."
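
Milk's actual syntax is not reproduced here. The plain-Python sketch below only models the bookkeeping that the description above attributes to the compiler: record the scattered indices first, then visit them in sorted batches so that nearby table entries are touched together. The multicore pooling and redistribution step is collapsed into a single sorted batch, and in an interpreted language the trick changes nothing about real cache behavior, so treat it purely as an illustration of the access pattern.

from collections import defaultdict
import numpy as np

def scattered_histogram(keys, table):
    # Baseline: every lookup jumps to an arbitrary spot in `table`, the
    # access pattern that defeats the cache's principle of locality.
    counts = defaultdict(int)
    for k in keys:
        counts[table[k]] += 1
    return counts

def batched_histogram(keys, table, batch_size=1 << 16):
    # Milk-like in spirit only: defer each access by recording its index,
    # then perform the lookups in sorted batches so neighboring entries of
    # `table` are read together. In a compiled, multicore setting this
    # reordering is what cuts the trips to main memory.
    counts = defaultdict(int)
    pending = []

    def flush():
        for k in sorted(pending):
            counts[table[k]] += 1
        pending.clear()

    for k in keys:
        pending.append(k)          # note the address, don't touch it yet
        if len(pending) >= batch_size:
            flush()
    flush()                        # drain the final partial batch
    return counts

# The two versions compute identical results; only the order (and, on
# real hardware, the cost) of the memory accesses differs.
rng = np.random.default_rng(0)
table = rng.integers(0, 20, size=1_000_000)
keys = rng.integers(0, len(table), size=100_000)
assert scattered_histogram(keys, table) == batched_histogram(keys, table)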
