Saarbrücken, Germany

The Max Planck Institute for Informatics is a research institute in computer science with a focus on algorithms and their applications in a broad sense. It hosts fundamental research as well as research for various application domains. It is part of the Max-Planck-Gesellschaft, Germany's largest society for fundamental research. The research institutes of the Max Planck Society have a national and international reputation as “Centres of Excellence” for pure research.

The institute consists of five departments and two research groups: the Algorithms and Complexity Department, headed by Prof. Dr. Kurt Mehlhorn; the Computer Vision and Multimodal Computing Department, headed by Prof. Dr. Bernt Schiele; the Department of Computational Biology and Applied Algorithmics, headed by Prof. Dr. Thomas Lengauer, Ph.D.; the Computer Graphics Department, headed by Prof. Dr. Hans-Peter Seidel; the Databases and Information Systems Department, headed by Prof. Dr. Gerhard Weikum; the Research Group Automation of Logic, headed by Prof. Dr. Christoph Weidenbach; and the Independent Research Group Computational Genomics and Epidemiology, headed by Dr. Alice McHardy. Previously, the institute also included the Programming Logics Department, headed by Prof. Dr. Harald Ganzinger.

Members of the institute have received various awards. Professor Kurt Mehlhorn and Professor Hans-Peter Seidel received the Gottfried Wilhelm Leibniz Prize, Professor Kurt Mehlhorn and Professor Thomas Lengauer received the Konrad Zuse Medal, and in 2004 Professor Harald Ganzinger received the Herbrand Award.

The institute, along with the Max Planck Institute for Software Systems, the German Research Centre for Artificial Intelligence and the entire Computer Science department of Saarland University, is involved in the Internationales Begegnungs- und Forschungszentrum für Informatik. The International Max Planck Research School for Computer Science is the graduate school of the MPII and the MPI-SWS. It was founded in 2000 and offers a fully funded PhD program in cooperation with Saarland University. Its dean is Prof. Dr. Gerhard Weikum. (Source: Wikipedia)


Bringmann K.,Max Planck Institute for Informatics
Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms | Year: 2017

Given a set Z of n positive integers and a target value t, the Subset Sum problem asks whether any subset of Z sums to t. A textbook pseudopolynomial time algorithm by Bellman from 1957 solves Subset Sum in time O(n t). This has been improved to O(n max Z) by Pisinger [J. Algorithms '99] and recently to Õ(√n · t) by Koiliaris and Xu [SODA '17]. Here we present a simple and elegant randomized algorithm running in time Õ(n + t). This improves upon a classic algorithm and is likely to be near-optimal, since it matches conditional lower bounds from Set Cover and k-Clique. We then use our new algorithm and additional tricks to improve the best known polynomial space solution from time Õ(n³ t) and space Õ(n²) to time O(n t) and space O(n log t), assuming the Extended Riemann Hypothesis. Unconditionally, we obtain time O(n t^(1+ε)) and space Õ(n t^ε) for any constant ε > 0. Copyright © by SIAM.
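For reference, here is a minimal sketch of Bellman's classic O(n·t) dynamic program mentioned in the abstract, i.e. the textbook baseline the paper improves upon (not the new Õ(n + t) randomized algorithm):

```python
def subset_sum(Z, t):
    """Bellman's classic pseudopolynomial dynamic program for Subset Sum."""
    reachable = [False] * (t + 1)
    reachable[0] = True                      # the empty subset sums to 0
    for z in Z:
        # iterate downwards so each element is used at most once
        for s in range(t, z - 1, -1):
            if reachable[s - z]:
                reachable[s] = True
    return reachable[t]

# Example: {3, 34, 4, 12, 5, 2} has a subset summing to 9 (4 + 5).
print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True
```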


News Article | April 17, 2017
Site: www.newscientist.com

We are difficult for computers to understand. Our actions are sufficiently unpredictable that computer vision systems, such as those used in driverless cars, can’t readily make sense of what we’re doing and predict our next moves. Now fake people are helping them to understand real human behaviour. The idea is that videos and images of computer-generated bodies walking, dancing and doing cartwheels could help them learn what to look for.

“Recognising what’s going on in images is natural for humans. Getting computers to do the same requires a lot more effort,” says Javier Romero at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. This, he says, is one of the biggest things holding back progress with driverless cars. Using synthetic images to train computers could give them more meaningful information about the human world.

At the moment, the best computer vision algorithms are trained using hundreds or thousands of images that have been painstakingly labelled to highlight important features. This is how they learn to distinguish an eye from an arm, for example, or a table from a chair. But there is a limit to how much data can be realistically labelled this way. Ideally, every pixel in every frame of a video would be labelled. “But this would mean instead of creating thousands of annotations, people would have to label millions of things, and that’s just not possible,” says Gül Varol at École Normale Supérieure in Paris, France.

So Varol, Romero and their colleagues have generated thousands of videos of “synthetic humans” with realistic body shapes and movement. They walk, they run, they crouch, they dance. They can also move in less expected ways, but they’re always recognisably human – and because the videos are computer-generated, every frame is automatically labelled with all the important information.

The team created their synthetic humans using the 3D rendering software Blender, basing their work on existing human figure templates and motion data collected from real people to keep the results realistic. The team then generated animations by randomly selecting a body shape and clothing, and setting the figure in different poses. The background, lighting and viewpoint were also randomly selected. In total, they generated more than 65,000 clips and 6.5 million frames.

With all this information, computer systems could learn to recognise patterns in how pixels change from one frame to the next, indicating how people are likely to move. This could help a driverless car tell if a person is walking close by or about to step into the road. As the animations are in 3D, they could also teach systems to recognise depth – which could help a robot learn how to smoothly hand someone an object without accidentally punching them in the stomach. The work will be presented at the Conference on Computer Vision and Pattern Recognition in July.

“With synthetic images you can create more unusual body shapes and actions, and you don’t have to label the data, so it’s very appealing,” says Mykhaylo Andriluka at the Max Planck Institute for Informatics in Saarbrücken, Germany. He points out that other groups are using graphics from video games like Grand Theft Auto to improve computer vision systems, as these can offer a relatively lifelike simulation of the real world. “There’s been huge advances in the realism of virtual images. We can use this to teach computers to see things,” says Romero.
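As a rough illustration of the randomised scene generation described above, the sketch below draws hypothetical scene parameters (body-shape coefficients, clothing, pose clip, background, lighting, viewpoint) before rendering. The names, ranges and structure are illustrative assumptions, not the team's actual Blender pipeline:

```python
import random

def sample_scene(pose_sequences, clothing_textures, backgrounds):
    """Randomly assemble one synthetic-human scene description."""
    return {
        "body_shape": [random.gauss(0.0, 1.0) for _ in range(10)],  # shape coefficients
        "clothing":   random.choice(clothing_textures),
        "pose_seq":   random.choice(pose_sequences),                # motion-capture clip
        "background": random.choice(backgrounds),
        "light_dir":  [random.uniform(-1.0, 1.0) for _ in range(3)],
        "camera_yaw": random.uniform(0.0, 360.0),                   # degrees around the body
    }

# Example draw with placeholder asset names.
print(sample_scene(["walk_01", "run_03"], ["tshirt_red", "jeans"], ["street", "office"]))
```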


News Article | May 3, 2017
Site: www.gizmag.com

The WatchSense prototype in use – in the final version, the depth sensor would be incorporated into the watch (Credit: Oliver Dietze)

Although smartwatches may indeed be getting capable of more and more functions, their touchscreens will have to remain relatively small if they're still going to fit on people's wrists. As a result, we've recently seen attempts at extending the user interface off of the screen. One of the latest, known as WatchSense, allows users to control a mobile device by moving the fingers of one hand on and above the back of the other.

The WatchSense concept was developed by researchers from the Max Planck Institute for Informatics, the University of Copenhagen and Aalto University in Finland. In its current proof-of-concept form, it incorporates a small 3D depth sensor which is worn on the forearm. That sensor is able to ascertain the positions of the user's index finger and thumb as they move on the back of the hand that's wearing the watch, as well as in the space above it. Custom software assigns different commands to different movements, allowing users to control various functions on a linked smartphone. Although the sensor is presently separate from the user's smartwatch, the team believes that it will soon be possible to incorporate miniaturized depth sensors directly into watches.

In lab tests, it was found that WatchSense allowed users to adjust music volume and select songs more quickly than they could using a smartphone's Android music app. It was also found to be "more satisfactory" than a touchscreen for virtual and augmented reality-based tasks, along with a map application and the control of a large external screen. The technology will be demonstrated at the upcoming Conference on Human Factors in Computing Systems, taking place in Denver, Colorado.


News Article | May 3, 2017
Site: www.eurekalert.org

It relies on a depth sensor that tracks movements of the thumb and index finger on and above the back of the hand. In this way, not only can smartwatches be controlled, but also smartphones, smart TVs and devices for augmented and virtual reality.

They're called the "Apple Watch Series 2", "LG Watch", "Samsung GEAR S3" or "Moto 360 2nd Gen", but they all have the same problem. "Every new product generation has better screens, better processors, better cameras, and new sensors, but regarding input, the limitations remain," explains Srinath Sridhar, a researcher in the Graphics, Vision and Video group at the Max Planck Institute for Informatics. Together with Christian Theobalt, head of the Graphics, Vision and Video group at MPI, Anders Markussen and Sebastian Boring at the University of Copenhagen, and Antti Oulasvirta at Aalto University in Finland, Sridhar has therefore developed an input method that requires only a small camera to track fingertips in mid-air, as well as the touch and position of the fingers on the back of the hand. This combination enables more expressive interactions than any previous sensing technique.

Regarding hardware, the prototype, which the researchers have named "WatchSense", requires only a depth sensor, a much smaller version of the well-known "Kinect" game controller for the Xbox 360 video game console. With WatchSense, the depth sensor is worn on the user's forearm, about 20 cm from the watch. As a sort of 3D camera, it captures the movements of the thumb and index finger, not only on the back of the hand but also in the space over and above it. The software developed by the researchers recognizes the position and movement of the fingers within the 3D image, allowing the user to control apps on smartphones or other devices. "The currently available depth sensors do not fit inside a smartwatch, but from the trend it's clear that in the near future, smaller depth sensors will be integrated into smartwatches," Sridhar says.

But this is not all that's required. According to Sridhar, the scientists also had to solve the challenges of handling the unevenness of the back of the hand and the fact that the fingers can occlude each other when they are moved. "The most important thing is that we can not only recognize the fingers, but also distinguish between them," explains Sridhar, "which nobody else had managed to do before in a wearable form factor. We can now do this even in real time." The software recognizes the exact positions of the thumb and index finger in the 3D image from the depth sensor because the researchers trained it to do this via machine learning.

In addition, the researchers have successfully tested their prototype in combination with several mobile devices and in various scenarios. "Smartphones can be operated with one or more fingers on the display, but they do not use the space above it. If both are combined, this enables previously impossible forms of interaction," explains Sridhar. He and his colleagues were able to show that with WatchSense, the volume in a music program could be adjusted and a new song selected more quickly than was possible with a smartphone's Android app. The researchers also tested WatchSense for tasks in virtual and augmented reality, in a map application, and used it to control a large external screen. Preliminary studies showed that WatchSense was more satisfactory in each case than conventional touch-sensitive displays.

Sridhar is confident that "we need something like WatchSense whenever we want to be productive while moving. WatchSense is the first to enable expressive input for devices while on the move." From May 6, the researchers will present WatchSense at the renowned Conference on Human Factors in Computing Systems, or CHI for short, which this time takes place in Denver in the US.
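To make the interaction idea concrete, here is a hypothetical sketch of how tracked thumb and index fingertip states might be mapped to simple commands. The data structure, event names and thresholds are illustrative assumptions, not the actual WatchSense software:

```python
from dataclasses import dataclass

@dataclass
class FingertipState:
    x: float          # position along the back of the hand (cm)
    y: float          # position across the back of the hand (cm)
    z: float          # height above the skin surface (cm)
    touching: bool    # finger in contact with the back of the hand

def interpret(thumb: FingertipState, index: FingertipState):
    """Map a pair of tracked fingertip states to a simple media-player command."""
    if thumb.touching and index.touching:
        return "select_song"                      # pinch on the skin surface
    if index.touching:
        return ("scrub_position", index.x)        # slide along the back of the hand
    if index.z > 1.0:                             # index finger raised above the hand
        return ("volume", max(0.0, min(1.0, index.z / 5.0)))
    return None

# Example: both fingertips touching the back of the hand -> "select_song".
print(interpret(FingertipState(1.0, 0.5, 0.0, True), FingertipState(2.0, 0.5, 0.0, True)))
```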


Kratsch S.,Utrecht University | Wahlstrom M.,Max Planck Institute for Informatics
Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS | Year: 2012

The existence of a polynomial kernel for Odd Cycle Transversal was a notorious open problem in parameterized complexity. Recently, this was settled by the present authors (Kratsch and Wahlström, SODA 2012), with a randomized polynomial kernel for the problem, using matroid theory to encode flow questions over a set of terminals in size polynomial in the number of terminals (rather than the total graph size, which may be superpolynomially larger). In the current work we further establish the usefulness of matroid theory to kernelization by showing applications of a result on representative sets due to Lovász (Combinatorial Surveys 1977) and Marx (TCS 2009). We show how representative sets can be used to give a polynomial kernel for the elusive Almost 2-SAT problem (where the task is to remove at most k clauses to make a 2-CNF formula satisfiable), solving a major open problem in kernelization. We further apply the representative sets tool to the problem of finding irrelevant vertices in graph cut problems, that is, vertices which can be made undeletable without affecting the status of the problem. This gives the first significant progress towards a polynomial kernel for the Multiway Cut problem, in particular, we get a polynomial kernel for Multiway Cut instances with a bounded number of terminals. Both these kernelization results have significant spin-off effects, producing the first polynomial kernels for a range of related problems. More generally, the irrelevant vertex results have implications for covering min-cuts in graphs. In particular, given a directed graph and a set of terminals, we can find a set of size polynomial in the number of terminals (a cut-covering set) which contains a minimum vertex cut for every choice of sources and sinks from the terminal set. Similarly, given an undirected graph and a set of terminals, we can find a set of vertices, of size polynomial in the number of terminals, which contains a minimum multiway cut for every partition of the terminals into a bounded number of sets. Both results are polynomial time. We expect this to have further applications, in particular, we get direct, reduction rule-based kernelizations for all problems above, in contrast to the indirect compression-based kernel previously given for Odd Cycle Transversal. All our results are randomized, with failure probabilities which can be made exponentially small in the size of the input, due to needing a representation of a matroid to apply the representative sets tool. © 2012 IEEE.


Bock C.,Austrian Academy of Sciences | Bock C.,Medical University of Vienna | Lengauer T.,Max Planck Institute for Informatics
Nature Reviews Cancer | Year: 2012

Drug resistance is a common cause of treatment failure for HIV infection and cancer. The high mutation rate of HIV leads to genetic heterogeneity among viral populations and provides the seed from which drug-resistant clones emerge in response to therapy. Similarly, most cancers are characterized by extensive genetic, epigenetic, transcriptional and cellular diversity, and drug-resistant cancer cells outgrow their non-resistant peers in a process of somatic evolution. Patient-specific combination of antiviral drugs has emerged as a powerful approach for treating drug-resistant HIV infection, using genotype-based predictions to identify the best matched combination therapy among several hundred possible combinations of HIV drugs. In this Opinion article, we argue that HIV therapy provides a 'blueprint' for designing and validating patient-specific combination therapies in cancer. © 2012 Macmillan Publishers Limited. All rights reserved.


Lawyer G.,Max Planck Institute for Informatics
Scientific Reports | Year: 2015

Centrality measures such as the degree, k-shell, or eigenvalue centrality can identify a network's most influential nodes, but are rarely usefully accurate in quantifying the spreading power of the vast majority of nodes which are not highly influential. The spreading power of all network nodes is better explained by considering, from a continuous-time epidemiological perspective, the distribution of the force of infection each node generates. The resulting metric, the expected force, accurately quantifies node spreading power under all primary epidemiological models across a wide range of archetypical human contact networks. When node power is low, influence is a function of neighbor degree. As power increases, a node's own degree becomes more important. The strength of this relationship is modulated by network structure, being more pronounced in narrow, dense networks typical of social networking and weakening in broader, looser association networks such as the Internet. The expected force can be computed independently for individual nodes, making it applicable for networks whose adjacency matrix is dynamic, not well specified, or overwhelmingly large.
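As a rough sketch of the construction described in the abstract (not the paper's reference implementation), the code below enumerates the transmission clusters reachable from a seed node after two transmission events, takes each cluster's number of edges into the susceptible rest of the network as its force of infection, and returns the entropy of that normalised distribution. The exact enumeration and normalisation details of the paper may differ:

```python
import math
import networkx as nx

def expected_force(G, seed):
    """Entropy of normalised out-degrees over two-transmission clusters from `seed`."""
    cluster_degrees = []
    for a in G.neighbors(seed):                  # first transmission: seed -> a
        infected = {seed, a}
        for u in infected:                       # second transmission: from seed or a
            for v in G.neighbors(u):
                if v in infected:
                    continue
                cluster = infected | {v}
                # force of infection of this cluster: edges leaving the cluster
                d = sum(1 for x in cluster for y in G.neighbors(x) if y not in cluster)
                cluster_degrees.append(d)
    total = sum(cluster_degrees)
    if total == 0:
        return 0.0
    return -sum((d / total) * math.log(d / total) for d in cluster_degrees if d > 0)

# Example on a small toy network: two dense cliques joined by a short path.
G = nx.barbell_graph(4, 2)
print({n: round(expected_force(G, n), 3) for n in G})
```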
