Communications of the ACM | Year: 2016
ACM's annual report for FY15 is presented. The ACM India Education Committee conducted a two-day workshop on computing curricula. The workshop's main objective was to contrast the success of the Indian IT industry with the lack of similar progress in computing education in India, and to bring into focus what can be done to advance the future of computing and meet the needs of employers. ACM and the Computer Science Teachers Association (CSTA) announced a new award in 2015, the ACM/CSTA Cutler-Bell Prize in High School Computing, which recognizes talented high school students in computer science.
News Article | September 12, 2016
A new service developed at Binghamton University, State University of New York could improve the performance of mobile devices that save data to the cloud. Storage and computing power are limited on mobile devices, making it a necessity to store data in the cloud. However, with the myriad of cloud-backed apps from different developers, the user experience isn't always smooth. Battery life can be taxed by extended synchronization times and clogged networks when multiple apps try to access the cloud at the same time. "We may be using many different apps developed by different developers that make use of cloud storage services, whereas on PCs we tend to use apps offered by the official providers. This app and developer diversity can cause problems due to a developer's inexperience and/or carelessness," said Yifan Zhang, assistant professor of computer science at Binghamton University's Thomas J. Watson School of Engineering and Applied Science. Zhang and a team of Binghamton University researchers designed and developed StoArranger, a service that intercepts, coordinates and optimizes requests made by mobile apps to cloud storage services. StoArranger works as a "middleware system," so nothing changes in how apps or an iPhone or Android device run; only the performance of the device and the network overall improves. Essentially, StoArranger takes cloud storage requests--either to upload a file or to open a file for editing--and orders them in the way that best saves power, completes tasks as quickly as possible and minimizes the amount of data used. Even though the work could affect millions of mobile devices and users--e.g., Microsoft's cloud computing and storage system Azure held 10 trillion objects on its servers as of January 2015--it is only a promising first step in the development of StoArranger, which isn't commercially available.
Further research is scheduled for evaluation experiments, and a full paper will be submitted later this year. "We are planning on developing an app for public use," Zhang said. "We are trying to solve problems without changing operating systems or the existing apps, which makes our solution practical and scalable to existing smartphone users." Zhang presented the paper with Binghamton Ph.D. candidates Yongshu Bai and Xin Zhang, both co-authors, at the Seventh ACM SIGOPS Asia-Pacific Workshop on Systems (APSys '16) in Hong Kong in August. "The program committee thought the work presented is a good demonstration of the negative effects of the way that current cloud storage providers chose to deploy their services," said Zhang. "The solution we proposed could be a practical way to solve the problem."
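The full StoArranger paper was not yet public at the time of writing, but the core batching idea the article describes, holding cloud-storage requests briefly and sending them together so the radio wakes up once per batch rather than once per request, can be sketched in a few lines. Everything here is a hypothetical illustration; the class and method names are not taken from StoArranger itself:

```python
from collections import OrderedDict

class RequestArranger:
    """Sketch of a middleware that queues cloud-storage upload requests
    and flushes them in batches, so the cellular radio wakes up once per
    batch. Uploads to the same path are coalesced: only the latest
    payload is actually sent."""

    def __init__(self, send, batch_size=4):
        self.send = send              # callable(path, data) doing the real upload
        self.batch_size = batch_size
        self.pending = OrderedDict()  # path -> data, insertion-ordered

    def upload(self, path, data):
        # Coalesce: a newer write to the same path replaces the older one
        # and moves to the back of the queue.
        self.pending.pop(path, None)
        self.pending[path] = data
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        # One radio wake-up services the whole batch.
        for path, data in self.pending.items():
            self.send(path, data)
        self.pending.clear()
```

A real middleware would also have to respect app-level ordering constraints, deadlines, and download requests; this sketch only shows the coalescing and batching idea.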
Speaking at the Hot Chips conference in Cupertino, California, NVIDIA revealed the architecture and underlying technology of its new Parker processor, which is suited for automotive applications such as self-driving cars and digital cockpits. Hot Chips, a symposium on high-performance chips, is sponsored by the IEEE Technical Committee on Microprocessors and Microcomputers in cooperation with ACM SIGARCH. NVIDIA first mentioned Parker at CES 2016 earlier this year, when it introduced the NVIDIA DRIVE PX 2 platform. That platform uses two Parker processors and two Pascal architecture-based GPUs to power deep learning applications. More than 80 carmakers, tier-1 suppliers and university research centers around the world are now using the DRIVE PX 2 system to develop autonomous vehicles. This includes Volvo, which plans to road-test DRIVE PX 2 systems in XC90 SUVs next year. Built around NVIDIA's highest-performing and most power-efficient Pascal GPU architecture and the next generation of NVIDIA's Denver CPU architecture, Parker delivers up to 1.5 teraflops of performance for deep learning-based self-driving AI cockpit systems. Parker delivers 50 to 100 percent higher multi-core CPU performance than other mobile processors, thanks to a CPU complex consisting of two next-generation 64-bit Denver CPU cores (Denver 2.0) paired with four 64-bit ARM Cortex-A57 CPUs. These all work together in a fully coherent heterogeneous multi-processor configuration. The Denver 2.0 CPU is a seven-way superscalar processor that supports the ARM v8 instruction set and implements an improved dynamic code optimization algorithm and additional low-power retention states for better energy efficiency.
The two Denver cores and the Cortex-A57 CPU complex are interconnected through a proprietary coherent interconnect fabric. A new 256-core Pascal GPU in Parker delivers the performance needed to run advanced deep learning inference algorithms for self-driving capabilities. It also offers the raw graphics performance and features to power multiple high-resolution displays, such as cockpit instrument displays and in-vehicle infotainment panels. Scalability. Working in concert with Pascal-based supercomputers in the cloud, Parker-based self-driving cars can be continually updated with newer algorithms and information to improve self-driving accuracy and safety. Parker includes hardware-enabled virtualization that supports up to eight virtual machines, enabling carmakers to use a single Parker-based DRIVE PX 2 system to concurrently host multiple systems, such as in-vehicle infotainment, digital instrument clusters and driver assistance. Parker is also a scalable architecture: automakers can use a single unit for highly efficient systems, or integrate it into more complex designs such as NVIDIA DRIVE PX 2, which employs two Parker chips along with two discrete Pascal GPUs. DRIVE PX 2 delivers an unprecedented 24 trillion deep learning operations per second to run the most complex deep learning-based inference algorithms. Such systems deliver the supercomputer-level performance that self-driving cars need to safely navigate all kinds of driving environments. Parker specifications. To address the needs of the automotive market, Parker includes features such as a dual-CAN (controller area network) interface to connect to the numerous electronic control units in a modern car, and Gigabit Ethernet to transport audio and video streams.
Compliance with ISO 26262 is achieved through a number of safety features implemented in hardware, such as a safety engine that includes a dedicated dual-lockstep processor for reliable fault detection and processing. Parker is architected to support both decoding and encoding of video streams at up to 4K resolution and 60 frames per second. This will enable automakers to use higher-resolution in-vehicle cameras for accurate object detection, and 4K display panels to enhance in-vehicle entertainment experiences.
A new Georgia Tech study finds that Instagram's decision to ban certain words commonly used by pro-eating disorder (pro-ED) communities has produced an unintended effect. The use of those terms decreased when they were censored in 2012, but users adapted by simply making up new, almost identical words, driving up participation and support within pro-ED groups by as much as 30 percent. The Georgia Tech researchers found that these communities are still very active and thriving despite Instagram's efforts to moderate discussion of the dangerous lifestyle. People in pro-ED communities share content and provide advice and support for those who treat eating disorders, such as anorexia or bulimia, as acceptable and reasonable ways of living. They use specific hashtags to form tightly connected groups, often under anonymous names to keep their lifestyle choice a secret from their families and friends. Instagram banned some of the most common pro-ED tags four years ago. People can still post these censored terms, but the words no longer show up in search results. Banned examples include "thighgap," "thinspiration" and "secretsociety." Other pro-ED words received advisories: they can be searched, but notifications about graphic content were added, along with public service links for people looking for help. The Georgia Tech researchers looked at 2.5 million pro-ED posts from 2011 to 2014 to study how the community reacted to Instagram's content moderation. "People pretty much stopped using the banned terms, but they gamed the system to stay in touch," said Stevie Chancellor, a doctoral student who led the study. "'Thinspiration' was replaced by 'thynspiration' and 'thynspo.' 'Thighgap' became 'thightgap' and 'thygap.'" The 17 moderated terms morphed into hundreds of similar, new words. Each had an average of 40 variants; some had more: the researchers found 107 variants of "thighgap."
"Likes and comments on these new tags were 15 to 30 percent higher compared to the originals," said Munmun De Choudhury, assistant professor in Georgia Tech's School of Interactive Computing. "Before the ban, a person searching for hashtags would only find their intended word. Now a search produces dozens of similar, non-censored pro-ED terms. That means more content to view and engage with." The team also found that the content under these so-called lexical variants discussed self-harm, isolation and thoughts of suicide more often than the larger community of eating-disorder sufferers did. Instagram has also blacklisted words related to sex, racism and self-harm. What is more effective than banning tags? The Georgia Tech team suggests a few alternatives. "Allow them to be searchable. But once they're selected, the landing page could include links for help organizations," said Chancellor. "Maybe the search algorithms could be tweaked. Instead of similar terms being displayed, Instagram could introduce recovery-related terms in the search box." The study, "#thyghgapp: Instagram Content Moderation and Lexical Variation in Pro-Eating Disorder Communities," was presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing on March 1 in San Francisco.
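The study's core observation, that banned terms morph into near-identical spellings, is exactly what string edit distance captures. As an illustration only (not the authors' actual method), a moderation pipeline could flag candidate variants by Levenshtein distance to the banned seed terms:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def find_variants(hashtags, banned, max_dist=2):
    """Return the hashtags within max_dist edits of any banned seed term."""
    return {tag for tag in hashtags
            if any(edit_distance(tag, seed) <= max_dist for seed in banned)}
```

For example, "thynspiration" sits one edit away from "thinspiration" and "thightgap" one edit from "thighgap", so both would be flagged; heavier mutations would need a larger distance threshold or phonetic matching, which is part of why simple term bans are easy to evade.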
The scientists from the Max Planck Institute for Informatics and Saarland University will present the new technology at the CeBIT computer fair in Hannover from 14 to 18 March (Hall 6, Stand D 28). When Brad Pitt lives his life backwards in the film "The Curious Case of Benjamin Button" and morphs from an old man into a small child, it wasn't just a matter of using a lot of make-up. Every single scene was edited on the computer in order to animate Brad Pitt's face extremely realistically and in a way appropriate to his age. "Big film studios sometimes take several weeks to work on scenes five seconds long in order to reproduce an actor's appearance and the proportions of their face and body in photo-realistic quality. A lot of the touching-up on the computer is still done by hand," says Christian Theobalt, leader of the "Graphics, Vision and Video" group at the Max Planck Institute in Saarbruecken and Professor of Informatics at Saarland University. Film-makers use the same technology to insert fantasy figures such as zombies, orcs or fauns into films and give them sad expressions or magic laugh lines around their eyes. Together with his research group, Christian Theobalt now wants to significantly speed up the process. "One challenge is that we perceive actors' facial expressions very precisely, and we notice immediately if a single blink doesn't look authentic or the mouth doesn't open in time with the words spoken in the scene," Theobalt explains. To animate a face in complete detail, an exact three-dimensional model of the face is required, referred to as a face rig in industry jargon. The lighting and reflections of the scene are also incorporated. The face model can be given different expressions in a mathematical process. "We can generate this face rig entirely on the basis of recordings made by a single standard video camera. We use mathematical methods to estimate the parameters needed to record all the details of the face rig.
They not only include the facial geometry, meaning the shape of the surfaces, but also the reflective characteristics and lighting of the scene," the computer scientist elaborates. These details are sufficient for the method to faithfully reconstruct an individual face on the computer and, for example, to animate it naturally with laugh lines. "As a model of the face, it works like a complete face rig which we can give various expressions by modifying its parameters," says Theobalt. The algorithm developed by his team already extracts information on numerous expressions which show different emotions. "This means we can decide at the computer whether the actor or avatar is to look happy or contemplative, and we can give them a level of detail in their facial expression which wasn't there when the scene was shot," says the researcher from Saarbruecken. To date, special effects companies working in the film industry have expended a great deal of effort to achieve the same result. "Today the proportions of a face are reconstructed with the aid of scanners and multi-camera systems. To this end, you often need complicated, specially controlled lighting setups," as Pablo Garrido, one of Christian Theobalt's PhD students at Saarland University, explains. Precisely such a system was recently set up in the White House to produce a 3D model for a bust of Barack Obama; this could have been accomplished far more easily with the Saarbruecken technology. "With previous methods, you also needed precisely choreographed facial movements, in other words shots of the particular actor showing pleasure, anger or annoyance in their faces, for example," Garrido explains. The researchers from Saarbruecken themselves recently demonstrated how 3D face models can be generated with a video or depth camera, even in real time. However, those models are not nearly as detailed as the ones produced by this new method. "We can work with any output from a normal video camera.
Even an old recording where you can see a conversation, for example, is enough for us to model the face precisely and animate it," the computer scientist states. You can even use the reconstructed model to fit the movements of an actor's mouth in a dubbed film to the newly spoken words.
Technology is improving communication with and through avatars
The technique is not only of interest to the film industry; it can also help give avatars in the virtual world, personal assistants on the net or virtual interlocutors in future telepresence applications a realistic, personal face. "Our technology can ensure that people feel more at ease when communicating with and through avatars," says Theobalt. To achieve this photo-realistic facial reconstruction, the researcher and his team had to solve demanding scientific problems at the intersection of computer graphics and computer vision. The underlying methods for measuring deformable surfaces from monocular video can also be used in other areas, for example in robotics, autonomous systems or measurements in mechanical engineering. Pablo Garrido and Christian Theobalt, together with their co-authors Michael Zollhoefer, Dan Casas, Levi Valgaerts, Kiran Varanasi and Patrick Perez, will present the results of their research in the most important specialist publication for computer graphics, ACM Transactions on Graphics, and at SIGGRAPH 2016. Theobalt's working group has also spawned the start-up company The Captury, which has developed a technique for real-time marker-less full-body motion capture from multi-view video of general scenes. It thus eliminates the special marker suits needed in previous motion capture systems.
This technology is being used in computer animation, but also in medicine, ergonomic research, sports science and in the factory of the future, where the interacting movements of industrial workers and robots have to be recorded. For this technology, The Captury won one of the main prizes in the start-up competition IKT Innovativ at CeBIT 2013.
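Returning to the face model: the article describes the face rig as a parametric model whose expression parameters can be varied at the computer. The article does not give the mathematics, but a common formulation for such parametric rigs is a linear blendshape model, where each output vertex is the neutral-face vertex plus a weighted sum of per-expression offsets. The sketch below is a generic illustration under that assumption; the Saarbruecken rig additionally estimates reflectance and scene lighting, which this omits:

```python
def blend_face(neutral, expressions, weights):
    """Linear blendshape model.
    neutral:     list of (x, y, z) vertices of the neutral face
    expressions: dict name -> vertex list of the same length (target shapes)
    weights:     dict name -> blend weight, typically in [0, 1]
    Returns the blended vertex list:
    v_i = neutral_i + sum_k w_k * (expression_k_i - neutral_i)."""
    out = []
    for i, (x, y, z) in enumerate(neutral):
        dx = dy = dz = 0.0
        for name, w in weights.items():
            ex, ey, ez = expressions[name][i]
            dx += w * (ex - x)
            dy += w * (ey - y)
            dz += w * (ez - z)
        out.append((x + dx, y + dy, z + dz))
    return out
```

Setting a weight to 0 reproduces the neutral face and 1 the full expression; the monocular-video fitting described in the article amounts to estimating such weights (plus geometry, reflectance and lighting parameters) frame by frame so the rendered model matches the footage.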