San Mateo, CA, United States

Research and Markets has announced the addition of the "Virtual Reality for Consumer Markets" report to its offering.

Industry players continue to fine-tune their products so as not to muddy the waters for all involved. These efforts are expected to bear fruit in the coming years: combined revenue for head-mounted displays (HMDs), VR accessories, and VR content is forecast to increase from $453.6 million in 2015 to $35.0 billion worldwide in 2021, representing a compound annual growth rate (CAGR) of 133%.

The year 2016 will be remembered as the debut of consumer virtual reality (VR), with key ambassadors in the form of Facebook/Oculus, HTC/Valve, Sony, Samsung, and a collective community of companies in China planting their stakes in the ground with formidable investments in jumpstarting a new computing platform. After a shaky start, Facebook's Oculus Rift and HTC/Valve's VIVE began selling in the U.S. in 3Q 2016 and are stabilizing their ecosystems and distribution in 4Q 2016 as they are joined by Sony with the debut of PlayStation VR.

A number of lessons have been learned since the 1990s, when consumer VR last generated this much hype, and huge strides have been made in creating compelling content and a convincing level of immersion. Getting users to experience VR technology firsthand, and therefore truly understand its potential, remains a challenge, but the emergence of low-cost mobile VR solutions is helping. Even so, some industry participants strongly believe that anything requiring the user to wear a cumbersome device will ultimately fail. The stakes are high given the huge amount of money invested in the industry by some of the world's biggest companies.

This report provides a comprehensive analysis of the market dynamics, technology issues, and competitive landscape for consumer VR HMDs, accessories, and content. It features global market forecasts for annual unit shipments and associated revenue for the period from 2014 through 2021, segmented by five world regions. HMDs are segmented into four product types: PC-based devices, console-based devices, all-in-one devices, and mobile VR headsets. VR accessories, such as gamepads and other VR-specific controllers, hand tracking devices, and 360° cameras, are also quantitatively analyzed. The content market is segmented into gaming and media.

- How large is the market opportunity for consumer VR hardware and content?
- How will the market be segmented by product type, content type, and world region?
- How will this market grow in the coming years and which factors will drive this growth?
- Which factors could inhibit growth during the forecast period?
- What are the main technology trends and issues in the consumer VR market?
- Who are the leading providers of consumer VR technology and how do their go-to-market strategies differ?

2. Market Issues
2.1. Introduction
2.2. Scope of Study; 2.2.1. Consumer VR Hardware Scope
2.3. Market Overview
2.4. Market Trends
2.5. Market Drivers: 2.5.1. Immersion Experiences; 2.5.2. Games Market; 2.5.3. Three-Dimensional User Interface; 2.5.4. User Interface Shift to Hands/Gesture Control; 2.5.5. Smartphone Upgrades; 2.5.6. Personal Computer Upgrades; 2.5.7. China; 2.5.8. VR Video; 2.5.9. Mobile Ecosystem/App Stores; 2.5.10. Web VR; 2.5.11. Cloud Gaming
2.6. Market Barriers: 2.6.1. Cost; 2.6.2. Complex, Multi-Element Purchase; 2.6.3. Quality of Experience (2.6.3.1. Virtual Reality Sickness; 2.6.3.2. Restricted Field of View; 2.6.3.3. Tethering; 2.6.3.4. Lack of Natural User Input; 2.6.3.5. Streaming Challenges; 2.6.3.6. Corrective Eyewear); 2.6.4. Trial and Error for Early Virtual Reality Applications
2.7. Use Cases: 2.7.1. Games; 2.7.2. Video Media Content; 2.7.3. Social VR; 2.7.4. Marketing (2.7.4.1. Retail E-Commerce; 2.7.4.2. Residential Buying/Renting; 2.7.4.3. Travel); 2.7.5. Wellness Self Help; 2.7.6. Fitness; 2.7.7. Spatial Computing
3. Technology Issues
3.1. Introduction
3.2. Tracking: 3.2.1. Inside-Out and Outside-In (3.2.1.1. Simultaneous Location and Mapping and Computer Vision); 3.2.2. Eye Tracking; 3.2.3. Hand Tracking Solutions; 3.2.4. Gesture Control
3.3. Field of View
3.4. Latency Technologies and Virtual Reality Sickness Prevention: 3.4.1. Galvanic Vestibular Stimulation; 3.4.2. Frame Tearing (3.4.2.1. Oculus Asynchronous Timewarp and Spacewarp; 3.4.2.2. VIVE Asynchronous Reprojection); 3.4.3. Field of View Restrictors
3.5. Display Technology
3.6. Graphics Processing Units
3.7. Cameras
3.8. Three-Dimensional Audio
3.9. Adaptive Streaming: 3.9.1. Bitmovin; 3.9.2. Pixvana
3.10. Seated versus Moving Experiences: 3.10.1. Wireless Connectivity Technologies; 3.10.2. Local Rendering
4. Key Industry Players
4.1. Introduction
4.2. Key Head-Mounted Display and Platform Players: 4.2.1. HTC; 4.2.2. Facebook (4.2.2.1. Content Initiatives; 4.2.2.2. Evolving Head-Mounted Displays and Virtual Reality Experience; 4.2.2.3. Social Virtual Reality); 4.2.3. Sony; 4.2.4. Google; 4.2.5. Microsoft; 4.2.6. Razer; 4.2.7. Starbreeze Studios and Acer; 4.2.8. NVIDIA; 4.2.9. Sulon Technologies
4.3. Key Enabling Technology Players: 4.3.1. VisiSonics; 4.3.2. Bitmovin; 4.3.3. Pixvana; 4.3.4. uSens; 4.3.5. Leap Motion; 4.3.6. vMocion; 4.3.7. Binary VR; 4.3.8. Improbable; 4.3.9. Movidius; 4.3.10. VR Lens Lab
4.4. Other Key Players: 4.4.1. Vroom; 4.4.2. Alibaba; 4.4.3. Amazon; 4.4.4. NextVR; 4.4.5. Wevr; 4.4.6. Baobab Studios; 4.4.7. Surreal VR; 4.4.8. AltspaceVR; 4.4.9. nDreams; 4.4.10. Unity Technologies; 4.4.11. Machina OBE
4.5. Other Selected Industry Participants
5. Market Forecasts
5.1. Introduction
5.2. Data Collection
5.3. Forecast Methodology: 5.3.1. Top-Level Head-Mounted Display Shipments; 5.3.2. Virtual Reality Accessories; 5.3.3. Average Selling Prices and Revenue
5.4. Virtual Reality Mass Market Penetration Estimates
5.5. Top-Level Annual Virtual Reality Revenue
5.6. Annual Virtual Reality Head-Mounted Display Shipments and Revenue
5.7. Annual Virtual Reality Accessories Shipments and Revenue
5.8. Annual Virtual Reality Content Revenue by Content Type
5.9. Consumer Virtual Reality Market by Region
5.10. Conclusions and Recommendations

For more information about this report visit http://www.researchandmarkets.com/research/zh6t3m/virtual_reality
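As a quick aside on the arithmetic behind such forecasts, a compound annual growth rate simply relates the start and end values through the number of growth years. A minimal Python sketch using the figures quoted above (the published 133% reflects the report's own base-year and year-by-year modelling assumptions, which are not spelled out here):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

start = 453.6e6   # 2015 revenue, USD (as quoted above)
end = 35.0e9      # 2021 forecast revenue, USD (as quoted above)

# The implied rate depends on how many growth years are assumed between the endpoints;
# the report's published CAGR comes from its own internal, year-by-year forecast model.
for years in (5, 6):
    print(f"{years} growth years: implied CAGR = {cagr(start, end, years):.0%}")
```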


Grant
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ICT-2009.3.6 | Award Amount: 3.44M | Year: 2010

The emergence of highly parallel, heterogeneous, often incompatible and highly diverse many-core processors poses major challenges to the European software-intensive industry. It is imperative that such architectures can be fully exploited without starting from scratch with each new design. In particular, there is an urgent need for techniques for efficient, productive and portable programming of heterogeneous many-cores.

PEPPHER will provide a unified framework for programming architecturally diverse, heterogeneous many-core processors to ensure performance portability. PEPPHER will advance the state of the art in its five technical work areas:
(1) Methods and tools for component-based software;
(2) Portable compilation techniques;
(3) Data structures and adaptive, autotuned algorithms;
(4) Efficient, flexible run-time systems; and
(5) Hardware support for autotuning, synchronization and scheduling.

PEPPHER is unique in proposing direct compilation to the target architectures. Portability is supported by powerful composition methods and a toolbox of adaptive algorithms. Heterogeneity is further managed by efficient run-time schedulers. The PEPPHER framework will thus ensure that applications execute with maximum efficiency on each supported platform.

PEPPHER is driven by challenging benchmarks from the industrial partners. Results will be widely disseminated through high-quality publications, workshops and summer schools, and an edited volume of major results. Techniques and software prototypes will be exploited by the industrial partners. A project website (www.peppher.eu) gives continuity to the dissemination effort.

The PEPPHER consortium unites Europe's leading experts and consists of world-class research centres and universities (INRIA, Chalmers, LIU, KIT, TUW, UNIVIE), a major company (Intel) and European multi-core SMEs (Codeplay and Movidius), and has the required expertise to accomplish the ambitious but realistic goals of PEPPHER.
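To make the component-based, performance-portability idea concrete, here is a minimal sketch (hypothetical names only, not the actual PEPPHER framework or API): one operation is offered in several implementation variants, and a simple calibration step picks whichever variant runs best on the platform at hand.

```python
import time

class MultiVariantComponent:
    """Hypothetical sketch: one operation, several interchangeable implementations
    ("variants"), with the fastest variant for this platform chosen by a one-off
    calibration run. Real frameworks would also consider energy, data placement, etc."""

    def __init__(self):
        self.variants = {}   # label -> callable
        self.best = None

    def add_variant(self, label, fn):
        self.variants[label] = fn

    def calibrate(self, sample_input):
        timings = {}
        for label, fn in self.variants.items():
            t0 = time.perf_counter()
            fn(sample_input)
            timings[label] = time.perf_counter() - t0
        self.best = min(timings, key=timings.get)

    def __call__(self, data):
        return self.variants[self.best](data)

def loop_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

# Example: two "sum" variants, stand-ins for what would really be CPU vs. accelerator code paths.
component = MultiVariantComponent()
component.add_variant("python_loop", loop_sum)
component.add_variant("builtin_sum", sum)
component.calibrate(range(10_000))
print(component.best, component(range(10_000)))
```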


Grant
Agency: European Commission | Branch: H2020 | Program: IA | Phase: ICT-01-2014 | Award Amount: 4.93M | Year: 2015

Vision, our richest sensor, allows inferring big data from reality. Arguably, to be smart everywhere we will need to have eyes everywhere. Coupled with advances in artificial vision, the possibilities are endless in terms of wearable applications, augmented reality, surveillance, ambient-assisted living, etc. Currently, computer vision is rapidly moving beyond academic research and factory automation. On the other hand, mass-market mobile devices owe much of their success to their impressive imaging capabilities, so the question arises whether such devices could be used as eyes everywhere.

Vision is the most demanding sensor in terms of power consumption and required processing power and, in this respect, existing mass-consumer mobile devices have three problems: 1) power consumption precludes their always-on capability; 2) they would have unused sensors for most vision-based applications; and 3) since they have been designed for a definite purpose (i.e. as cell phones, PDAs and readers), people will not consistently use them for other purposes.

Our objective in this project is to build an optimized core vision platform that can work independently and also embedded into all types of artefacts. The envisioned open hardware must be combined with carefully designed APIs that maximize inferred information per milliwatt and adapt the quality of inferred results to each particular application. This will not only mean more hours of continuous operation, it will also allow the creation of novel applications and services that go beyond what current vision systems can do, which are either personal/mobile or always-on but not both at the same time.

Thus, the Eyes of Things project aims at developing a ground-breaking platform that combines: a) a need for more intelligence in future embedded systems, b) computer vision moving rapidly beyond academic research and factory automation and c) the phenomenal technological advances in mobile processing power.
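As a rough illustration of the "inferred information per milliwatt" objective (all numbers and names below are hypothetical, not the project's), an energy-aware vision platform can be thought of as choosing, for a given power budget, the most informative processing setting that still fits within it:

```python
# Hypothetical vision settings: (label, estimated power in milliwatts,
# detections per second as a crude proxy for inferred information).
SETTINGS = [
    ("full_rate_full_res", 900.0, 30.0),
    ("half_rate_full_res", 520.0, 15.0),
    ("half_rate_low_res",  260.0, 12.0),
    ("wakeup_only",         40.0,  1.0),
]

def best_setting(power_budget_mw):
    """Most informative setting that fits the budget; ties broken by info per milliwatt."""
    feasible = [s for s in SETTINGS if s[1] <= power_budget_mw]
    if not feasible:
        return None
    return max(feasible, key=lambda s: (s[2], s[2] / s[1]))

for budget in (1000.0, 300.0, 50.0):
    label, power_mw, info = best_setting(budget)
    print(f"budget {budget:6.0f} mW -> {label} ({info / power_mw:.3f} detections/s per mW)")
```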


Grant
Agency: European Commission | Branch: FP7 | Program: CP | Phase: ICT-2013.3.4 | Award Amount: 3.31M | Year: 2013

The EXCESS project aims at providing radically new energy execution models forming the foundations for energy-efficient computing paradigms that will enable two orders of magnitude improvements in energy efficiency for computing systems. A holistic approach that involves both hardware and software aspects together has the best chance of successfully addressing the energy efficiency problem and discovering innovative solutions. The models proposed by EXCESS will aim to describe and bridge embedded processor models with general-purpose ones. EXCESS will take a holistic approach and introduce novel programming methodologies to drastically simplify the development of energy-aware applications that will be energy-portable across a wide range of computing systems while preserving relevant aspects of performance.

The EXCESS project will be driven by the following technical components, to be developed during the project:
- Complete software stacks (including programming models, libraries/algorithms and runtimes) for energy-efficient computing.
- A uniform, generic development methodology and prototype software tools that enable leveraging additional optimisation opportunities for energy-efficient computing by coordinating optimisation knobs at the different levels of the system stack, enabled by appropriate modelling abstractions at each level.
- Configurable energy-aware simulation systems for future energy-efficient architectures.

The EXCESS consortium unites Europe's leading experts in both high-performance computing and embedded computing. The consortium consists of world-class research centres and universities (Chalmers, LIU, UiT), a high performance computing centre (HLRS at USTUTT), and a European embedded multi-core SME (Movidius), and has the required expertise to accomplish the ambitious but realistic goals of EXCESS.
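A minimal sketch of the kind of trade-off an energy-aware tool chain has to expose to the programmer (the configurations and numbers are hypothetical): the fastest configuration is not necessarily the one that consumes the least energy, so the stack must reason about energy as average power multiplied by runtime, under a performance constraint.

```python
# Hypothetical configurations: (label, runtime in seconds, average power in watts).
CONFIGS = [
    ("high_clock_8_threads", 2.0, 95.0),
    ("low_clock_8_threads",  3.5, 40.0),
    ("low_clock_4_threads",  5.0, 25.0),
]

def energy_joules(runtime_s, avg_power_w):
    return runtime_s * avg_power_w

def pick_config(deadline_s):
    """Least-energy configuration that still meets the deadline."""
    feasible = [(label, energy_joules(t, p)) for label, t, p in CONFIGS if t <= deadline_s]
    return min(feasible, key=lambda c: c[1]) if feasible else None

print(pick_config(4.0))   # the slower low-clock configuration wins on energy (140 J vs. 190 J)
print(pick_config(2.5))   # a tight deadline forces the faster, more power-hungry configuration
```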


News Article | December 7, 2016
Site: www.fastcompany.com

In recent years, advanced machine learning techniques have enabled computers to recognize objects in images, understand commands from spoken sentences, and translate written language. But while consumer products like Apple's Siri and Google Translate might operate in real time, actually building the complex mathematical models these tools rely on can take traditional computers large amounts of time, energy, and processing power. As a result, chipmakers like Intel, graphics powerhouse Nvidia, mobile computing kingpin Qualcomm, and a number of startups are racing to develop specialized hardware to make modern deep learning significantly cheaper and faster.

The importance of such chips for developing and training new AI algorithms quickly cannot be overstated, according to some AI researchers. "Instead of months, it could be days," Nvidia CEO Jen-Hsun Huang said in a November earnings call, discussing the time required to train a computer to do a new task. "It's essentially like having a time machine."

While Nvidia is primarily associated with video cards that help gamers play the latest first-person shooters at the highest resolution possible, the company has also been focusing on adapting its graphics processing unit chips, or GPUs, to serious scientific computation and data center number crunching. "In the last 10 years, we've actually brought our GPU technology outside of graphics, made it more general purpose," says Ian Buck, vice president and general manager of Nvidia's accelerated computing business unit.

Speedily drawing video game graphics and other real-time images relies on GPUs that perform particular types of mathematical calculations, such as matrix multiplications, and handle large quantities of basic computations in parallel. Researchers have found those same characteristics are also useful for other applications of similar math, including running climate simulations and modeling attributes of complex biomolecular structures. And lately, GPUs have proven adept at training deep neural networks, the mathematical structures loosely modeled on the human brain that are the workhorses of modern machine learning. As it happens, they also rely heavily on repeated parallel matrix calculations.

"Deep learning is peculiar in that way: It requires lots and lots of dense matrix multiplication," says Naveen Rao, vice president and general manager for artificial intelligence solutions at Intel and founding CEO of Nervana Systems, a machine learning startup acquired by Intel earlier this year. "This is different from a workload that supports a word processor or spreadsheet."

The similarities between graphics and AI math operations have given Nvidia a head start among competitors. The company reported that data center revenue more than doubled year-over-year in the quarter ending Oct. 31 to $240 million, partly due to deep learning-related demand. Other GPU makers are also likely excited about the new demand for product, after reports that industrywide GPU sales were declining amid decreasing desktop computer sales. Nvidia dominates the existing GPU market, with more than 70% market share, and its stock price has nearly tripled in the past year as its chips find new applications.

In 2012, at the annual ImageNet Large Scale Visual Recognition Challenge (a well-known image classification competition), a team used GPU-powered deep learning for the first time, winning the contest and significantly outperforming previous years' winners.
"They got what was stuck at sort of a 70% accuracy range up into the 85% [range]," Buck says. GPUs have become standard equipment in data centers for companies working on machine learning. Nvidia boasts that its GPUs are used in cloud-based machine learning services offered by Amazon and Microsoft. But Nvidia and other companies are still working on the next generations of chips that they say will be able to both train deep learning systems and use them to process information more efficiently. Ultimately, the underlying designs of existing GPUs are adapted for graphics, not artificial intelligence, says Nigel Toon, CEO of Graphcore, a machine learning hardware startup with offices in Bristol, in the U.K. GPU limitations lead programmers to structure data in particular ways to most efficiently take advantage of the chips, which Toon says can be hard to do for more complicated data, like sequences of recorded video or speech. Graphcore is developing chips it calls "intelligent processing units" that Toon says are designed from the ground up with deep learning in mind. "And hopefully, what we can do is remove some of those restrictions," he says. Chipmakers say machine learning will benefit from specialized processors with speedy connections between parallel onboard computing cores, fast access to ample memory for storing complex models and data, and mathematical operations optimized for speed over precision. Google revealed in May that its Go computer AlphaGo, which beat the Go world champion, Lee Sedol, earlier this year, was powered by its own custom chips called tensor processing units. And Intel announced in November that it expects to roll out non-GPU chips within the next three years, partially based on technology acquired from Nervana, that could train machine learning models 100 times faster than current GPUs and enable new, more complex algorithms. "A lot of the neural network solutions we see have artifacts of the hardware designs in them," Rao says. These artifacts can include curbs on complexity because of memory and processing speed limits. Intel and its rivals are also preparing for a future in which machine learning models are trained and deployed on portable hardware, not in data centers. That will be essential for devices like self-driving cars, which need to react to what's going on around them and potentially learn from new input faster than they can relay data to the cloud, says Aditya Kaul, a research director at market intelligence firm Tractica. "Over time, you’re going to see that transition from the cloud to the endpoint," he says. That will mean a need for small, energy-efficient computers optimized for machine learning, especially for portable devices. "When you’re wearing a headset, you don’t want to be wearing something with a brick-sized battery on your head or around your belt," says Jack Dashwood, director of marketing communications at the San Mateo, California-based machine learning startup Movidius. That company, which Intel announced plans to acquire in September, provides computer vision-focused processors for devices including drones from Chinese maker DJI. Nvidia, too, continues to release GPUs with increasing levels of support for machine learning-friendly features like fast, low-precision math, and AI platforms geared specifically for next-generation applications like self-driving cars. 
Electric carmaker Tesla Motors announced in October that all of its vehicles will be equipped with computing systems for autonomous driving, using Nvidia hardware to support neural networks to process camera and radar input. Nvidia also recently announced plans to supply hardware to a National Cancer Institute and Department of Energy initiative to study cancer and potential treatments within the federal Cancer Moonshot project. "[Nvidia] were kind of early to spot this trend around machine learning," Kaul says. "They’re in a very good position to innovate going forward."
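To illustrate the point made earlier in the article about dense matrix multiplication: a fully connected neural-network layer is essentially one large matrix product plus a simple element-wise step, which is exactly the kind of highly parallel arithmetic GPUs were built for. A minimal NumPy sketch, not tied to any particular chip:

```python
import numpy as np

rng = np.random.default_rng(0)

# A batch of 64 inputs with 256 features passing through one dense layer of 512 units:
# the core of the work is a single (64 x 256) @ (256 x 512) matrix multiplication.
x = rng.standard_normal((64, 256)).astype(np.float32)
weights = rng.standard_normal((256, 512)).astype(np.float32)
bias = np.zeros(512, dtype=np.float32)

activations = np.maximum(x @ weights + bias, 0.0)   # matmul + bias + ReLU

# The "fast, low-precision math" mentioned above trades precision for throughput,
# e.g. by doing the same product in float16 instead of float32.
activations_fp16 = np.maximum(x.astype(np.float16) @ weights.astype(np.float16), 0.0)

print(activations.shape, activations.dtype, activations_fp16.dtype)
```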


News Article | December 9, 2016
Site: www.businesswire.com

SAN JOSE, Calif.--(BUSINESS WIRE)--The Global Semiconductor Alliance (GSA) is proud to announce the award recipients honored at the 2016 GSA Awards Dinner Celebration that took place in Santa Clara, California. Over the past 22 years the awards program has recognized the achievements of semiconductor companies in several categories ranging from outstanding leadership to financial accomplishments, as well as overall respect within the industry. The GSA’s most prestigious award, the Dr. Morris Chang Exemplary Leadership Award, was presented to Mr. Lip-Bu Tan, President and CEO of Cadence Design Systems, Inc. and Founder and Chairman of Walden International. GSA members identified the Most Respected Public Semiconductor Company Award winners by casting ballots for the industry’s most respected companies judging by their products, vision and future opportunities. Winners included the “Most Respected Emerging Public Semiconductor Company Achieving $100 Million to $500 Million in Annual Sales Award” presented to Nordic Semiconductor; “Most Respected Public Semiconductor Company Achieving $500 Million to $1 Billion in Annual Sales Award” awarded to Silicon Labs; “Most Respected Public Semiconductor Company Achieving $1 Billion to $5 Billion in Annual Sales Award” awarded to Analog Devices, Inc.; and “Most Respected Public Semiconductor Company Achieving Greater than $5 Billion in Annual Sales Award” received by NVIDIA Corporation. The “Most Respected Private Company Award” was voted on by GSA membership and presented to Quantenna Communications, Inc. Other winners include “Best Financially Managed Company Achieving up to $1 Billion in Annual Sales Award” presented to Silicon Motion Technology Corporation (Silicon Motion, Inc.) and “Best Financially Managed Semiconductor Company Achieving Greater than $1 Billion in Annual Sales Award” earned by NVIDIA Corporation. Both companies were recognized based on their continued demonstration of the best overall financial performance according to specific financial metrics. GSA’s Private Awards Committee, comprised of venture capitalists and select industry entrepreneurs, chose the “Start-Up to Watch Award” winner by identifying a company that has demonstrated the potential to positively change its market or the industry through the innovative use of semiconductor technology or a new application for semiconductor technology. This year’s winner is Innovium, Inc. As a global organization, the GSA recognizes outstanding companies headquartered in the Europe/Middle East/Africa and Asia-Pacific regions. Chosen by the leadership council of each respective region, award winners are semiconductor companies that demonstrate the most strength when measuring products, vision, leadership and success in the marketplace. The recipient of this year’s “Outstanding Asia-Pacific Semiconductor Company Award” is MediaTek Inc. and the recipient of this year’s “Outstanding EMEA Semiconductor Company Award” is Movidius. Semiconductor financial analyst Quinn Bolton from Needham & Company presented this year’s “Favorite Analyst Semiconductor Company Award” to Microsemi Corporation. The criteria used in selecting this year’s winner included historical, as well as projected data, such as stock price, earnings per share, revenue forecasts and product performance. The Global Semiconductor Alliance (GSA) mission is to support the global semiconductor industry and its partners by offering a comprehensive view of the industry. 
This enables members to better anticipate market opportunities and industry trends, preparing them for technology and business shifts. It addresses the challenges within the supply chain including IP, EDA/design, wafer manufacturing, test and packaging to enable industry-wide solutions. Providing a platform for meaningful global collaboration through efficient power networking for global semiconductor leaders and their partners, the Alliance identifies and articulates market opportunities, encourages and supports entrepreneurship, and provides members with comprehensive and unique market intelligence. Members include companies throughout the supply chain representing 30 countries across the globe. www.gsaglobal.org


News Article | October 28, 2016
Site: www.wired.com

In less than 12 hours, three different people offered to pay me if I'd spend an hour talking to a stranger on the phone. All three said they'd enjoyed reading an article I'd written about Google building a new computer chip for artificial intelligence, and all three urged me to discuss the story with one of their clients. Each described this client as the manager of a major hedge fund, but wouldn't say who it was.

The requests came from what are called expert networks—research firms that connect investors with people who can help them understand particular markets and provide a competitive edge (sometimes, it seems, through insider information). These expert networks wanted me to explain how Google's AI processor would affect the chip market. But first, they wanted me to sign a non-disclosure agreement. I declined.

These unsolicited, extremely specific, high-pressure requests—which arrived about three weeks ago—underscore the radical changes underway in the enormously lucrative computer chip market, changes driven by the rise of artificial intelligence. Those hedge fund managers see these changes coming, but aren't quite sure how they'll play out. Of course, no one is quite sure how they'll play out.

Today, Internet giants like Google, Facebook, Microsoft, Amazon, and China's Baidu are exploring a wide range of chip technologies that can drive AI forward, and the choices they make will shift the fortunes of chipmakers like Intel and nVidia. But at this point, even the computer scientists within those online giants don't know what the future holds.

These companies run their online services from data centers packed with thousands of servers, each driven by a chip called a central processing unit, or CPU. But as they embrace a form of AI called deep neural networks, these companies are supplementing CPUs with other processors. Neural networks can learn tasks by analyzing vast amounts of data, including everything from identifying faces and objects in photos to translating between languages, and they require more than just CPU power. And so Google built the Tensor Processing Unit, or TPU. Microsoft is using a processor called a field programmable gate array, or FPGA. Myriad companies employ machines equipped with vast numbers of graphics processing units, or GPUs. And they're all looking at a new breed of chip that could accelerate AI from inside smartphones and other devices.

Any choice these companies make matters, because their online operations are so vast. They buy and operate far more computer hardware than anyone else on Earth, a gap that will only widen with the continued importance of cloud computing. If Google chooses one processor over another, it can fundamentally shift the chip industry.

The TPU poses a threat to companies like Intel and nVidia because Google makes this chip itself. But GPUs also play an enormous role within Google and its ilk, and nVidia is the primary manufacturer of these specialized chips. Meanwhile, Intel has inserted itself into the mix by acquiring Altera, the company that sells all those FPGAs to Microsoft. At $16.7 billion, it was Intel's largest acquisition ever, which underscores just how much the chip market is changing.

But sorting all this out is difficult—in part because neural networks operate in two stages. The first is the training stage, where a company like Google trains the neural network to perform a given task, like recognizing faces in photos or translating from one language to another.
The second is the execution stage, where people like you and me actually use the neural net—where we, say, post a photo of our high school reunion to Facebook and it automatically tags everyone in it. These two stages are quite different, and each requires a different style of processing.

Today, GPUs are the best option for training. Chipmakers designed GPUs to render images for games and other highly graphical applications, but in recent years, companies like Google discovered these chips can also provide an energy-efficient means of juggling the mind-boggling array of calculations required to train a neural network. This means they can train more neural nets with less hardware. Microsoft AI researcher XD Huang calls GPUs "the real weapon." Recently, his team completed a system that can recognize certain conversational speech as well as humans, and it took them about a year. Without GPUs, he says, it would have taken five. After Microsoft published a research paper on this system, he opened a bottle of champagne at the home of Jen-Hsun Huang, the CEO of nVidia.

But companies also need chips that can rapidly execute neural networks, a process called inference. Google built the TPU specifically for this. Microsoft uses FPGAs. And Baidu is using GPUs, which aren't as well suited to inference as they are to training, but can do the job with the right software in place.

At the same time, others are building chips to help execute neural networks on smartphones and other devices. IBM is building such a chip, though some wonder how effective it might be. And Intel has agreed to acquire Movidius, a company that is already pushing chips into devices.

Intel understands that the market is changing. Four years ago, the chip maker told us it sells more server processors to Google than it sells to all but four other companies—so it sees firsthand how Google and its ilk can shift the chip market. As a result, it's now placing bets everywhere. Beyond snapping up Altera and Movidius, it has agreed to buy a third AI chip company called Nervana. That makes sense, because the market is only starting to develop. "We're now at the precipice of the next big wave of growth," Intel vice president Jason Waxman recently told me, "and that's going to be driven by artificial intelligence." The question is where the wave will take us.
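The two stages described above stress hardware differently: training runs many forward and backward passes and repeatedly updates the weights, while inference is a single forward pass over frozen weights. A tiny, self-contained NumPy sketch of the distinction, using a one-layer logistic model purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((64, 10)).astype(np.float32)          # a batch of training inputs
y = (x.sum(axis=1, keepdims=True) > 0).astype(np.float32)     # toy labels
w = np.zeros((10, 1), dtype=np.float32)

def forward(inputs, weights):
    return 1.0 / (1.0 + np.exp(-(inputs @ weights)))           # sigmoid(inputs . weights)

# Training stage: many forward passes plus gradients and weight updates (GPU-friendly work).
for _ in range(200):
    p = forward(x, w)
    grad = x.T @ (p - y) / len(x)                              # gradient of the logistic loss
    w -= 0.5 * grad

# Inference (execution) stage: a single forward pass over the now-frozen weights.
new_input = rng.standard_normal((1, 10)).astype(np.float32)
print("prediction:", float(forward(new_input, w)[0, 0]))
```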


News Article | August 26, 2016
Site: phys.org

Energy consumption is one of the key challenges of modern computing, whether for wireless embedded client devices or high performance computing centers. The ability to develop energy efficient software is crucial, as the use of data and data processing keeps increasing in all areas of society. The need for power efficient computing is not only due to the environmental impact. Rather, we need energy efficient computing in order to even deliver on the trends predicted.

The EU funded Excess project, which finishes August 31, set out three years ago to take on what the researchers perceived as a lack of holistic, integrated approaches covering all system layers from hardware to user-level software, and the limitations this caused for the exploitation of existing solutions and their energy efficiency. They initially analyzed where energy and performance are wasted, and based on that knowledge they have developed a framework that should allow for rapid development of energy efficient software.

"When we started this research program there was a clear lack of tools and mathematical models to help the software engineer to program in an energy efficient way, and also to reason abstractly about the power and energy behavior of her software," says Philippas Tsigas, professor in Computer Engineering at Chalmers University of Technology, and project leader of Excess. "The holistic approach of the project involves both hardware and software components together, enabling the programmer to make power-aware architectural decisions early. This allows for larger energy savings than previous approaches, where software power optimization was often applied as a secondary step, after the initial application was written."

The Excess project has taken major steps towards providing a set of tools and models that allow software developers and system designers to program in an energy efficient way. The tool box spans from fundamentally new energy-saving hardware components, such as the Movidius Myriad platform, to sophisticated efficient libraries and algorithms.

Tests run on large data streaming aggregations, a common operation used in real-time data analytics, have shown impressive results. Using the Excess framework, the programmer can produce a solution that is 54 times more energy efficient than a standard implementation on a high-end PC processor. The holistic Excess approach first exploits the hardware benefits of using an embedded processor, and then shows how best to split the computations inside the processor to further enhance performance.

Movidius, a partner in the Excess project and developer of the Myriad platform of vision processors, has integrated both technology and methodology developed in the project into its standard development kit hardware and software offering.

In the embedded processor business, there has been a gradual migration of HPC-class features onto embedded platforms. The rapid development of autonomous vehicles such as cars and drones, driving assist systems, and also the general development of home assist robotics (e.g. vacuum cleaners and lawnmowers) has led to the porting of various computer vision algorithms to embedded platforms. Traditionally these algorithms were developed on high performance desktop computers and HPC systems, making them difficult to re-deploy to embedded systems. Another problem was that the algorithms were not developed with energy efficiency in mind.
But the Excess project has enabled and directed the development of tools and software development methods to aid the porting of HPC applications to the embedded environment in an energy efficient way.
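The "54 times more energy efficient" result is, at bottom, a ratio of energy per unit of work between two implementations. A minimal sketch of how such a comparison is expressed (the workload is a toy streaming aggregation of the kind the article mentions; the power and runtime numbers are placeholders, not the project's measurements):

```python
from collections import defaultdict

def streaming_aggregate(events):
    """Toy streaming aggregation: a running sum per key, the kind of kernel described above."""
    totals = defaultdict(float)
    for key, value in events:
        totals[key] += value
    return dict(totals)

def energy_per_tuple(avg_power_watts, runtime_seconds, tuples_processed):
    return avg_power_watts * runtime_seconds / tuples_processed   # joules per tuple

# Placeholder figures for a desktop CPU run versus an embedded processor run of the same job.
desktop = energy_per_tuple(avg_power_watts=80.0, runtime_seconds=10.0, tuples_processed=1_000_000)
embedded = energy_per_tuple(avg_power_watts=2.0, runtime_seconds=15.0, tuples_processed=1_000_000)

print(streaming_aggregate([("a", 1.0), ("b", 2.0), ("a", 3.0)]))
print(f"efficiency gain: {desktop / embedded:.0f}x less energy per tuple on the embedded platform")
```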


Google's most recent partnership with Movidius will make our smartphones smarter. More specifically, the cameras in our smartphones could soon be equipped with machine learning technology that could help assist the blind and quickly translate foreign signs.

Movidius has worked with Google before on one of the Alphabet-owned company's famed projects, Project Tango. Using a mix of cameras and sensors, Movidius' technology in Project Tango allows devices to create three-dimensional maps of indoor spaces. As a result, future smartphones could have the ability not just to know where they are, but to know how they're moving through space, too. Though this latest collaboration between Movidius and Google hasn't been branded with a project name yet, it has the potential to equip future devices to also know what they're looking at via the camera.

In some ways, Google already allows for this ability in Android devices. Google's Photos app can already recognize people and objects in photos. Search "dog", for example, and the app will pull up all the photos of dogs a user has in their Google Photos library; search for "Paris" and a user will see pictures of themselves posing in front of the Eiffel Tower. The Photos app, however, needs to be connected to the Internet to perform these intelligent functions. That's because all of the complex computing involved has to call back to a distant data center, where algorithms do the grunt work of analyzing our photos and processing our requests.

Movidius' tech packs those same machine learning abilities into a small chip that can fit inside the body of a smartphone. Movidius' latest line of these chips, the Myriad 2 vision processing units (VPUs), will give next-generation devices autonomous abilities. Combined with Google's already powerful machine learning infrastructure, Android phones, for example, would be free from the cloud to perform tasks like speech and image recognition without any latency and while cutting down on data usage. As Movidius' chip would already be a part of the device, all of this processing would happen in real time – no loading times to wait for anymore.

Speech and image recognition on Android smartphones is just the beginning, too. Deeper integration into autonomous drones and vehicles could allow for the level of speedy intelligence that's required in such situations. A driverless car can't wait for instructions from the cloud, for example, when an accident could be waiting around the corner. When exactly we'll see this real-time intelligence in real life, however, is unclear. "This collaboration is going to lead to a new generation of devices that Google will be launching. And they will launch in the not-too-distant future," says Movidius' CEO, Remi El-Ouazzane.
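The architectural difference described above can be sketched in a few lines. All function names and figures here are hypothetical placeholders rather than Google or Movidius APIs: the cloud path pays an upload plus a network round trip, while the on-device path runs a local model and sends nothing over the network.

```python
def recognize_in_cloud(image_bytes, upload_bytes_per_s=1_000_000, round_trip_s=0.15):
    """Hypothetical cloud path: the image is uploaded and a data center sends back labels."""
    upload_s = len(image_bytes) / upload_bytes_per_s
    return {"labels": ["dog"],                              # canned answer purely for illustration
            "estimated_latency_s": round(upload_s + round_trip_s, 3),
            "bytes_sent": len(image_bytes)}

def recognize_on_device(image_bytes, inference_s=0.03):
    """Hypothetical on-device path: a local model (e.g. on a vision chip) answers directly."""
    return {"labels": ["dog"],                              # canned answer purely for illustration
            "estimated_latency_s": inference_s,
            "bytes_sent": 0}

image = bytes(500_000)   # stand-in for a ~500 KB photo
print("cloud:    ", recognize_in_cloud(image))
print("on-device:", recognize_on_device(image))
```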


News Article | September 6, 2016
Site: www.technologyreview.com

By buying the startup Movidius, Intel hopes to get a piece of the action in computer-vision chips. Having missed out on the mobile chip market, and lagging behind in supplying the hardware for the burgeoning field of AI, Intel wants to acquire its way to the vanguard of the next emerging trend. Its latest move: buying up Movidius, a firm that makes computer vision chips used in drones and smart devices. The self-stated mission of Movidius, one of MIT Technology Review's 50 Smartest Companies of 2016, is to give machines “the power of sight,” a goal they primarily achieve using their “vision processing units.” The chips have already found their way into drones made by DJI, where they are used to sense and avoid obstacles, and Google’s augmented reality system Tango. In a statement, Movidius cited Intel’s RealSense technology as a reason why the deal was a good fit—Intel was already on the path to advanced computer vision using its own 3-D cameras. Josh Walden, a senior vice president at Intel, says software as much as hardware makes Movidius useful to the company. Movidius’s deep-learning algorithms are tailor-made for computer vision, and Walden says Intel sees great promise (read: big bucks) in the realm of devices that can see and make sense of their surroundings. Nvidia, which dominates the market for deep-learning-focused chips, is bound to give all comers a run for their money. But Intel is betting that even though the deep-learning market is still very small compared to the company’s overall revenue, purchasing Movidius gets it in on the ground floor of the next big thing.
