Grant
Agency: Cordis | Branch: H2020 | Program: IA | Phase: ICT-01-2014 | Award Amount: 4.93M | Year: 2015

Vision, our richest sensor, allows us to infer vast amounts of data from reality. Arguably, to be smart everywhere we will need to have eyes everywhere. Coupled with advances in artificial vision, the possibilities are endless in terms of wearable applications, augmented reality, surveillance, ambient-assisted living, etc. Currently, computer vision is rapidly moving beyond academic research and factory automation. On the other hand, mass-market mobile devices owe much of their success to their impressive imaging capabilities, so the question arises whether such devices could be used as eyes everywhere. Vision is the most demanding sensor in terms of power consumption and required processing power and, in this respect, existing mass-consumer mobile devices have three problems: 1) power consumption precludes their always-on capability, 2) they would have unused sensors for most vision-based applications, and 3) since they have been designed for a specific purpose (i.e. as cell phones, PDAs and readers), people will not consistently use them for other purposes. Our objective in this project is to build an optimized core vision platform that can work independently and also be embedded into all types of artefacts. The envisioned open hardware must be combined with carefully designed APIs that maximize inferred information per milliwatt and adapt the quality of inferred results to each particular application. This will not only mean more hours of continuous operation; it will also enable novel applications and services that go beyond what current vision systems can do, which are either personal/mobile or always-on but not both at the same time. Thus, the Eyes of Things project aims to develop a ground-breaking platform that brings together: a) the need for more intelligence in future embedded systems, b) computer vision's rapid move beyond academic research and factory automation, and c) the phenomenal technological advances in mobile processing power.
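To make the abstract's notion of "inferred information per milliwatt" more concrete, here is a purely hypothetical Python sketch; none of the names, operating points or numbers come from the Eyes of Things project. The idea sketched is that an application declares a power budget and a minimum acceptable quality, and the platform picks the configuration that delivers the most inference per milliwatt within those constraints.

```python
# Hypothetical sketch (not the actual Eyes of Things API): an application states a
# power budget and a minimum acceptable quality, and the platform picks the
# configuration that maximises inferred quality per milliwatt within that budget.

from dataclasses import dataclass

@dataclass
class OperatingPoint:
    name: str            # e.g. "tiny-detector @ 5 fps"
    milliwatts: float    # average power draw of this configuration
    quality: float       # expected accuracy/recall of the inferred results (0..1)

# Illustrative numbers only; a real platform would measure these per device.
OPERATING_POINTS = [
    OperatingPoint("tiny-detector @ 5 fps",    80.0, 0.55),
    OperatingPoint("small-detector @ 10 fps", 250.0, 0.72),
    OperatingPoint("full-detector @ 30 fps",  900.0, 0.90),
]

def select_operating_point(power_budget_mw: float, min_quality: float) -> OperatingPoint:
    """Return the configuration that maximises quality per milliwatt within budget."""
    feasible = [p for p in OPERATING_POINTS
                if p.milliwatts <= power_budget_mw and p.quality >= min_quality]
    if not feasible:
        raise ValueError("no configuration satisfies the requested budget and quality")
    return max(feasible, key=lambda p: p.quality / p.milliwatts)

# An always-on wearable might ask for the most inference per milliwatt under 300 mW:
print(select_operating_point(power_budget_mw=300.0, min_quality=0.5).name)
```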


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2009.3.6 | Award Amount: 3.44M | Year: 2010

The emergence of highly parallel, heterogeneous, often incompatible and highly diverse many-core processors poses major challenges to the European software-intensive industry. It is imperative that such architectures can be fully exploited without starting from scratch with each new design. In particular, there is an urgent need for techniques for efficient, productive and portable programming of heterogeneous many-cores.

PEPPHER will provide a unified framework for programming architecturally diverse, heterogeneous many-core processors to ensure performance portability. PEPPHER will advance the state of the art in its five technical work areas: (1) methods and tools for component-based software; (2) portable compilation techniques; (3) data structures and adaptive, autotuned algorithms; (4) efficient, flexible run-time systems; and (5) hardware support for autotuning, synchronization and scheduling.

PEPPHER is unique in proposing direct compilation to the target architectures. Portability is supported by powerful composition methods and a toolbox of adaptive algorithms. Heterogeneity is further managed by efficient run-time schedulers. The PEPPHER framework will thus ensure that applications execute with maximum efficiency on each supported platform.

PEPPHER is driven by challenging benchmarks from the industrial partners. Results will be widely disseminated through high-quality publications, workshops and summer schools, and an edited volume of major results. Techniques and software prototypes will be exploited by the industrial partners. A project website (www.peppher.eu) gives continuity to the dissemination effort.

The PEPPHER consortium unites Europe's leading experts and consists of world-class research centres and universities (INRIA, Chalmers, LIU, KIT, TUW, UNIVIE), a major company (Intel) and European multi-core SMEs (Codeplay and Movidius), and has the required expertise to accomplish the ambitious but realistic goals of PEPPHER.
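The component-based approach to performance portability can be illustrated with a small, purely hypothetical Python sketch; this is not PEPPHER's actual framework or API. A single component entry point hides several implementation variants, and a trivial "runtime" falls back to a portable variant when a faster backend is unavailable on the current platform.

```python
# Hypothetical sketch of component-based performance portability
# (not PEPPHER's actual framework): one component interface, several
# implementation variants, and a trivial runtime that picks a variant
# depending on what the platform supports.

def dot_numpy(a, b):
    # Variant that exploits a vectorised library when it is available.
    import numpy as np
    return float(np.dot(np.asarray(a), np.asarray(b)))

def dot_cpu(a, b):
    # Portable fallback variant: plain Python on the CPU.
    return sum(x * y for x, y in zip(a, b))

VARIANTS = [dot_numpy, dot_cpu]   # ordered by expected performance

def dot(a, b):
    """Component entry point: try the fastest variant the platform supports."""
    for variant in VARIANTS:
        try:
            return variant(a, b)
        except ImportError:
            continue          # this variant's backend is missing on this platform
    raise RuntimeError("no usable variant")

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))   # 32.0 on any platform
```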


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2013.3.4 | Award Amount: 3.31M | Year: 2013

The EXCESS project aims at providing radically new energy execution models, forming foundations for energy-efficient computing paradigms that will enable two orders of magnitude improvement in energy efficiency for computing systems. A holistic approach that involves both hardware and software aspects together has the best chance of successfully addressing the energy efficiency problem and discovering innovative solutions. The models proposed by EXCESS will aim to describe and bridge embedded processor models with general-purpose ones. EXCESS will take a holistic approach and introduce novel programming methodologies to drastically simplify the development of energy-aware applications that will be energy-portable across a wide range of computing systems while preserving relevant aspects of performance.

The EXCESS project will be driven by the following technical components, to be developed during the project:

- Complete software stacks (including programming models, libraries/algorithms and runtimes) for energy-efficient computing.
- A uniform, generic development methodology and prototype software tools that leverage additional optimisation opportunities for energy-efficient computing by coordinating optimisation knobs at the different levels of the system stack, enabled by appropriate modelling abstractions at each level.
- Configurable energy-aware simulation systems for future energy-efficient architectures.

The EXCESS consortium unites Europe's leading experts in both high-performance computing and embedded computing. The consortium consists of world-class research centres and universities (Chalmers, LIU, UiT), a high-performance computing centre (HLRS at USTUTT), and a European embedded multi-core SME (Movidius), and has the required expertise to accomplish the ambitious but realistic goals of EXCESS.
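The core intuition behind energy-aware tuning is that energy is power multiplied by time, so the fastest configuration is not necessarily the most energy-efficient one. The following Python sketch is purely illustrative; the configurations and numbers are invented and are not EXCESS tools or results. It picks the lowest-energy configuration that still meets a deadline.

```python
# Hypothetical illustration of energy-aware tuning (not EXCESS's actual tools):
# since energy = power x time, the fastest configuration is not automatically
# the most energy-efficient. Pick the configuration that minimises energy
# while still meeting a deadline. Numbers are made up for illustration.

CONFIGS = {
    # name: (average power in watts, execution time in seconds)
    "big-core @ 2.0 GHz":    (12.0, 1.0),
    "big-core @ 1.2 GHz":    (6.0,  1.6),
    "little-core @ 1.0 GHz": (2.0,  3.5),
}

def energy_joules(power_w: float, time_s: float) -> float:
    return power_w * time_s

def pick_config(deadline_s: float) -> str:
    """Return the configuration with the lowest energy that meets the deadline."""
    feasible = {name: energy_joules(p, t)
                for name, (p, t) in CONFIGS.items() if t <= deadline_s}
    if not feasible:
        raise ValueError("no configuration meets the deadline")
    return min(feasible, key=feasible.get)

print(pick_config(deadline_s=2.0))   # "big-core @ 1.2 GHz": 9.6 J beats 12 J
```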


News Article | September 8, 2016
Site: www.fastcompany.com

iPhone owners will get an upgrade on September 13 that allows them to find a picture of nearly anyone or anything, anywhere and from any time. Neural network artificial intelligence in the new iOS 10 performs 11 billion calculations in a tenth of a second on each photo snapped to figure out who people are and even what mood they're in.

iOS 10's new Photos app is only the latest example of a growing movement toward handheld artificial intelligence. Aipoly, an app released in January, recognizes objects and speaks their names aloud to blind people. Google Translate can replace text in one language with another language as soon as you point your camera at it. All this happens even if you can't get cell reception.

Just as "the cloud" was becoming the answer to every "How does it work?" question, smartphones have started clawing back their independence, performing on their own tasks that used to require a tether to a server farm. The result is a more natural AI experience, without the annoying or creepy lag of an internet connection to a data center. "If I said, 'Hey Siri, what is this?' it would take two seconds to send a picture to the cloud [and get a response back]," says Aipoly's Alberto Rizzoli.

Such instantaneous AI will take augmented reality far beyond Pokémon Go by accurately mapping the surroundings in detail to insert rich 3D objects, characters, and animations into the video feed on a phone or tablet screen. Likewise, virtual reality will start looking more genuine with mobile AI, according to Gary Brotman, director of product management at mobile chip maker Qualcomm, where he heads the machine learning platform. "To do that right, everything has to be totally realtime," he says. "So you have to be able to present the video, present the audio, and have the intelligence that drives the eye-tracking, the head tracking, gesture tracking, and spatial audio tracking so you can map the acoustics of the room to that virtual experience."

AI will also drive convenience features. You might see virtual assistants that use the phone's camera to recognize where you are, such as a specific street or the inside of a restaurant, and bring up relevant apps, says Rizzoli. And for once, such hyper-conveniences may not have the creep factor. If future AI doesn't need the cloud, then the cloud doesn't need your personal data. "For privacy reasons, for latency reasons, for a variety of others, there's no reason why the locus of control for analytics and intelligence shouldn't be on the [phone]," says Brotman.

What's brought the power of AI to handhelds? Video games. "People want better mobile games on their phones and their iPads," says Rizzoli. "So Apple has been particularly good, and so has Qualcomm and the other chip manufacturers, in making better performance." That's pushed the development of more powerful mobile CPUs and graphics processing units (GPUs). While CPUs mostly work sequentially, GPUs work in parallel on the simpler but very numerous tasks required for quickly rendering 3D graphics.

AI also requires performing multiple straightforward duties in tandem. Take what's called a "convolutional neural network," a staple of modern image recognition. Modeled on how the brain's visual cortex works, CNNs divide the visual field into overlapping tiles and then, in tandem, filter out simple details such as edges from all of these tiles. That info goes to another layer of neurons (biological in humans or virtual in software), which might combine edges into lines; another layer might recognize primitive shapes. Each layer (of which there may be dozens) further refines the perception of the image. "You're looking at a photo, and you're identifying various elements of it at the same time," says Rizzoli. "You're identifying edges, and you're identifying what shapes [might exist]. And all of this can be done in parallel."
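The layered filtering described above can be sketched in a few lines of Python with NumPy. This is an illustrative toy example, not Apple's or Aipoly's implementation: a small edge filter slides over overlapping tiles of an image and produces a feature map, and a real CNN stacks many such layers, each refining the previous one's output, with every tile processed independently.

```python
# Minimal sketch of the convolutional idea described above (illustrative only):
# slide a small filter over overlapping tiles of an image to produce a
# "feature map" of edge responses. A CNN stacks many such layers.

import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D convolution (really cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            tile = image[i:i + kh, j:j + kw]      # one overlapping tile
            out[i, j] = np.sum(tile * kernel)     # filter response for that tile
    return out

# A tiny 6x6 "image" with a vertical boundary between dark and bright regions.
image = np.array([[0, 0, 0, 9, 9, 9]] * 6, dtype=float)

# A classic vertical-edge filter (Sobel-like); the first layer of a trained CNN
# often learns filters that look much like this.
vertical_edge = np.array([[1, 0, -1],
                          [2, 0, -2],
                          [1, 0, -1]], dtype=float)

feature_map = conv2d(image, vertical_edge)
print(feature_map)   # large magnitudes mark where the edge sits; each tile is
                     # processed independently, which is why GPUs handle it well
```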
Smartphone chips have been up to the challenge for a few years. Even 2013's iPhone 5s supports the new people, scene, and object recognition upgrade in iOS 10; and Aipoly is working on versions that will run on the iPhone 5 as well as Android phones going back several years. But programmers have just recently been taking advantage of this power.

Photo effects app Prisma, which launched in June, has been one of the early adopters. Twenty-five-year-old Aleksey Moiseenkov created the app, which renders a smartphone photo in a choice of over 30 artistic styles, such as "The Scream," "Mondrian," and many with playful titles like "Illegal Beauty," "Flame Thrower," and "#GetUrban." The effects are rendered nearly instantaneously, which belies their complexity. An Instagram filter simply tweaks basic parameters like color, contrast, brightness, or white balance in pre-set amounts. Prisma has to analyze the image, recognize elements like shapes, lines, colors, and shading, and redraw it from scratch like Edvard Munch or Piet Mondrian might. The results can be gorgeous, making even a banal picture intriguing.

Prisma initially did all this work in the cloud, but Moiseenkov says that could hurt the quality of the app. "We have a lot of users in Asia," he says, "and we need to give the same experience no matter what the connection is, wherever the server or cloud processing is." A new version of the iPhone app that came out in August runs entirely on the phone, and Moiseenkov is working on the same shift for the Android version. An upgrade that applies the same artistic effects to video is coming, too, possibly in September, says Moiseenkov. "Video is much more complicated in terms of overloading the servers and all that stuff," he says, so doing effects right on the phone is crucial.

Moiseenkov and his team had to figure out from scratch how to get their AI software working on smartphones, but future programmers might have an easier time. In May, Qualcomm came out with a software developer's kit called the "Neural Processing Engine" for its Snapdragon 820 chip, which powers 2016's high-end Android phones, like the Samsung Galaxy S7 and Note 7, Moto Z and Z Force, OnePlus 3, HTC 10, and LG G5. The software juggles tasks between the CPU, GPU, and other components of the chip for jobs like scene detection, text recognition, face recognition, and natural language processing (understanding conversational speech rather than just rigid commands).

Specialized AI chips are also arriving. A company called Movidius makes VPUs (vision processing units) optimized for computer vision neural networks. (This week, chip giant Intel agreed to acquire it.) Its latest Myriad 2 chip runs in DJI's Phantom 4 drone, for instance, helping it spot and avoid objects, hover in place, and track moving subjects like cyclists or skiers. Using only about one watt of power, the Myriad 2 would be thrifty enough to run in a phone, too. Movidius has made a few vague statements about future products. In June, it announced "a strategic partnership to provide advanced vision processing technology to a variety of VR-centric Lenovo products." These could be VR headsets or phones, or both.
Back in January, Movidius and Google announced a collaboration "to accelerate the adoption of deep learning within mobile devices." I asked Movidius and Google about the deals, but they wouldn't say anything more.

Apple has been characteristically coy about its AI plans, not saying much before it previewed iOS 10 in June. The AI-powered Photos app is the biggest component. It uses neural networks for the deep learning process that recognizes scenes, objects, and faces in pictures to group them and make them searchable. Its Memories feature puts together collections of pictures and videos based on the people in them, the places, or what it judges to be a significant event, like a trip. Doing all this work on the phone keeps personal information private, says Apple.

Neural networks also power Apple's predictive typing that helps finish sentences, and AI appeared well before iOS 10: Apple switched Siri over to a neural network running on the phone back in July 2014 to improve its speech recognition.

Siri is how most app makers will plug into the iPhone's AI, for now. Apple hasn't released AI-programming tools for its A series chips the way Qualcomm has for Snapdragon, but a feature called SiriKit lets developers connect their apps so people can interact with the apps by chatting with Apple's virtual assistant. And Apple may not be far behind Qualcomm in helping third-party developers leverage AI. It recently spent a reported $200 million on a company called Turi that makes AI tools for programmers.

Developers will have more power to work with, too. The new A10 Fusion chip in the iPhone 7 and 7 Plus has a CPU that runs 40% faster than in the previous generation iPhones, and the graphics run 50% faster.

As artificial intelligence continues expanding across the tech world, it seems destined to grow on phones, too. Expectations are rising that gadgets will simply know what we want and what we mean. "I can say that the large percentage of mobile apps will become AI apps," says Nardo Manaloto, an AI engineer and consultant focused on health care apps like virtual medical assistants. Alberto Rizzoli expects to see a lot of new apps by CES in January. App creators "will follow…when more tools for deep-learning software become available, and the developers themselves become more aware of it," he says. "It's still considered to be a dark magic by a lot of computer science people…And it's not."


News Article | April 14, 2015
Site: www.bloomberg.com

Movidius Ltd., an image-processing startup that was founded in Ireland and now works closely with Google Inc., raised $40 million in funding. The financing round values the company at more than $100 million, according to a person with knowledge of the deal, who asked not to be named because the details aren’t being disclosed. Summit Bridge Capital led the round, along with ARCH Venture Partners and Sunny Optical Technology Group. Movidius has raised a total of more than $85 million from investors such as Atlantic Bridge Capital and DFJ Esprit. The San Mateo, California-based startup, which also has offices in Dublin and Timisoara, Romania, has developed vision-processing chips, software and other technology that can work with drones, augmented-reality headsets and a variety of connected devices. “There’s hardly a major company that I can think of in the industry that’s working on computer vision that’s not working with this company,” said investor Brian Long, a partner at Atlantic Bridge Capital. He declined to name particular partners besides Google. At Google, Movidius is playing a role in the company’s computer-vision project, called Project Tango. Movidius plans to focus on developing new software to make it easy for companies to take advantage of its vision processor. A spokeswoman for Movidius declined to comment on the valuation.
