Fraunhofer Institute for Computer Graphics Research

Darmstadt, Germany

In addition, AWE is announcing AWEx, its strategy to partner with passionate experts and influencers around the world to meet the growing worldwide demand for AR- and VR-centric events. The expansion takes AWE beyond the U.S., Europe and China to explore local candidates including Mexico City, Lisbon, Adelaide, Rome, Dubai and Tel Aviv. "The growth of the industry is accelerating, and we are facing demand to bring AWE to multiple cities around the world," explained AWE founder Ori Inbar. "That's why we are proud to announce AWEx. This program will support independent organizers who are passionate about our mission to address humanity's growing pains, such as overpopulation, globalization challenges and inequality. AR + VR can help drive economic growth, democratize access to knowledge and health, and increase empathy with sustainability."

Exhibitor announcements include:

CieAR is unveiling its new AR film platform, which enables a unique and immersive film experience in real space and time with life-sized 3D virtual characters. www.ciear.ca

Deep Optics is unveiling and showing interactive demos of its tunable lens technology for the AR and VR markets. www.deepoptics.com

Digibit is showcasing the world's first wearable gaming system that transforms your body into a mobile gaming controller. https://mydigibit.com/kickstarter/

Epson is introducing two new Moverio® products: the Epson Moverio BT-350 augmented reality (AR) smart glasses optimized for multi-user "fleet" environments, and the Epson Moverio Pro BT-2200 smart headset for industrial applications. https://epson.com

Livemap is unveiling the third pre-serial prototype of its motorcycle smart helmet with an AR-based HUD. Ride a bike like a fighter pilot! https://livemap.info/

Arloopa is showcasing its "Take a Photo with the Leopard" environmental information campaign to raise public awareness of biodiversity conservation and threatened species. http://arloopa.com/

AnotherWorld VR is previewing KOBOLD, a new kind of creative fiction that blurs the line between cinema and VR gaming. http://anotherworldvr.com

Augmania is announcing the launch of its latest release, an out-of-the-box DIY augmented reality authoring tool and its innovative web AR viewer. www.augmania.com

Arvizio is demonstrating its Mixed Reality Studio Suite, a mixed reality platform that extends HoloLens capabilities into a full-fledged collaboration and conferencing tool. https://www.arvizio.io

Catchoom, a global innovator in AR and image recognition software solutions, is debuting the full suite of its new Visual Shopping solutions. catchoom.com/

EcoCarrier is launching PizzAR, a cost-efficient way to promote wares and services through AR on the top surface of a pizza box. www.ara2z.com

Fraunhofer IGD is presenting VisionLib, a one-size-fits-all AR tracking library designed specifically for enterprise. https://www.igd.fraunhofer.de/en

HDBT is showcasing HDBaseT, a one-cable solution for virtual reality, supporting high-definition audio, video, USB, controls, Ethernet and power over a single cable. www.hdbaset.org/

ICAROS is demonstrating its Active AR product, a fitness device and game controller that allows users to control their flight path with nothing but their movements. http://www.icaros.net

IdentiToy is demonstrating its "machine interface over a display surface" technology, a touch-less, optical triangulation interface completely independent of human touch. http://www.identitoy.com

InsiderNavigation is revealing its indoor augmented reality navigation 1.0 tool to enable recording, localization and navigation within buildings. http://insidernavigation.com

Joinpad is demoing its remote video streaming and collaboration tool that lets on-field technicians share data and receive support from a remote expert. www.joinpad.net

Kodak is showing off its PIXPRO Orbit360 4K VR camera, a minimalist approach to an all-in-one 360° VR camera utilizing two fixed-focus lenses in a futuristic body. www.kodakpixpro.com

Kopin is unveiling a reference design with Goertek for a new lightweight, super high-res, OLED-powered head-mounted display. www.kopin.com

LC-Tec Displays AB and SKUGGA Technology AB are demonstrating prototypes of the next generation of automatically dimmable sunglasses. www.lc-tec.se

ManoMotion is releasing an SDK that provides developers and content creators with the tools and know-how to incorporate hand gestures into VR, AR, MR and IoT products and applications. http://manomotion.com/

Meta is unveiling the Meta Workspace, its spatial AR operating environment designed for creativity and collaboration. https://www.metavision.com

Wikitude, with Lenovo New Vision, is announcing a collaboration to develop an Augmented Human (AH) Cloud. https://www.wikitude.com

Moback is demonstrating Perplexigon, a new multiplayer VR physics building game recently released on Steam's Early Access program and now available for purchase. http://perplexigon.com

Optinvent is debuting its new Ora-X smart headphones, blending audio and display for an unprecedented wearable entertainment experience. http://www.optinvent.com

Quantum Interface is introducing its groundbreaking, truly hands-free user interface (UI) controls for head-worn displays for industrial and enterprise applications. http://quantuminterface.com

Re'Flekt is unveiling the first platform to support simultaneous direct publication and model-based tracking for the Microsoft HoloLens. https://www.re-flekt.com

Scope AR is introducing significant updates to its live support video calling platform, Remote AR, making it the first remote assistance software to enable markerless tracking. http://www.scopear.com

Soap Collective is debuting Atlas: Prologue, an interactive sci-fi adventure short highlighting episodic content and debuting on Oculus Rift + Touch. http://www.thesoapcollective.com

Stereolabs is announcing ZED Mini, a pass-through stereo camera accessory that gives VR developers a head start on making mixed reality content. https://www.stereolabs.com

The Future Group is demonstrating its next-generation platform for creating immersive IMR, AR and VR content for TV production, mobile gaming and commercial solutions. https://www.futureuniverse.com

Tractica has released a new white paper assessing key use cases for augmented reality in smartphones and tablets, as well as new capabilities for smart glasses. https://www.tractica.com

twnkls is showcasing new mobile AR applications and solutions. http://twnkls.com

uSens is showing off its new, industry-first single-camera/dual-lens combined inside-out 6DOF (degrees of freedom) head-tracking and 26DOF hand-tracking technology. https://www.usens.com

VividWorks is demonstrating VividPlatform4, a cloud-based 3D visual sales solution delivering photorealistic and user-friendly visualizations on-device and in-store. http://vividworks.com

Wrnch is demonstrating BodySLAM, a human pose estimation engine enabling anyone with a smartphone to capture human motion. https://wrnch.com

Zappar is showcasing Zapbox, which does for mixed reality what Google Cardboard did for virtual reality. www.zappar.com

Zenko Games is unveiling its first gaming title, Diamonst Augmented Reality RPG, based on geolocalized narrative arcs, strategic gameplay and virtual pet features. http://zenkogames.com

To see AWE 2017's full agenda of more than 300 speakers, visit http://www.augmentedworldexpo.com/agenda.

AWE (Augmented World Expo) is the largest AR+VR conference and expo, showcasing technologies that augment our human capabilities, turn ordinary experiences into the extraordinary and empower people to be better at anything they do in work and life: Superpowers to the People. AWE USA 2017 will feature more than 300 speakers and 250 exhibitors leading the charge in augmented and virtual reality. Join over 5,000 attendees to explore over 100,000 square feet of cutting-edge demonstrations. For more information and to register to attend, please visit www.awe2017.com. Journalists can request a complimentary press pass to Augmented World Expo 2017 by emailing jennifer@lightspeedpr.com. Please follow Augmented World Expo on Twitter at @ARealityEvent and #AWE2017.

AWE is produced by AugmentedReality.ORG, a 501(c)(6) non-profit organization. All profits are reinvested in AR.ORG's services. Its mission is to Advance Augmented Reality to Advance Humanity; its goal is 1 billion active users of AR by 2020. AR.org facilitates and catalyzes the global and regional transformation of the AR industry.

To view the original version on PR Newswire, visit: http://www.prnewswire.com/news-releases/new-smartglasses-lens-technology-input-devices-wearable-gaming-system-and-more-announced-at-awe-usa-2017-300467391.html


Le Moan S., TU Darmstadt | Urban P., Fraunhofer Institute for Computer Graphics Research
IEEE Transactions on Image Processing | Year: 2014

We propose a new strategy to evaluate the quality of multi- and hyperspectral images from the perspective of human perception. We define the spectral image difference as the overall perceived difference between two spectral images under a set of specified viewing conditions (illuminants). First, we analyze the stability of seven image-difference features across illuminants by means of an information-theoretic strategy. We demonstrate, in particular, that in the case of common spectral distortions (spectral gamut mapping, spectral compression, spectral reconstruction), chromatic features vary much more than achromatic ones despite accounting for chromatic adaptation. Then, we propose two computationally efficient spectral image-difference metrics and compare them to the results of a subjective visual experiment. A significant improvement is shown over existing metrics such as the widely used root-mean-square error. © 2014 IEEE.
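For orientation, the sketch below shows in plain numpy the root-mean-square baseline the abstract compares against, together with a simplified per-illuminant comparison in a perceptual color space. The rendering step and the `to_lab` conversion are placeholders of this sketch, not the authors' proposed metrics.

```python
import numpy as np

def spectral_rmse(img_a, img_b):
    """Root-mean-square error between two spectral images of shape
    (H, W, B) with B spectral bands; the baseline metric only."""
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def per_illuminant_difference(img_a, img_b, illuminants, to_lab):
    """Illustrative perceptual comparison: render both spectral images
    under each illuminant, convert to a perceptual space (e.g. CIELAB)
    via a user-supplied `to_lab` callable, and average the pixelwise
    Euclidean differences across illuminants."""
    scores = []
    for illum in illuminants:          # illum: array of shape (B,)
        lab_a = to_lab(img_a * illum)  # rendering + conversion are
        lab_b = to_lab(img_b * illum)  # assumptions of this sketch
        scores.append(np.mean(np.linalg.norm(lab_a - lab_b, axis=-1)))
    return float(np.mean(scores))
```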


Preiss J., TU Darmstadt | Fernandes F., TU Darmstadt | Urban P., Fraunhofer Institute for Computer Graphics Research
IEEE Transactions on Image Processing | Year: 2014

While image-difference metrics show good prediction performance on visual data, they often yield artifact-contaminated results if used as objective functions for optimizing complex image-processing tasks. We investigate in this regard the recently proposed color-image-difference (CID) metric particularly developed for predicting gamut-mapping distortions. We present an algorithm for optimizing gamut mapping employing the CID metric as the objective function. Resulting images contain various visual artifacts, which are addressed by multiple modifications yielding the improved color-image-difference (iCID) metric. The iCID-based optimizations are free from artifacts and retain contrast, structure, and color of the original image to a great extent. Furthermore, the prediction performance on visual data is improved by the modifications. © 2013 IEEE.
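As a rough illustration of using an image-difference metric as an optimization objective, the following sketch runs projected gradient descent with a squared-error placeholder objective and a simple box-gamut projection. The paper instead optimizes the CID/iCID metric against a real device gamut; none of the names below come from the paper.

```python
import numpy as np

def project_to_gamut(img, lo=0.0, hi=1.0):
    """Stand-in gamut projection: clip to a box gamut.
    A real device gamut would require a proper gamut model."""
    return np.clip(img, lo, hi)

def objective(mapped, original):
    """Placeholder image-difference objective (mean squared error);
    the paper uses the CID/iCID metric here instead."""
    return np.mean((mapped - original) ** 2)

def optimize_gamut_mapping(original, steps=200, lr=0.1):
    """Projected gradient descent on the placeholder objective."""
    mapped = project_to_gamut(original.astype(np.float64).copy())
    for _ in range(steps):
        grad = 2.0 * (mapped - original) / mapped.size  # d(objective)/d(mapped)
        mapped = project_to_gamut(mapped - lr * grad)
    return mapped
```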


Steger S., Fraunhofer Institute for Computer Graphics Research
Medical Image Computing and Computer-Assisted Intervention: MICCAI | Year: 2012

This paper presents a novel skeleton-based method for the registration of head and neck datasets. Unlike existing approaches, it is fully automated, the spatial relation of the bones is considered during their registration, and only one of the images must be a CT scan. An articulated atlas is used to jointly obtain a segmentation of the skull, the mandible, and the vertebrae C1-Th2 from the CT image. These bones are then successively rigidly registered with the moving image, beginning at the skull, resulting in a rigid transformation for each bone. Linear combinations of those transformations describe the deformation in the soft tissue; the weights for the transformations are given by the solution of the Laplace equation. Optionally, the skin surface can be incorporated. The approach is evaluated on 20 CT/MRI pairs of head and neck datasets acquired in clinical routine. Visual inspection shows that the segmentation of the bones was successful in all cases and their successive alignment was successful in 19 cases. Based on manual segmentations of lymph nodes in both modalities, the registration accuracy in the soft tissue was assessed. The mean target registration error of the lymph node centroids was 5.33 ± 2.44 mm when the registration was based solely on the deformation of the skeleton and 5.00 ± 2.38 mm when the skin surface was additionally considered. The method's capture range is sufficient to cope with strongly deformed images, and it can be modified to support other parts of the body. The overall registration process typically takes less than 2 minutes.
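The soft-tissue deformation model can be illustrated with a small numpy sketch: each point is moved by a weighted linear combination of the per-bone rigid transforms. The weights are assumed to be given here; in the paper they are obtained by solving the Laplace equation.

```python
import numpy as np

def deform_point(point, transforms, weights):
    """Deform one soft-tissue point as a weighted linear combination
    of per-bone rigid transforms.

    point:      (3,) position in the fixed (CT) image
    transforms: list of (R, t) pairs, one rigid transform per bone,
                with R a (3, 3) rotation and t a (3,) translation
    weights:    (K,) non-negative weights, one per bone; normalized
                here, assumed precomputed (Laplace solution in the paper)
    """
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    out = np.zeros(3)
    for (R, t), w in zip(transforms, weights):
        out += w * (R @ point + t)
    return out
```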


Seibert H., Fraunhofer Institute for Computer Graphics Research
Proceedings of the 2012 8th International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IIH-MSP 2012 | Year: 2012

The segmentation of the face in 3D reconstructions is a crucial processing step within 3D face recognition systems. At this early processing stage, discarding other surface portions such as collars, hats, or hair reduces the amount of data. In contrast to other approaches, the proposed algorithm uses only the face geometry and is therefore robust with respect to lighting conditions and texture quality. Assuming the skin region of a face is locally flat and closed, a binary mask image is created. Morphology and a simple heuristic are applied to connected components to select and join appropriate components. The implementation is straightforward, requires only a few parameters, and copes with the problem without a training procedure. A proof of concept is given, results are shown for several cases, and limitations of the approach are discussed. © 2012 IEEE.
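A minimal 2D sketch of the mask-based selection step, using scipy.ndimage for morphology and connected-component labelling. The flatness test on the actual 3D geometry and the paper's exact component-joining heuristic are not reproduced; keeping the largest component is only a simple stand-in.

```python
import numpy as np
from scipy import ndimage

def select_face_component(flat_mask):
    """Select a face region from a binary mask of locally flat,
    closed surface points (rough analogue of the paper's heuristic)."""
    # Morphological closing to join nearby skin pixels.
    closed = ndimage.binary_closing(flat_mask, structure=np.ones((5, 5)))
    # Connected-component labelling.
    labels, n = ndimage.label(closed)
    if n == 0:
        return np.zeros_like(flat_mask, dtype=bool)
    # Simple heuristic: keep the largest connected component.
    sizes = ndimage.sum(closed, labels, index=range(1, n + 1))
    face_label = 1 + int(np.argmax(sizes))
    return labels == face_label
```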


Tazari M.-R., Fraunhofer Institute for Computer Graphics Research
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

Challenges of handling user interaction in Ambient Intelligence environments are manifold. The systems installed in these environments are highly distributed, with dynamic configurations in terms of integrated devices and installed applications. Context-awareness, personalization, and multimodality are critical for supporting more natural interaction and for optimizing the interaction in an adaptive way. Research activities have dealt with different specific problems in the field, and it is now time to move towards an open framework with a more comprehensive solution. This paper presents the results of such work: applications can be developed and deployed with a high degree of freedom, without needing to care about the available I/O infrastructure; conversely, that infrastructure can be changed without worrying about the application side. This independence of applications from the available I/O infrastructure helps to share mechanisms and to manage such a complex scene more adequately. The key idea behind the framework is the natural distribution of tasks according to the real scene, using a middleware solution that supports seamless connectivity and goal-based interoperability. © 2010 Springer-Verlag Berlin Heidelberg.


Reitz T., Fraunhofer Institute for Computer Graphics Research
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

Geospatial data offered by distributed services are often modeled with different conceptual schemas even though they cover the same thematic area. To ensure interoperability of geospatial data, the existing heterogeneous conceptual schemas can be mapped to a common conceptual schema. However, the underlying formalized schema mappings are difficult to create and to re-use, and often contain mismatches in abstraction level, scope, domain semantics, and value semantics of the mapped entities. We have developed a novel approach to document and communicate such mismatches in the form of a Mismatch Description Language (MDL). The MDL can be transformed into various textual and cartographic representations to support users in communicating and understanding mismatches and in assessing the reusability of a mapping. © 2010 Springer-Verlag.
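Purely as an illustration of what a mismatch record might carry, the hypothetical Python structure below captures the mismatch categories named in the abstract; the MDL itself is a dedicated language whose concrete syntax is not reproduced here, and all field names are assumptions of this sketch.

```python
from dataclasses import dataclass

# Hypothetical in-memory record for one schema-mapping mismatch.
@dataclass
class MismatchRecord:
    source_entity: str        # entity in the source schema
    target_entity: str        # entity in the common schema
    category: str             # e.g. "abstraction level", "scope",
                              # "domain semantics", "value semantics"
    description: str = ""     # human-readable explanation
    affects_reuse: bool = False  # flag used when assessing mapping reuse

mismatch = MismatchRecord(
    source_entity="RoadSegment.width",
    target_entity="Road.laneWidth",
    category="value semantics",
    description="Source stores total carriageway width, target stores per-lane width.",
    affects_reuse=True,
)
```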


Kahn S., Fraunhofer Institute for Computer Graphics Research
Virtual Reality | Year: 2013

Whereas 3D surface models are often used for augmented reality (e.g., for occlusion handling or model-based camera tracking), the creation and the use of such dense 3D models in augmented reality applications are usually two separate processes. The 3D surface models are often created in offline preparation steps, which makes it difficult to detect changes and to adapt the 3D model to them. This work presents a 3D change detection and model adjustment framework that combines AR techniques with real-time depth imaging to close the loop between dense 3D modeling and augmented reality. The proposed method detects the differences between a scene and a 3D model of the scene in real time. The detected geometric differences are then used to update the 3D model, bringing AR and 3D modeling closer together. The accuracy of the geometric difference detection depends on the depth measurement accuracy as well as on the accuracy of the intrinsic and extrinsic camera parameters. To evaluate the influence of these parameters, several experiments were conducted with simulated ground-truth data. Furthermore, the evaluation shows the applicability of AR and depth-image-based 3D modeling for model-based camera tracking. © 2011 Springer-Verlag London Limited.
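The core comparison can be sketched as follows: the measured depth image is compared against the depth of the current 3D model rendered with the tracked camera pose, and pixels whose deviation exceeds a sensor-dependent threshold are flagged as changes. This is a simplified stand-in that assumes both depth maps are already given in the same camera frame, which is where the intrinsic and extrinsic parameters mentioned above come in.

```python
import numpy as np

def detect_geometric_changes(measured_depth, rendered_depth, threshold=0.02):
    """Flag pixels where the observed scene deviates from the 3D model.

    measured_depth: (H, W) depth image from the sensor, in metres
    rendered_depth: (H, W) depth of the current 3D model rendered with
                    the tracked camera pose (intrinsics/extrinsics assumed known)
    threshold:      allowed deviation in metres; should reflect the
                    sensor's depth accuracy in practice
    Returns a boolean change mask and the signed depth differences.
    """
    valid = (measured_depth > 0) & (rendered_depth > 0)  # ignore missing depth
    diff = np.where(valid, measured_depth - rendered_depth, 0.0)
    change_mask = valid & (np.abs(diff) > threshold)
    return change_mask, diff
```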


Olbrich M., Fraunhofer Institute for Computer Graphics Research
Proceedings, Web3D 2012 - 17th International Conference on 3D Web Technology | Year: 2012

X3D supports a variety of media types to be used in 3D scenes, such as images, videos, or other X3D models. A scene can dynamically load and replace this media at runtime, but since there is no way to communicate directly with outside sources such as a server, all data sources need to be known in advance. This problem is usually solved by using interfaces like SAI, which allow external applications to modify the current scene. However, this solution makes it necessary to set up all the communication via SAI and to have the external application communicate with a server. In this paper, we show how XMLHttpRequest, an object common in web browsers, can be used to handle the communication from within the X3D browser. We show how well this approach fits into the X3D environment and how easily it can be implemented in an X3D browser. Afterwards, several examples show the benefits in real applications and how easy this solution is to use. © 2012 ACM.


Aehnelt M., Fraunhofer Institute for Computer Graphics Research | Bader S., University of Rostock
ICAART 2015 - 7th International Conference on Agents and Artificial Intelligence, Proceedings | Year: 2015

Information assistance helps in many application domains to structure, guide, and control human work processes. However, it lacks the formalisation and automated processing of background knowledge that is in turn required to provide ad-hoc assistance. In this paper, we describe our conceptual and technical work on including contextual background knowledge in raising awareness, guiding, and monitoring the assembly worker. We present cognitive architectures as the missing link between highly sophisticated manufacturing data systems and implicitly available contextual knowledge on work procedures and concepts of the work domain. Our work is illustrated with examples in SWI-Prolog and the Soar cognitive architecture.
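As a loose analogue of the kind of contextual rule the paper expresses in SWI-Prolog and Soar, the Python sketch below checks an assembly context against a hypothetical work plan to decide whether to warn or to guide the worker; all fact, field, and function names are illustrative and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class AssemblyContext:
    current_step: int   # step the worker is currently performing
    picked_part: str    # part the worker just picked up

# Hypothetical work-plan knowledge: which part each step expects.
WORK_PLAN = {1: "base_plate", 2: "gearbox", 3: "cover"}

def assistance_action(ctx: AssemblyContext) -> str:
    """Monitoring rule: warn if the picked part does not match the plan,
    otherwise guide the worker to the next step."""
    expected = WORK_PLAN.get(ctx.current_step)
    if expected is None:
        return "No plan entry for this step; ask a supervisor."
    if ctx.picked_part != expected:
        return f"Warning: step {ctx.current_step} expects '{expected}', not '{ctx.picked_part}'."
    nxt = WORK_PLAN.get(ctx.current_step + 1, "nothing")
    return f"Step {ctx.current_step} ok; next, fetch '{nxt}'."

print(assistance_action(AssemblyContext(current_step=2, picked_part="cover")))
```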
