Villingen-Schwenningen, Germany

The signal model of a superresolution optical channel can be an efficient tool for developing components of an associated high-density optical disc system. While the behavior of the laser diode, aperture, lens, and detector is well described, a general mathematical model of the superresolution disc itself was not available until recently. Various approaches have been taken to describe the properties of a mask layer, mainly based on temperature- or power-dependent nonlinear effects. A complete signal-based, or phenomenological, optical channel model, covering the path from non-return-to-zero inverted input to disc readout signal, has recently been developed, including the reflectivity of a superresolution disc with InSb used for the mask layer. In this contribution, the model is extended and applied to a moving disc with a land-and-pit structure, and the results are compared with data read from real superresolution discs. Both the impulse response and the resolution limits are derived and discussed. The model thus provides a bridge from physical properties to readout signal properties, which ultimately matter. The presented approach allows the suitability of a mask layer material for storage density enhancement to be judged from static experiments alone, i.e., even before an associated disc drive is developed. © 2011 Optical Society of America.
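As a rough illustration of the signal-based view described above, the readout channel can be treated as an NRZI-encoded input convolved with a channel impulse response. The sketch below uses an arbitrary low-pass impulse response and plain discrete convolution; it is a toy model of the general idea, not the model developed in the paper.

```python
# Toy sketch of a signal-based optical channel: NRZI-encoded input
# convolved with an (illustrative, not measured) impulse response.

def nrzi_encode(bits):
    """NRZI: a '1' toggles the output level, a '0' keeps it."""
    level, out = 0, []
    for b in bits:
        if b == 1:
            level ^= 1
        out.append(level)
    return out

def convolve(signal, h):
    """Plain full-length discrete convolution."""
    out = [0.0] * (len(signal) + len(h) - 1)
    for i, s in enumerate(signal):
        for j, hj in enumerate(h):
            out[i + j] += s * hj
    return out

# Illustrative low-pass impulse response of the readout path.
h = [0.1, 0.25, 0.3, 0.25, 0.1]

bits = [1, 0, 1, 1, 0, 0, 1]
readout = convolve(nrzi_encode(bits), h)
```

In such a model, the resolution limit shows up as the shortest mark length whose convolved readout still carries a usable modulation depth.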


Hepper D., Deutsche Thomson OHG
2011 14th ITG Conference on Electronic Media Technology, CEMT 2011 - Conference Proceedings | Year: 2011

A complete optical channel model of a super-resolution optical disc system is developed that includes not only the optical pick-up but also the super-resolution disc, using InSb for the mask layer. This is a candidate technology for a next generation of optical discs after the Blu-ray Disc. Model parameters have been derived to match earlier static experiments. The model is extended to a moving disc. The impulse response of the optical channel is determined from the model and compared with real disc readout data. The model provides a link from physics to electronic signals and allows performance analysis of a candidate mask layer material even before optical drive development. © 2011 Informatik Centrum.


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2007.1.5 | Award Amount: 15.21M | Year: 2008

Film or cinema has been the driving force for the entertainment industry, setting the standards of quality, providing the most compelling experience, and feeding the distribution chains of other media (broadcast TV, cable and satellite channels, DVD, video, games, et cetera). The creation of a complete '3-D capable' chain is expected to follow a similar path. The media industry knows that astonishing the public is still a route to large audiences and financial success. 2020 3D Media proposes to research, develop, and demonstrate novel forms of compelling entertainment experience based on technologies for the capture, production, networked distribution and display of sounds and images in three dimensions. 2020 3D Media will add extra dimensions to Digital Cinema and create new forms of stereoscopic and immersive networked media for the home and public spaces. The goal is to research and develop technologies to support the acquisition, coding, editing, networked distribution, and display of stereoscopic and immersive audiovisual content to provide novel forms of compelling entertainment experience in the home or public spaces. The users of the resulting technologies will be media industry professionals across the current film, TV and 'new media' sectors, who will make programme material addressing the general public. The key will be the creation of technologies for creating and presenting surround video as a viable system, based on recognised standards. This will require innovations and new knowledge in:
- Technologies and formats for 3-D sound and image capture and coding, including novel high-resolution cameras
- Technologies and methods for 3-D postproduction of sound and images
- Technologies for the distribution and display of spatial media
- The creative application of spatial media technologies


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2013.1.6 | Award Amount: 5.23M | Year: 2013

ICoSOLE aims at developing a platform that enables users to experience live events which are spatially spread out, such as festivals (e.g. Gentse Feesten in Belgium, Glastonbury in the UK), parades, marathons or bike races, in an immersive way by combining high-quality spatial video and audio with user-generated content. The project will develop a platform for a context-adapted hybrid broadcast-Internet service, providing efficient tools for capture, production and distribution of audiovisual content captured by a heterogeneous set of devices spread over the event site.

The approach uses a variety of sensors, ranging from mobile consumer devices through professional broadcast capture equipment to panoramic and/or free-viewpoint video and spatial audio. Methods for streaming live high-quality audiovisual content from mobile capture devices to content acquisition, processing and editing services will be developed.

In order to combine the heterogeneous capture sources, ICoSOLE will research and develop approaches for integrating content from professional and consumer capture devices, including mobile (and moving) sensors, based on metadata and content analysis. Methods for fusing visual and audio information into a format-agnostic data representation will be developed, enabling video and audio to be rendered for virtual viewer/listener positions.

ICoSOLE will develop efficient tools for media production professionals to select, configure and review the content sources being used. These tools capture, extract and annotate metadata during the production process and integrate this metadata throughout the entire production chain to the end user.

Content will be provided via broadcast, enhanced by additional content transported via broadband and novel interaction possibilities for second-screen and web consumption. The content will also be provided in an adapted form to mobile devices, with specific location-based functionalities for users at or near the place of the event.
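The metadata-driven fusion of heterogeneous sources can be illustrated by a minimal sketch that merges clips from professional and consumer devices into one time-ordered timeline using an assumed timestamp field; all names and values here are hypothetical, not part of the ICoSOLE platform.

```python
# Hypothetical sketch: merge clips from heterogeneous capture devices
# into one timeline by timestamp metadata. Field names are illustrative.

pro_cam = [{"src": "broadcast-cam", "t": 0.0},
           {"src": "broadcast-cam", "t": 8.0}]
phones  = [{"src": "phone-17", "t": 3.5},
           {"src": "phone-4",  "t": 6.2}]

# A real system would additionally align clocks and filter by quality;
# here we simply sort the combined pool by capture time.
timeline = sorted(pro_cam + phones, key=lambda clip: clip["t"])
```

The hard part in practice is not the sort but establishing a common clock and position reference across devices, which is why the project ties fusion to metadata and content analysis.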


Grant
Agency: Cordis | Branch: FP7 | Program: CP | Phase: ICT-2009.1.5 | Award Amount: 14.15M | Year: 2010

FascinatE will create an innovative end-to-end system and associated standards for future immersive and interactive TV services. It will allow users to navigate around an ultra-high-resolution video panorama, showing a live or recorded event, with matching accompanying audio. The output will be adapted to their particular device, covering anything from a mobile handset to an immersive panoramic display with surround sound, delivering a truly personalized multi-screen experience.

At the production side, this requires new scene-capturing systems, using multiple microphones and cameras with different fields of view and frame rates. These various video signals, together with metadata describing their relative alignment, constitute a novel layered scene representation. From this, any particular portion can be rendered at any desired resolution. This represents a paradigm shift in production technology, from today's format-specific view of an area selected by a cameraman to a format-agnostic representation of the whole scene. This approach is considered a more intelligent and future-proof alternative to approaches that just increase the resolution of the pictures (e.g. to 8k).

Script metadata will describe shot framing as suggested by the supervising director. Rule-based systems will frame these regions in a subjectively appealing manner, taking into account knowledge of how to adapt them to different display sizes, as well as the personal preferences and interactions of the user.

Intelligent network components will tailor the transmitted data to suit the screen size and selected view for each terminal. For low-power devices, the component itself will render the desired view, whereas for powerful devices, better performance will be achieved through selectively transmitting portions of the relevant scene layers.

At the user terminal, novel interaction methods will allow viewers to choose either a script-driven view or to freely explore the scene themselves.
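The region-of-interest idea behind the layered scene representation, cropping a user-selected window from the panorama and decimating it to the terminal's resolution, can be sketched as below; the function name and the simple every-nth-sample decimation are illustrative only, not the project's actual rendering pipeline.

```python
# Hypothetical sketch: render a user-selected view from a large panorama
# by cropping a window and keeping every step-th sample. A real system
# would use proper filtered downscaling, not bare decimation.

def render_view(panorama, x, y, w, h, step):
    """Crop a w x h window at (x, y) and decimate by 'step'."""
    return [row[x:x + w:step] for row in panorama[y:y + h:step]]

# 6x8 toy "panorama" of brightness values.
pano = [[r * 10 + c for c in range(8)] for r in range(6)]
view = render_view(pano, x=2, y=1, w=4, h=4, step=2)
```

This is the terminal-side half of the trade-off described above: a powerful device can run such a renderer locally on transmitted scene-layer portions, while for a low-power device the network component would execute the equivalent step and send only the finished view.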
