London, United Kingdom


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: ICT-19-2015 | Award Amount: 4.00M | Year: 2015

Object-based media is a revolutionary approach for creating and deploying interactive, personalised, scalable and immersive content, representing it as a set of individual assets together with metadata describing their relationships and associations. This allows media objects to be assembled in groundbreaking ways to create new user experiences. Based on this paradigm, ORPHEUS identified the often-neglected audio sector as having high innovation potential. ORPHEUS will develop, implement and validate a completely new end-to-end object-based media chain for audio content. The partners will lay the foundation for facilitating infinite combinations of audio objects in ways that are flexible and responsive to user, environmental and platform-specific factors. This includes innovative tools for capturing, mixing, monitoring, storing, archiving, playing out, distributing and rendering object-based audio. Many media companies, including partners of this project, are ready to adopt this next-generation media representation, but the challenges of real-world implementation and integration are yet to be tackled. ORPHEUS will deliver a sustainable solution, ensuring that workflows and components for object-based audio scale up to enable cost-effective commercial production, storage, re-purposing, play-out and distribution. The ultimate aim is to bring the fascinating experience of object-based content to mass audiences at no added cost. ORPHEUS will demonstrate this new user experience through the realisation of close-to-market workflows, proving the economic viability of object-based audio as an emerging media and broadcast technology. Collectively, the project partners encompass all the knowledge and skills necessary to achieve these objectives. To further foster the development of new capabilities in the European content-creation industry, ORPHEUS will publish a reference architecture and guidelines on how to implement object-based audio chains.
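To make the object-based paradigm concrete, here is a minimal sketch in Python, assuming a hypothetical asset-plus-metadata model (the class and field names are illustrative, not the ORPHEUS schema): each audio object carries metadata, and a renderer combines objects according to user, environmental and platform-specific factors.

```python
# A minimal object-based audio sketch: assets plus metadata, rendered per
# user/platform. Hypothetical model, not the ORPHEUS schema.
from dataclasses import dataclass, field


@dataclass
class AudioObject:
    asset_uri: str                                 # reference to the raw audio asset
    role: str                                      # e.g. "dialogue", "ambience", "music"
    gain_db: float = 0.0                           # default level
    metadata: dict = field(default_factory=dict)   # relationships, language, etc.


def render_scene(objects, platform, preferences):
    """Select and weight objects for one user/platform combination."""
    mix = []
    for obj in objects:
        # Platform-specific factor: e.g. drop ambience on a mono phone speaker.
        if platform == "mono_speaker" and obj.role == "ambience":
            continue
        # User-specific factor: e.g. boost dialogue for accessibility.
        gain = obj.gain_db
        if preferences.get("dialogue_boost") and obj.role == "dialogue":
            gain += 6.0
        mix.append((obj.asset_uri, gain))
    return mix


scene = [
    AudioObject("assets/commentary_en.wav", "dialogue"),
    AudioObject("assets/crowd.wav", "ambience", gain_db=-3.0),
]
print(render_scene(scene, "mono_speaker", {"dialogue_boost": True}))
# -> [('assets/commentary_en.wav', 6.0)]
```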


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: ICT-26-2016 | Award Amount: 5.31M | Year: 2017

MULTIDRONE aims to develop an innovative, intelligent, multi-drone platform for media production to cover outdoor events, which are typically held over wide areas (at stadium/city level). The 4-10 drone team, to be managed by the production director and crew, will have: a) increased decisional autonomy, by minimizing production crew load and interventions, and b) improved robustness, security and safety mechanisms (e.g., embedded flight regulation compliance, enhanced crowd avoidance, autonomous emergency landing, communications security), enabling it to carry out its mission even under adverse conditions or crew inaction and to handle emergencies. Such robustness is particularly important, as the drone team has to operate close to crowds and may face an unexpected course of events and/or environmental hazards. Therefore, it must be contextually aware and adaptive, with improved perception of crowds, individual people and other hazards. As this multi-actor system will be heterogeneous, consisting of multiple drones and the production crew, serious human-in-the-loop issues will be addressed to avoid operator overload, with the goal of maximizing shooting creativity and productivity whilst minimizing production costs. Overall, MULTIDRONE will boost research on multiple-actor systems by proposing novel multiple-actor functionalities and performance metrics. Furthermore, the overall multidrone system will be built to serve identified end-user needs. Specifically, innovative, safe and fast multidrone audiovisual shooting will provide a novel multidrone cinematographic shooting genre and new media production techniques that will have a large impact on the financially important EU broadcasting/media industry. It will boost production creativity by allowing the creation of rich/novel media output formats, improving event coverage, adapting to event dynamics and offering rapid reaction speed to unexpected events.
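Purely as an illustration of the robustness behaviour described above (not project code), a drone-side safety supervisor might prioritise fallback actions along these lines; the thresholds, function name and action labels are all hypothetical:

```python
# Illustrative drone-side safety supervisor: ranks critical conditions first,
# falling back to autonomous behaviour on crew inaction. All thresholds and
# labels are hypothetical.
import time

CROWD_MIN_DISTANCE_M = 30.0   # assumed minimum separation from crowds
CREW_TIMEOUT_S = 5.0          # assumed heartbeat timeout for crew commands


def safety_action(crowd_distance_m, comms_secure, last_crew_heartbeat, now=None):
    """Return the action a drone should take given its current safety state."""
    now = time.monotonic() if now is None else now
    if not comms_secure:
        return "emergency_land"                 # treat a compromised link as critical
    if crowd_distance_m < CROWD_MIN_DISTANCE_M:
        return "retreat_from_crowd"             # enhanced crowd avoidance
    if now - last_crew_heartbeat > CREW_TIMEOUT_S:
        return "continue_mission_autonomously"  # decisional autonomy on crew inaction
    return "follow_director_commands"


print(safety_action(20.0, comms_secure=True,
                    last_crew_heartbeat=time.monotonic()))  # -> retreat_from_crowd
```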


Grant
Agency: European Commission | Branch: FP7 | Program: CP | Phase: FI.ICT-2011.1.8 | Award Amount: 20.32M | Year: 2013

The FI-CONTENT 2 project aims to establish the foundation of a European infrastructure for promoting and testing novel uses of audio-visual content on connected devices. The partners will develop and deploy advanced platforms for Social Connected TV, Mobile Smart City services, and Gaming/Virtual Worlds. To assess the approach and improve these platforms, user communities in six European locations will be engaged in living-lab and field trials. The project is strongly supported by local stakeholders (regional authorities, associations, educational organisations, user groups), who will participate in the project via User Advisory Boards. The technical capabilities of the platforms will be validated and improved by integrating new, content-usage-driven partners recruited via the open call planned early in the project.

In FI-CONTENT (FI-PPP Phase 1), we demonstrated that challenging and bold assertions around next-generation Internet content and technology needs are best assessed with radical yet practical demonstrators, use cases, APIs and field research. FI-CONTENT 2 builds on our work in Phase 1, refining the findings where appropriate.

The project has good relationships with the other projects of the FI-PPP programme. Contacts have been established for coordination and potentially for joint experiments with other FI-PPP projects. The proposal shows how to work with FI-WARE and existing EU infrastructure projects where suitable, and demonstrates how best to create and define new domain-specific technologies, mostly cloud-based.

The FI-CONTENT 2 partnership is a balanced group of large industrial content and media companies, technology suppliers, telecommunications/Internet access operators, living labs and academic institutions. FI-CONTENT 2 harnesses the power and excitement of content on the new Internet to drive European innovation, content creation and distribution to enrich the lives of all Europeans.


Grant
Agency: European Commission | Branch: H2020 | Program: RIA | Phase: ICT-16-2015 | Award Amount: 7.96M | Year: 2016

Media monitoring enables the global news media to be viewed in terms of emerging trends, people in the news, and the evolution of story-lines. The massive growth in the number of broadcast and Internet media channels means that current approaches can no longer cope with the scale of the problem. The aim of SUMMA is to significantly improve media monitoring by creating a platform to automate the analysis of media streams across many languages, to aggregate and distil the content, to automatically create rich knowledge bases, and to provide visualisations to cope with this deluge of data. SUMMA has six objectives: (1) Development of a scalable and extensible media monitoring platform; (2) Development of high-quality and richer tools for analysts and journalists; (3) Extensible automated knowledge base construction; (4) Multilingual and cross-lingual capabilities; (5) Sustainable, maintainable platform and services; (6) Dissemination and communication of project results to stakeholders and user groups. Achieving these aims will require advancing the state of the art in a number of technologies: multilingual stream processing, including speech recognition, machine translation, and story identification; entity and relation extraction; natural language understanding, including deep semantic parsing, summarisation, and sentiment detection; and rich visualisations based on multiple views and dealing with many data streams. The project will focus on three use cases: (1) External media monitoring - intelligent tools to address the dramatically increased scale of the global news monitoring problem; (2) Internal media monitoring - managing content creation in several languages efficiently by ensuring content created in one language is reusable by all other languages; (3) Data journalism. The outputs of the project will be field-tested at partners BBC and DW, and the platform will be further validated through innovation intensives such as the BBC NewsHack.
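As a schematic illustration of the analysis chain SUMMA describes, the sketch below wires trivial stand-ins for the real components (speech recognition, machine translation, entity extraction, summarisation) into one pipeline; none of the function names are SUMMA's actual APIs.

```python
# Schematic SUMMA-style pipeline with trivial stand-ins for each stage.
def transcribe(item):            # stand-in for speech recognition
    return item.get("text", "")

def translate(text, src, dst):   # stand-in for machine translation
    return text                  # pretend src has been translated to dst

def extract_entities(text):      # stand-in for entity/relation extraction
    return [w for w in text.split() if w.istitle()]

def summarise(text):             # stand-in for summarisation
    return text[:80]

def monitor_stream(media_items, target_language="en"):
    """Aggregate heterogeneous, multilingual media items into KB records."""
    knowledge_base = []
    for item in media_items:
        text = transcribe(item) if item["kind"] == "audio" else item["text"]
        if item["language"] != target_language:
            text = translate(text, item["language"], target_language)
        knowledge_base.append({
            "source": item["source"],
            "entities": extract_entities(text),
            "summary": summarise(text),
        })
    return knowledge_base

print(monitor_stream([{"kind": "text", "language": "de", "source": "DW",
                       "text": "Angela Merkel besucht Paris."}]))
```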


Grant
Agency: European Commission | Branch: H2020 | Program: IA | Phase: ICT-19-2015 | Award Amount: 4.76M | Year: 2015

Surveys show that, although around 80% of people use a second device (phone, tablet or laptop) when watching TV, only 20% of them engage with companion content. The 2-IMMERSE project will innovate around the delivery of experiences that are created to be multi-screen in production yet delivered flexibly across single and multiple screens, responsive to the preferences of individual audience members. We will build and trial services that deliver multi-screen experiences of high-value content, including theatre, Grand Prix motorcycle racing and live professional football, in the home, in school and in public venues. The resulting delivery platform will be open to extension by third parties, enabling new genres of multi-screen experiences to be created for content beyond sport and drama. New multi-screen services will merge content from broadcast and broadband sources and support new visualisations, viewpoints, data and replay facilities, in addition to social-network functionality such as chatting, commenting and polling. The goal of 2-IMMERSE is to allow TV service providers to break free from the constraints of rendering a broadcast stream onto a single 16:9 frame and to develop compelling experiences that combine synchronised, interactive and customisable content service applications, providing individual and shared content customised to the number and type of screens available and the preferences of the audience. In doing so, our aim is to open up to audiences capabilities for configuration and control currently available only to producers and presenters (e.g. show the replay again here, keep the leaderboard on this tablet and the view of the crowd on that second wall screen).
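As a toy illustration of the layout decision described above, the sketch below assigns experience components to whatever screens are available, honouring audience preferences where possible; the names are illustrative, not 2-IMMERSE APIs.

```python
# Toy multi-screen layout: map experience components to available screens,
# honouring per-component preferences. Illustrative names only.
def lay_out(components, screens, preferences):
    """Assign each component to a screen, preferred placements first."""
    assignment = {}
    free = list(screens)
    for component in components:
        wanted = preferences.get(component)    # e.g. "keep the leaderboard on this tablet"
        target = wanted if wanted in free else (free[0] if free else screens[0])
        assignment[component] = target
        if target in free and len(free) > 1:
            free.remove(target)                # keep at least one screen shareable
    return assignment


print(lay_out(
    ["main_feed", "leaderboard", "crowd_view"],
    ["tv", "tablet", "second_wall_screen"],
    {"leaderboard": "tablet", "crowd_view": "second_wall_screen"},
))
# -> {'main_feed': 'tv', 'leaderboard': 'tablet', 'crowd_view': 'second_wall_screen'}
```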


Grant
Agency: European Commission | Branch: H2020 | Program: IA | Phase: ICT-19-2015 | Award Amount: 4.07M | Year: 2016

COGNITUS will deliver innovative ultra-high definition (UHD) broadcasting technologies that allow the joint creation of UHD media, exploiting the knowledge of professional producers, the ubiquity of user-generated content (UGC), and the power of interactive networked social creativity in a synergistic multimedia production approach. The project will provide a proof of concept to cement the viability of interactive UHD content production and exploitation, through use-case demonstrators at large events converging broadcast and user-generated content for interactive UHD services. The envisaged demonstrators will be based on two different use cases drawn from real-life events. These use cases are in turn single examples of the fascinating and potentially unlimited new services that could be unleashed by the unavoidable confluence of UHD broadcasting technology and smart social mobile UGC brought by COGNITUS. Recent technological advances in UHD broadcasting and mobile social multimedia sharing, coupled with over fifty years of research and development in multimedia systems technology, mean that the time is now ripe for integrating research outputs towards solutions that support high-quality, user-sourced, on-demand media to enrich the conventional broadcasting experience. The COGNITUS vision is to deliver a compelling proof of concept for the validity, effectiveness and innovative power of this integrated approach. As a consequence, over 36 months the project will demonstrate the ability to bring a new range of dedicated media services to the European broadcasting sector, adding critical value to both the media and creativity sectors.


Patent
British Broadcasting Corporation | Date: 2016-08-12

In a method of video coding, in which a difference is formed between input picture values and picture prediction values and that difference is transformed with a DCT, the picture prediction is formed as: P = (1 − c)·P^(C) + c·P^(O), where P^(C) is a closed-loop predictor which is restricted to prediction values capable of exact reconstruction in a downstream decoder, and P^(O) is a spatial predictor which is not restricted to prediction values capable of exact reconstruction. The factor c can vary from zero to unity depending on a variety of parameters.
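A direct numerical reading of the blend, with scalar pixel values (the function name is ours, not the patent's):

```python
# P = (1 - c) * P_C + c * P_O, applied per pixel. P_C is the closed-loop
# (exactly reconstructible) prediction, P_O the open-loop spatial prediction.
def blend_prediction(p_closed, p_open, c):
    assert 0.0 <= c <= 1.0      # c varies from zero (pure closed loop) to unity
    return [(1.0 - c) * pc + c * po for pc, po in zip(p_closed, p_open)]


print(blend_prediction([128, 130], [120, 140], 0.25))  # -> [126.0, 132.5]
```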


Patent
British Broadcasting Corporation | Date: 2016-06-29

Described are concepts, systems and techniques for processing an input video signal intended for a first display to produce an output signal appropriate for a second display. The conversion uses one or more transfer functions arranged to provide relative scene light values and to remove or apply the rendering intent of the input or output video signal, where removing or applying rendering intent alters luminance.
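A simplified numerical sketch of this conversion, modelling the transfer functions and rendering intent as plain power laws (real systems use standard EOTF/OOTF curves such as BT.1886 or HLG; the exponents below are purely illustrative):

```python
# Simplified round trip: undo the input display's transfer function and
# rendering intent to recover relative scene light, then re-apply the output
# display's rendering intent. Power-law exponents are illustrative only.
def to_scene_light(signal, eotf_gamma=2.4, rendering_gamma=1.2):
    display_light = signal ** eotf_gamma              # signal -> relative display light
    return display_light ** (1.0 / rendering_gamma)   # remove rendering intent

def to_display_signal(scene_light, eotf_gamma=2.4, rendering_gamma=1.2):
    display_light = scene_light ** rendering_gamma    # apply output rendering intent
    return display_light ** (1.0 / eotf_gamma)        # display light -> signal

# With identical input and output displays the round trip is the identity:
print(to_display_signal(to_scene_light(0.5)))  # -> ~0.5
```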


Grant
Agency: GTR | Branch: Innovate UK | Program: | Phase: Collaborative Research & Development | Award Amount: 1.22M | Year: 2014

REFRAME is a two-year project to research, develop and demonstrate new methods of real-time media production for film, broadcast and interactive media, taking unconstrained on-set performance capture as a starting point. The technological innovations include real-time performance capture and on-set feedback; real-time scene analysis and planar segmentation; automated metadata extraction for media enrichment; and new methods of displaying enriched media for audience interaction. The project will create new algorithms, software prototypes, interfaces and media viewers, which will subsequently be exploited through direct sales, IP licensing and the provision of new production services. The BBC co-ordinates a partnership with two market-leading SMEs (Imaginarium Studios and Imagineer Systems) and the University of Surrey Visual Media Research Group. The BBC and Imaginarium will trial the technology and evaluate the results with users.


Patent
British Broadcasting Corporation | Date: 2016-06-01

Video encoding or decoding utilising a spatial transform operating on rows and columns of a block, with a set of transform skip modes including: transform on rows and columns; transform on rows only; transform on columns only; no transform. An indication of the selected mode is provided to the decoder. Coefficients are scaled by a factor dependent upon the norm of the transform vector of the skipped transform to bring the untransformed image values to the same level as transformed coefficients.
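To illustrate the scaling, the sketch below applies an orthonormal DCT-II selectively to the rows and columns of an N×N block and multiplies any skipped dimension by √N, so that a flat block yields coefficients at the same level in every mode (an orthonormal N-point DCT maps a constant value v to a DC coefficient of v·√N). This is an interpretation for illustration, not the patent's exact scaling.

```python
# Transform-skip sketch: orthonormal DCT-II on rows and/or columns of an
# N x N block; any skipped dimension is rescaled by sqrt(N) so values sit
# at the same level as transformed coefficients.
import numpy as np
from scipy.fft import dct


def forward_transform(block, mode):
    """mode: one of '2d', 'rows_only', 'cols_only', 'skip'."""
    n = block.shape[0]
    out = block.astype(float)
    if mode in ("2d", "rows_only"):
        out = dct(out, axis=1, norm="ortho")   # transform each row
    else:
        out = out * np.sqrt(n)                 # row transform skipped: rescale
    if mode in ("2d", "cols_only"):
        out = dct(out, axis=0, norm="ortho")   # transform each column
    else:
        out = out * np.sqrt(n)                 # column transform skipped: rescale
    return out


block = np.full((4, 4), 10.0)
for mode in ("2d", "rows_only", "cols_only", "skip"):
    # A flat block peaks at the same coefficient level (40.0) in every mode.
    print(mode, np.abs(forward_transform(block, mode)).max())
```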
