De Sutter R.,Ghent University | Braeckman K.,VRT Medialab
SMPTE Motion Imaging Journal | Year: 2011

Soon, it will no longer be sufficient for only archivists to annotate audiovisual material. Not only is the number of archivists limited, but the time they can spend annotating one item is insufficient to create the time-based, detailed descriptions of the content needed to make fully optimized video search possible. Furthermore, as a result of file-based production methods, we observe an accelerated increase in newly created audiovisual material that must be described. Fortunately, high-quality feature extraction (FE) tools are increasingly being developed by research institutes. These tools examine the audiovisual essence and return particular information about the analyzed video and/or audio streams. For example, the tools can automatically detect shot boundaries, detect and recognize faces and objects, and segment audio streams. As a result, they quickly and cheaply generate metadata that can be used for indexing and searching. In addition, they relieve archivists of tedious, repetitive, but necessary low-value-added tasks, such as identifying the speech and music segments within an audio stream. Although most tools are not yet commercially offered, these solutions are expected to become available soon for broadcasters and media companies alike. This paper describes a solution for integrating such FE tools within the annotation workflow of a media company. This solution, in the form of an architecture and workflow, is scalable, extensible, and loosely coupled, and has clear and easy-to-implement interfaces. As such, our architecture allows additional tools to be plugged in irrespective of the software and hardware used by the media company. By integrating FE tools within the workflow of annotating audiovisual essence, more and better metadata can be created, allowing other tools to improve indexing, search, and retrieval of media material within audiovisual archives. © 2011 by SMPTE.
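The abstract's key architectural idea is a loosely coupled, extensible registry of FE tools behind one common interface. The paper itself does not publish code, so the following is only a minimal illustrative sketch of that pattern; all names (`FeatureExtractor`, `TimedAnnotation`, `AnnotationPipeline`, the dummy segmenter) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class TimedAnnotation:
    """One time-based metadata fragment produced by an FE tool."""
    start_ms: int
    end_ms: int
    label: str          # e.g. "shot_boundary", "speech", "music"
    confidence: float

class FeatureExtractor(Protocol):
    """Common interface every plugged-in FE tool must implement."""
    name: str
    def analyse(self, essence_path: str) -> list[TimedAnnotation]: ...

class AnnotationPipeline:
    """Loosely coupled registry: tools can be added or removed freely,
    irrespective of how each one is implemented internally."""
    def __init__(self) -> None:
        self._tools: list[FeatureExtractor] = []

    def register(self, tool: FeatureExtractor) -> None:
        self._tools.append(tool)

    def run(self, essence_path: str) -> dict[str, list[TimedAnnotation]]:
        # Each tool analyses the same essence independently; results are
        # merged into one metadata record keyed by tool name.
        return {tool.name: tool.analyse(essence_path) for tool in self._tools}

class DummySpeechMusicSegmenter:
    """Stand-in for a real audio segmentation tool."""
    name = "speech_music"
    def analyse(self, essence_path: str) -> list[TimedAnnotation]:
        return [TimedAnnotation(0, 12_000, "speech", 0.94),
                TimedAnnotation(12_000, 45_000, "music", 0.88)]

pipeline = AnnotationPipeline()
pipeline.register(DummySpeechMusicSegmenter())
metadata = pipeline.run("/archive/item-001.mxf")
```

Because the pipeline only depends on the interface, a shot-boundary detector or face recognizer can be registered in exactly the same way.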

Ranaivoson H.,IBBT SMIT VUB | Donders K.,IBBT SMIT and IES VUB | Ballon P.,IBBT SMIT VUB | Sorgeloos H.,VRT Medialab
EuroITV'11 - Proceedings of the 9th European Interactive TV Conference | Year: 2011

We tackle the issue of innovation policy in the specific context of a small media sector, namely the Flemish one. The research question is twofold: first, which types of innovation should be supported by government; second, how should innovation policy be organised in the broadcasting (and, by extension, media) sector. This study is based on a literature review, document analysis, and several stakeholder interviews. © 2011 ACM.

Messina A.,RAI Radiotelevisione Italiana | De Sutter R.,VRT Medialab | Evain J.-P.,European Broadcasting Union | Sano M.,NHK Science and Technology | Friedland G.,International Computer Science Institute
MM'10 - Proceedings of the ACM Multimedia 2010 International Conference | Year: 2010

The third Workshop on Automated Information Extraction in Media Production (AIEMPro10) aims to foster the exchange of ideas and practices between leading experts in research and leading actors in the media community, in order to catalyze the migration towards new ways of producing media content, aided by the large-scale introduction of tools for automated multimedia analysis and understanding. Furthermore, the workshop helps researchers better understand some of the real-life key requirements that would enable their scientific developments to come into wider adoption.

Van Rijsselbergen D.,Ghent University | Poppe C.,Ghent University | Verwaest M.,VRT Medialab | Mannens E.,Ghent University | Van De Walle R.,Ghent University
Multimedia Tools and Applications | Year: 2012

To provide audiences with a proper universal multimedia experience, all classes of media consumption devices, from high definition displays to mobile media players, must receive a product that is not only adapted to their capabilities and usage environments, but also conveys the semantics and cinematography behind the narrative in an optimal way. This paper introduces a semantic video adaptation system that places the media adaptation process at the center of the drama production process. Producers, directors, and other creative staff instruct the semantic adaptation system using common cinematographic terminology and vocabulary, thereby seamlessly extending the drama production process into the realm of content adaptation. The multitude of production metadata obtained from various steps in the production process provides a valuable context of narrative semantics that is exploited by the adaptation process. As such, high definition imagery can be intelligently adapted to smaller resolutions while optimally fulfilling the filmmaker's dramatic intentions with respect to the original narrative and obeying various rules of cinematographic grammar. © 2011 Springer Science+Business Media, LLC.
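One concrete form of the adaptation the abstract describes is cropping high-definition imagery for a smaller screen while keeping the dramatically important region in frame. The sketch below is a hypothetical illustration of that idea, not the paper's actual system: the region of interest is assumed to come from production metadata (e.g. the speaking actor's position), and all field names are invented.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    width: int
    height: int
    # Region of dramatic interest, e.g. the speaking actor's face,
    # assumed to be supplied by production metadata (hypothetical fields).
    roi_x: int
    roi_y: int
    roi_w: int
    roi_h: int

def crop_for_target(shot: Shot, target_w: int, target_h: int) -> tuple[int, int, int, int]:
    """Largest crop window at the target aspect ratio, centred on the
    region of dramatic interest and clamped to the frame; returns (x, y, w, h)."""
    target_ratio = target_w / target_h
    # Largest window with the target aspect ratio that fits inside the frame.
    crop_h = shot.height
    crop_w = int(crop_h * target_ratio)
    if crop_w > shot.width:
        crop_w = shot.width
        crop_h = int(crop_w / target_ratio)
    # Centre on the ROI, then clamp so the window stays inside the frame.
    cx = shot.roi_x + shot.roi_w // 2
    cy = shot.roi_y + shot.roi_h // 2
    x = min(max(cx - crop_w // 2, 0), shot.width - crop_w)
    y = min(max(cy - crop_h // 2, 0), shot.height - crop_h)
    return x, y, crop_w, crop_h

# 1080p frame with the actor on the left third, adapted for a 4:3 mobile screen.
hd = Shot(1920, 1080, roi_x=300, roi_y=200, roi_w=200, roi_h=300)
print(crop_for_target(hd, 640, 480))
```

A full system would additionally apply cinematographic rules (e.g. headroom, look room) rather than simple centring; this sketch only shows where narrative metadata enters the computation.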

De Sutter R.,VRT Medialab | Matton M.,VRT Medialab | Laukens N.,VRT Medialab | Van Rijsselbergen D.,Ghent University | Van De Walle R.,Ghent University
Proceedings - 7th International Conference on Digital Content, Multimedia Technology and Its Applications, IDCTA 2011 | Year: 2011

As the consumer becomes digital - i.e., equipped with personal, mobile, always internet-connected devices that allow a digital footprint to be created anytime, anywhere - new opportunities arise for the "classic" broadcast industry to set up and maintain a direct relationship with its TV viewers and radio listeners. Until recently, a broadcaster had a one-way connection with its customers: from the broadcaster, over the TV and radio distribution channel, to the physical TV screen or radio set. Interaction was only possible after implementing and deploying expensive and hard-to-develop software on the set-top box. By employing web technology intelligently, a broadcaster can now more easily connect to its consumers and build a direct relationship. In this paper, we discuss how to set up such a system and what the particular needs are in a broadcast context. We use the second screen to collect data and enrich it into information that benefits broadcasters and advertisers. © 2011 AICIT.
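The second-screen approach described above amounts to collecting interaction events from companion devices and aggregating them into audience information. As a minimal sketch of that enrichment step (the paper does not specify its data model, so every field and function name here is an assumption):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class SecondScreenEvent:
    """One interaction sent by a companion app (all fields hypothetical)."""
    user_id: str
    programme_id: str
    timestamp_s: int    # position in the broadcast when the event occurred
    action: str         # e.g. "vote_a", "like", "share"

def enrich(events: list[SecondScreenEvent], programme_id: str) -> dict:
    """Turn raw second-screen events into aggregate information useful to
    broadcasters and advertisers: reach and a breakdown of actions."""
    relevant = [e for e in events if e.programme_id == programme_id]
    return {
        "programme_id": programme_id,
        "unique_viewers": len({e.user_id for e in relevant}),
        "actions": dict(Counter(e.action for e in relevant)),
    }

events = [
    SecondScreenEvent("u1", "quiz-show", 120, "vote_a"),
    SecondScreenEvent("u2", "quiz-show", 121, "vote_a"),
    SecondScreenEvent("u1", "quiz-show", 300, "like"),
]
print(enrich(events, "quiz-show"))
```

In a deployed system the events would arrive over a web API rather than an in-memory list, and the broadcast timestamps would let aggregates be aligned with specific programme segments.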
