British Broadcasting Corporation (BBC), London, United Kingdom

Zhang L., University of Science and Technology of China | Zhang L., Queensland University of Technology | Tjondronegoro D., Queensland University of Technology | Chandran V., Queensland University of Technology | Eggink J., British Broadcasting Corporation (BBC)
Multimedia Tools and Applications | Year: 2015

Affect is an important feature of multimedia content and conveys valuable information for multimedia indexing and retrieval. Most existing studies of affective content analysis are limited to low-level features or mid-level representations, and are generally criticized for their inability to bridge the gap between low-level features and high-level human affective perception. The facial expressions of subjects in images carry important semantic information that can substantially influence human affective perception, but they have seldom been investigated for affective classification of facial images in practical applications. This paper presents an automatic image emotion detector (IED) for affective classification of practical (non-laboratory) data using facial expressions, where many “real-world” challenges are present, including pose, illumination, and size variations. The proposed method is novel, with a framework designed specifically to overcome these challenges using multi-view face and fiducial-point detectors and a combination of point-based texture and geometry features. Performance comparisons across several key parameters of the relevant algorithms are conducted to find the optimum settings for high accuracy and fast computation. A comprehensive set of experiments with existing and new datasets shows that the method is robust to pose variations, fast, appropriate for large-scale data, and as accurate on laboratory-based data as the state-of-the-art method. The proposed method was also applied to affective classification of images from the British Broadcasting Corporation (BBC) in a task typical of a practical application, providing some valuable insights. © 2015 Springer Science+Business Media New York
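
As a rough illustration of the pipeline this abstract describes, here is a minimal sketch assuming OpenCV Haar cascades for multi-view face detection, simple patch-based texture plus pairwise-distance geometry features around fiducial points, and an SVM classifier. The detector, feature, and classifier choices are assumptions for illustration, not the authors' exact method.

```python
# Hypothetical image emotion detector (IED) sketch: multi-view face detection,
# point-based texture + geometry features, supervised emotion classification.
import numpy as np
import cv2
from sklearn.svm import SVC

# Multi-view detection: try frontal and profile cascades bundled with OpenCV.
FRONTAL = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
PROFILE = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_profileface.xml")

def detect_face(gray):
    """Return the first face box found across frontal/profile views, or None."""
    for cascade in (FRONTAL, PROFILE):
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(boxes) > 0:
            return boxes[0]  # (x, y, w, h)
    return None

def point_features(gray, points):
    """Concatenate 9x9 texture patches around fiducial points with pairwise
    point distances (a stand-in for point-based texture + geometry features).
    Assumes each point lies at least 4 px inside the image."""
    texture = []
    for (x, y) in points:
        patch = gray[y - 4:y + 5, x - 4:x + 5]
        texture.append(patch.ravel().astype(np.float32))
    pts = np.asarray(points, dtype=np.float32)
    geometry = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1).ravel()
    return np.concatenate(texture + [geometry])

# Training on labelled faces, then classifying a new image:
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# pred = clf.predict(point_features(gray, fiducial_points)[None, :])
```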


Eggink J., British Broadcasting Corporation (BBC) | Raimond Y., British Broadcasting Corporation (BBC)
International Workshop on Image Analysis for Multimedia Interactive Services | Year: 2013

In this paper, we give an overview of recent BBC R&D work on automated affective and semantic annotation of BBC archive content, covering different types of use cases and target audiences. In particular, after giving a brief overview of manual cataloguing practices at the BBC, we focus on mood classification, sound-effect classification, and automated semantic tagging. The resulting data is then used to provide new ways of finding or discovering BBC content. We describe two such interfaces, one driven by mood data and one driven by semantic tags. © 2013 IEEE.
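
To make the mood-classification idea concrete, below is a minimal sketch assuming MFCC summary features and a generic supervised model; the BBC's actual features, labels, and classifier are not specified in the abstract and are assumptions here.

```python
# Hypothetical mood classifier for archive audio clips.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

MOODS = ["happy", "sad", "tense", "calm"]  # illustrative label set

def mood_features(path):
    """Summarise a clip as the mean and std of its MFCCs, a common
    baseline representation for audio mood classification."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Training on labelled archive clips (paths + mood-label indices):
# X = np.stack([mood_features(p) for p in paths])
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
# Tagging a new programme segment:
# print(MOODS[clf.predict(mood_features("segment.wav")[None, :])[0]])
```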


Gabriellini A., British Broadcasting Corporation (BBC) | Flynn D., British Broadcasting Corporation (BBC) | Mrak M., British Broadcasting Corporation (BBC) | Davies T., British Broadcasting Corporation (BBC) | Davies T., Cisco Systems
IEEE Journal on Selected Topics in Signal Processing | Year: 2011

New activities in the video coding community are focused on delivering technologies that will enable economic handling of future visual formats at very high quality. The key characteristic of these new visual systems is highly efficient compression of such content. In that context, this paper presents a novel approach to intra-prediction in video coding based on the combination of spatial closed- and open-loop predictions. This new tool, called Combined Intra-Prediction (CIP), enables better prediction of frame pixels, which is desirable for efficient video compression. The proposed tool addresses both rate-distortion performance enhancement and the low-complexity requirements imposed on codecs for the targeted high-resolution content. The novel perspective CIP offers is that of exploiting redundancy not only between neighboring blocks but also within a coding block. While the proposed tool offers yet another way to exploit spatial redundancy within video frames, its main strength is that it is inexpensive and simple to implement, a crucial requirement for video coding of demanding sources. As shown in this paper, CIP can be flexibly modeled to support various coding settings, providing a gain of up to 4.5% YUV BD-rate for the video sequences in the challenging High Efficiency Video Coding (HEVC) Test Model. © 2011 IEEE.
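
The core idea, blending a closed-loop prediction from reconstructed neighboring blocks with an open-loop prediction propagated from already-predicted samples inside the block, can be sketched as follows. The specific weighting and recursion below are illustrative assumptions, not the tool's actual specification.

```python
# Illustrative sketch of combining closed- and open-loop spatial intra
# prediction within one block, in the spirit of CIP.
import numpy as np

def combined_intra_prediction(top_ref, left_ref, n, alpha=0.5):
    """Predict an n x n block sample by sample.

    Closed-loop part: the reconstructed pixel directly above the block
    (classic vertical intra prediction from a neighboring block).
    Open-loop part: the previously *predicted* pixel to the left inside
    the block (seeded by the reconstructed left reference), exploiting
    redundancy within the coding block itself.
    """
    pred = np.empty((n, n), dtype=np.float64)
    for r in range(n):
        for c in range(n):
            closed = top_ref[c]                               # from outside the block
            open_loop = left_ref[r] if c == 0 else pred[r, c - 1]  # from inside the block
            pred[r, c] = alpha * closed + (1.0 - alpha) * open_loop
    return pred

# Example: predictions transition from the left reference toward the top
# reference as the open-loop recursion propagates across each row.
print(combined_intra_prediction(np.full(4, 128.0), np.full(4, 120.0), n=4))
```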
