London, United Kingdom

Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Research Grant | Award Amount: 505.83K | Year: 2013

The human visual system has been fine-tuned over generations of evolution to operate effectively in our particular environment, allowing us to form rich 3D representations of the objects around us. The scenes we encounter on a daily basis produce 2D retinal images that are complex and ambiguous. From this input, how does the visual system achieve the immensely difficult goal of recovering our surroundings so quickly and so robustly? To achieve this feat, humans must use two types of information about their environment. First, we must learn the probabilistic relationships between 3D natural scene properties and the 2D image cues they produce. Second, we must learn which scene structures (shapes, distances, orientations) are most common, or probable, in our 3D environment. This statistical knowledge about natural 3D scenes and their projected images allows us to maximize our perceptual performance. To better understand 3D perception, therefore, we must study the environment that we have evolved to process.

A key goal of our research is to catalogue and evaluate the statistical structure of the environment that guides human depth perception. We will sample the range of scenes that humans frequently encounter (indoor and outdoor environments over different seasons and lighting conditions). For each scene, state-of-the-art ground-based Light Detection and Ranging (LiDAR) technology will be used to measure the physical distance from a single location to all objects (trees, ground, etc.) - a 3D map of the scene. We will also take High Dynamic Range (HDR) photographs of the same scene from the same vantage point. By collating this paired 3D and 2D data across numerous scenes we will create a comprehensive database of our environment and the 2D images it produces. Making the database publicly available will facilitate not just our own work, but research by human and computer vision scientists around the world who are interested in a range of pure and applied visual processes.

There is great potential for computer vision to learn from the expert processor that is the human visual system: computer vision algorithms are easily outperformed by humans on a range of tasks, particularly when images correspond to more complex, realistic scenes. We are still far from understanding how the human visual system handles the kind of complex natural imagery that defeats computer vision algorithms. However, the robustness of the human visual system appears to hinge on: 1) exploiting the full range of available depth cues, and 2) incorporating statistical priors - information about typical scene configurations. We will employ psychophysical experiments, guided by our analyses of natural scenes and their images, to develop valid and comprehensive computational models of human depth perception. We will concentrate our analysis and experimentation on key tasks in the process of recovering scene structure - estimating the location, orientation and curvature of surface segments across the environment. Our project addresses the need for more complex and ecologically valid models of human perception by studying how the brain implicitly encodes and interprets depth information to guide 3D perception.

Virtual 3D environments are now used in a range of settings, such as flight simulation and training systems, rehabilitation technologies, gaming, 3D movies and special effects. Perceptual biases are particularly influential when visual input is degraded, as it is in some of these simulated environments. To evaluate and improve these technologies we require a better understanding of 3D perception. In addition, the statistical models and inferential algorithms developed in the project will facilitate the development of computer vision algorithms for automatic estimation of depth structure in natural scenes. These algorithms have many applications, such as 2D-to-3D film conversion, visual surveillance and biometrics.
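The pairing of LiDAR range maps with HDR photographs lends itself directly to this kind of statistical cataloguing. As a minimal sketch of what such an analysis could look like - assuming co-registered arrays saved as NumPy files, with the file names and the crude gradient-based slant estimate purely illustrative, not the project's actual pipeline - one might compute priors over distance and surface slant, plus a joint luminance-distance statistic:

```python
import numpy as np

range_map = np.load("scene01_lidar_range.npy")   # metres, H x W (hypothetical file)
hdr_image = np.load("scene01_hdr.npy")           # linear luminance, co-registered H x W

# Prior over absolute distance: a normalised histogram of measured ranges.
valid = np.isfinite(range_map) & (range_map > 0)
dist_prior, dist_edges = np.histogram(range_map[valid], bins=64, density=True)

# Crude local surface-slant statistic: angle of the depth gradient
# (assumes unit pixel spacing; a real analysis would calibrate this).
gy, gx = np.gradient(range_map)
slant = np.degrees(np.arctan(np.hypot(gx, gy)))
ok = valid & np.isfinite(slant)
slant_prior, slant_edges = np.histogram(slant[ok], bins=36, range=(0, 90), density=True)

# Joint statistic linking a 2D image cue (luminance) to 3D structure (distance):
# the kind of cue-to-scene relationship the paired database is meant to expose.
joint, _, _ = np.histogram2d(hdr_image[ok], range_map[ok], bins=32, density=True)
```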


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Training Grant | Award Amount: 4.57M | Year: 2014

EPSRC Centre for Doctoral Training in Digital Entertainment, University of Bath and Bournemouth University. The Centre for Digital Entertainment (CDE) supports innovative research projects in digital media for the games, animation, visual effects, simulation, cultural and healthcare industries. Because the CDE is an Industrial Doctorate Centre, its students spend one year being trained at the university and then complete three years of research embedded in a company. To reflect the practical nature of their research, they submit for an Engineering Doctorate degree. Digital media companies are major contributors to the UK economy. They are highly respected internationally and find their services in great demand. To meet this demand they need to employ people with the highest technical skills and the imagination to use those skills to a practical end. The sector has become so successful that the shortage of such people now constrains these companies from expanding further. Our Doctoral Training Centre is already addressing that shortage and has become the national focus for this kind of training. We do this by combining core taught material with an exciting and unusual range of activities designed to challenge and extend the students' knowledge beyond the usual boundaries. By working closely with companies we can offer practical challenges which really push the limits of what can be done with digital media and devices, and by the people using them. We work with many companies and 40-50 students at any one time. As a result we are able to support the group in ways which would not be possible for individual students: we can place several students in one company, we can send teams to compete in programming competitions, and we can send groups to international training sessions. This proposal is to extend and expand this successful Centre. Major enhancements will include the use of internationally leading industry experts to teach Master Classes, closer cooperation between company and university researchers, business training led by businesses, and options for international placements in an international industry. We will replace the entire first-year teaching with a Digital Media programme specifically aimed at these students as a group. The graduates from this Centre will be the technical leaders of the next-generation revolution in this fast-moving, demanding and exciting industry.


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Research Grant | Award Amount: 4.56M | Year: 2016

Today we use many objects not normally associated with computers or the internet. These include gas meters and lights in our homes, healthcare devices, water distribution systems and cars. Increasingly, such objects are digitally connected, and some are transitioning from cellular (M2M) network connections to using the internet: e.g. smart meters and cars - ultimately, self-driving cars may revolutionise transport. This trend is driven by numerous forces. The connection of objects and use of their data can cut costs (e.g. allowing remote control of processes), create new business opportunities (e.g. tailored consumer offerings), and lead to new services (e.g. keeping older people safe in their homes). This vision of interconnected physical objects is commonly referred to as the Internet of Things. The examples above not only illustrate the vast potential of such technology for economic and societal benefit, they also hint that such a vision comes with serious challenges and threats. For example, information from a smart meter can be used to infer when people are at home, and an autonomous car must make quick decisions with moral dimensions when faced with a child running across a busy road. This means the Internet of Things needs to evolve in a trustworthy manner that individuals can understand and be comfortable with. It also suggests that the Internet of Things needs to be resilient against active attacks from organised crime, terror organisations or state-sponsored aggressors. Therefore, this project creates a Hub for research, development and translation for the Internet of Things, focussing on Privacy, Ethics, Trust, Reliability, Acceptability and Security/safety: PETRAS (a name that also suggests rock-solid foundations). The Hub will be designed and run as a social and technological platform. It will bring together UK academic institutions that are recognised international research leaders in this area with users and partners from various industrial sectors, government agencies, and NGOs such as charities, to get a thorough understanding of these issues in terms of the potentially conflicting interests of private individuals, companies and political institutions, and to become a world-leading centre for research, development and innovation in this problem space. Central to the Hub approach is the flexibility, during the research programme, to create projects that explore issues through impactful co-design with technical and social science experts and stakeholders, and to engage more widely with centres of excellence in the UK and overseas. Research themes will cut across all projects: Privacy and Trust; Safety and Security; Adoption and Acceptability; Standards, Governance, and Policy; and Harnessing Economic Value. Properly understanding the interaction of these themes is vital, and a great social, moral and economic responsibility of the Hub in influencing tomorrow's Internet of Things. For example, a secure system that does not adequately respect privacy, or where there is the mere hint of such inadequacy, is unlikely to prove acceptable. Demonstrators, like wearable sensors in health care, will be used to explore and evaluate these research themes and the tensions between them. New solutions are expected to come out of the majority of projects and demonstrators, many solutions will be generalisable to problems in other sectors, and all projects will produce valuable insights.

A robust governance and management structure will ensure good management of the research portfolio, excellent user engagement and focussed coordination of impact from deliverables. The Hub will further draw on the expertise, networks and on-going projects of its members to create a cross-disciplinary language for sharing problems and solutions across research domains, industrial sectors and government departments. This common language will enhance the outreach, development and training activities of the Hub.
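The smart-meter example above is easy to make concrete. The toy sketch below (our illustration, not a PETRAS output; the baseline figure and the readings are invented) shows how little is needed to turn half-hourly consumption data into an occupancy guess, which is exactly the kind of inference that raises the privacy questions the Hub targets:

```python
# Flag sustained above-baseline consumption as likely occupancy.
baseline_kwh = 0.08          # assumed standby load per half-hour period
readings = [0.05, 0.06, 0.41, 0.38, 0.07, 0.52]   # invented half-hourly readings

occupied = [kwh > 2 * baseline_kwh for kwh in readings]
print(occupied)              # [False, False, True, True, False, True]
```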


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Training Grant | Award Amount: 3.35M | Year: 2014

Our 21st century lives will be increasingly connected to our digital identities: representations of ourselves that are defined from trails of personal data and that connect us to commercial and public services, employers, schools, families and friends. The future health of our Digital Economy rests on training a new generation of leaders who can harness the emerging technologies of digital identity for both economic and societal value, but in a fair and transparent manner that accommodates growing public concern over the use of personal data. We will therefore train a community of 80 PhD students with the interdisciplinary skills needed to address the profound challenges of digital identity in the 21st century. Our training programme will equip students with a unique blend of interdisciplinary skills and knowledge across three thematic aspects of digital identity - enabling technologies, global impacts, and people and society - while also providing them with the wider research and professional skills to deliver a research project at the intersection of at least two of these. Our students will be situated within Horizon, a leading centre for Digital Economy research and a vibrant environment that draws together a national research Hub, the CDT and a network of over 100 industry, academic and international partners. Horizon currently provides access to a large pool of over 75 potential supervisors, ranging from leading professors to talented early career researchers. Each student will work with an industry, public, third sector or international partner to ensure that their research is grounded in real user needs, to maximise its impact, and to enhance their employability. These external partners will be involved in co-sponsorship, supervision, providing resources and hosting internships. External partners have already committed to co-sponsor 30 students, and we expect this number to grow. Our centre also has a strong international perspective, working with international partners to explore the global marketplace for digital identity services as well as the cross-cultural issues that this raises. This will build on our success in exporting the CDT model to China, where we have recently established a £17M International Doctoral Innovation Centre to train 50 international students in digital economy research with funding from Chinese partners. We run an integrated four-year training programme that features a bespoke core covering key topics in digital identity, optional advanced specialist modules, practice-led team and individual projects, training in research methods and professional skills, public and external engagement, and cohort-building activities including an annual writing retreat and summer school. The first year features a nine-month structured process of PhD co-creation in which students, supervisors and external partners iteratively refine an initial PhD topic into a focused research proposal. Building on our experience of running the current Horizon CDT over the past five years, our management structure responds to external, university and student input and manages students through seven key stages of an extended PhD process: recruitment, induction, taught programme, PhD co-creation, PhD research, thesis, and alumni. Students will be recruited onto and managed through three distinct pathways - industry, international and institutional - that reflect the funding, supervision and visiting constraints of working with varied external partners.


Zhang L.,University of Science and Technology of China | Zhang L.,Queensland University of Technology | Tjondronegoro D.,Queensland University of Technology | Chandran V.,Queensland University of Technology | Eggink J.,British Broadcasting Corporation BBC
Multimedia Tools and Applications | Year: 2015

Affect is an important feature of multimedia content and conveys valuable information for multimedia indexing and retrieval. Most existing studies of affective content analysis are limited to low-level features or mid-level representations, and are generally criticized for their incapacity to bridge the gap between low-level features and high-level human affective perception. The facial expressions of subjects in images carry important semantic information that can substantially influence human affective perception, but have seldom been investigated for affective classification of facial images in practical applications. This paper presents an automatic image emotion detector (IED) for affective classification of practical (non-laboratory) data using facial expressions, where many “real-world” challenges are present, including pose, illumination and size variations. The proposed method is novel, with a framework designed specifically to overcome these challenges using multi-view versions of face and fiducial point detectors, and a combination of point-based texture and geometry features. Performance comparisons across several key parameters of the relevant algorithms are conducted to find settings that give high accuracy at fast computation speed. A comprehensive set of experiments with existing and new datasets shows that the method is robust to pose variations, fast enough for large-scale data, and as accurate on laboratory-based data as the method with state-of-the-art performance. The proposed method was also applied to affective classification of images from the British Broadcasting Corporation (BBC) in a task typical of a practical application, providing some valuable insights. © 2015 Springer Science+Business Media New York
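The abstract does not give the implementation, but the pipeline it describes - detect the face, locate fiducial points, then classify combined texture and geometry features - can be sketched as follows. This is a reading of the framework's shape, not the authors' code: the detectors here are random stand-ins, the synthetic "face crops" are invented, and the SVM is an assumed classifier choice.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def locate_fiducials(face):
    """Stand-in for the multi-view fiducial point detector: random points here."""
    return rng.uniform(0.1, 0.9, size=(10, 2))          # normalised (y, x) coords

def texture_features(face, pts):
    """Stand-in for point-based texture descriptors: pixel samples at each point."""
    h, w = face.shape
    idx = (pts * [h - 1, w - 1]).astype(int)
    return face[idx[:, 0], idx[:, 1]]

def geometry_features(pts):
    """Normalised pairwise distances between fiducial points."""
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    return d[np.triu_indices(len(pts), k=1)] / d.max()

def image_features(face):
    pts = locate_fiducials(face)
    return np.concatenate([texture_features(face, pts), geometry_features(pts)])

# Synthetic stand-in data: 20 fake 64x64 "face crops" with binary emotion labels.
faces = rng.uniform(0.0, 1.0, size=(20, 64, 64))
labels = rng.integers(0, 2, size=20)
X = np.vstack([image_features(f) for f in faces])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```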


Gabriellini A.,British Broadcasting Corporation BBC | Flynn D.,British Broadcasting Corporation BBC | Mrak M.,British Broadcasting Corporation BBC | Davies T.,British Broadcasting Corporation BBC | Davies T.,Cisco Systems
IEEE Journal on Selected Topics in Signal Processing | Year: 2011

New activities in the video coding community are focused on delivering technologies that will enable economic handling of future visual formats at very high quality. The key characteristic of these new visual systems is highly efficient compression of such content. In that context, this paper presents a novel approach to intra-prediction in video coding based on the combination of spatial closed-loop and open-loop predictions. This new tool, called Combined Intra-Prediction (CIP), enables better prediction of frame pixels, which is desirable for efficient video compression. The proposed tool addresses both rate-distortion performance enhancement and the low-complexity requirements imposed on codecs for the targeted high-resolution content. The novel perspective CIP offers is that of exploiting redundancy not only between neighboring blocks but also within a coding block. While the proposed tool enables yet another way to exploit spatial redundancy within video frames, its main strength is that it is inexpensive and simple to implement, which is a crucial requirement for video coding of demanding sources. As shown in this paper, CIP can be flexibly modeled to support various coding settings, providing a gain of up to 4.5% YUV BD-rate for the video sequences in the challenging High Efficiency Video Coding (HEVC) Test Model. © 2011 IEEE.
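To make the closed-/open-loop distinction concrete: a closed-loop predictor uses only reconstructed neighbouring pixels (which the decoder will also have), while an open-loop predictor uses original pixels inside the block (available only at the encoder), and a CIP-style prediction blends the two. The toy sketch below is our reading of the abstract rather than the paper's algorithm; the simple horizontal predictor and the blending weight w are illustrative assumptions.

```python
import numpy as np

def combined_intra_rows(block, left_rec, w=0.5):
    """Row-by-row prediction: blend a closed-loop horizontal predictor (the
    reconstructed left-column pixel) with an open-loop predictor (the
    previous original row inside the block)."""
    h, n = block.shape
    pred = np.empty((h, n), dtype=float)
    pred[0, :] = left_rec[0]                  # first row: closed loop only
    for r in range(1, h):
        closed = np.full(n, float(left_rec[r]))
        open_loop = block[r - 1].astype(float)
        pred[r] = w * closed + (1.0 - w) * open_loop
    return pred

block = np.arange(16).reshape(4, 4)           # synthetic 4x4 source block
left_rec = np.array([0, 4, 8, 12])            # reconstructed left-column pixels
residual = block - combined_intra_rows(block, left_rec)
print(np.abs(residual).sum())                 # smaller residual -> fewer bits to code
```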


Eggink J.,British Broadcasting Corporation BBC | Raimond Y.,British Broadcasting Corporation BBC
International Workshop on Image Analysis for Multimedia Interactive Services | Year: 2013

In this paper, we give an overview of recent BBC R&D work on automated affective and semantic annotations of BBC archive content, covering different types of use-cases and target audiences. In particular, after giving a brief overview of manual cataloguing practices at the BBC, we focus on mood classification, sound effect classification and automated semantic tagging. The resulting data is then used to provide new ways of finding or discovering BBC content. We describe two such interfaces, one driven by mood data and one driven by semantic tags. © 2013 IEEE.
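As an illustration of how mood data might drive such a discovery interface (the paper does not specify the implementation; the two mood axes and the catalogue entries below are invented), programmes can be placed in a low-dimensional mood space and retrieved by nearest-neighbour lookup around a user-selected point:

```python
import numpy as np

catalogue = {                         # invented (happy-sad, serious-humorous) scores
    "Nature documentary": (0.6, 0.7),
    "Crime drama":        (-0.5, 0.8),
    "Panel show":         (0.8, -0.6),
    "News bulletin":      (-0.2, 0.9),
}

def by_mood(point, k=2):
    """Return the k programmes closest to the requested mood point."""
    titles = list(catalogue)
    coords = np.array([catalogue[t] for t in titles])
    order = np.argsort(np.linalg.norm(coords - point, axis=1))
    return [titles[i] for i in order[:k]]

print(by_mood(np.array([0.7, -0.5])))   # ['Panel show', 'Nature documentary']
```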


Rossignoli C.M.,Non Governmental Organizations NGOs | Iacovo F.D.,Non Governmental Organizations NGOs | Moruzzo R.,British Broadcasting Corporation BBC | Scarpellini P.,Non Governmental Organizations NGOs
New Medit | Year: 2015

As an effect of a protracted situation of conflict, the economy in the Gaza Strip has largely developed through international humanitarian assistance. Over time, the isolation of markets, widespread unemployment, and the economic crisis have caused a serious decline in the population's living standards, with a high level of food insecurity. Today, the population of the Gaza Strip has dramatically increased, reaching an estimated 1.65 million in an area of only 360 km2 (PCBS). The rapid increase in urban population (3.2% yearly growth rate), land scarcity and the challenge of food security have accelerated the phenomenon of urban agriculture. In the Gaza Strip, despite many constraints, agriculture and related activities are still offering the opportunity of food, income and employment for the local population. By participating in activities related to projects of international cooperation promoting the dairy cattle sector, we have investigated ways of breeding cattle and proposed a reflection on the sector, highlighting the main strengths, weaknesses, opportunities and constraints. We have also explored the livelihoods of dairy cattle keepers and analysed their resilience and sustainability.


Grant
Agency: GTR | Branch: EPSRC | Program: | Phase: Fellowship | Award Amount: 1.14M | Year: 2014

The Internet of Things (IoT) seeks to embed computation in mundane objects - pill bottles, shelves, weighing scales, etc. - and connect these things to the internet to enable a broad range of new and improved services (e.g. improved healthcare services). The IoT will open up a wealth of personal data - biological data, data about our physical movements around the home or our interaction with objects therein, etc. - distributing it seamlessly across the internet. The seamless distribution of personal data presents real challenges to the adoption of the IoT by users, however. Personal data is not shared blindly, and we need to understand how personal data transactions in the IoT can be made observable to users and available to their control if the projected benefits of the IoT are to come about. We need to understand, in other words, what must be done to make the IoT accountable to users, so that they can see what data is being gathered, what is being done with it, and by whom, and so that they can manage their personal data. The need for accountability leads to a concern with articulation work - i.e., making personal data transactions visible and available to user control. This fellowship seeks to engage industry and end-users in the co-design of awareness and control mechanisms that specify requirements for the support of articulation work. It does so in the context of the home - one of the most personal settings in society and a key site for future personal data harvesting. Industry is engaged in the development of use cases specifying future IoT applications that exploit personal data across the different infrastructures penetrating the home. The use cases are grounded in ethnographic studies of current interfaces to infrastructure and the personal data transaction models that accompany them. Current and future understandings are combined in provotypes - provocative mock-ups of new technological arrangements - which are subjected to end-user evaluation to shape and refine articulation mechanisms around user need and to foster user trust in the IoT.
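A minimal sketch of what making personal data transactions observable and controllable could look like in code - an illustration of the idea only, not the fellowship's design; the policy scheme, categories and names are invented - is a ledger that checks every outbound transaction against user-set policy and records it for later inspection:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TransactionLedger:
    policy: dict                         # data category -> recipient allow-list
    log: list = field(default_factory=list)

    def request(self, category, recipient, payload):
        """Check a transaction against policy; log it either way."""
        allowed = recipient in self.policy.get(category, [])
        self.log.append((datetime.now(timezone.utc), category, recipient, allowed))
        return payload if allowed else None   # block disallowed transactions

ledger = TransactionLedger(policy={"weight": ["gp_surgery"]})
ledger.request("weight", "gp_surgery", 72.5)  # permitted by the householder
ledger.request("weight", "ad_broker", 72.5)   # blocked, but still logged
for entry in ledger.log:
    print(entry)                              # the inspectable accountability trail
```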
