Information Technologies Institute ITI

Thérmi, Greece


Vafeiadis T.,Information Technologies Institute ITI | Krinidis S.,Information Technologies Institute ITI | Ziogou C.,Center for Research and Technology Hellas | Ioannidis D.,Information Technologies Institute ITI | And 2 more authors.
IEEE International Conference on Industrial Informatics (INDIN) | Year: 2017

In this work, a modified version of the Slope Statistic Profile (SSP) method is proposed, capable of detecting, in real time, incidents that occur in two interdependent time series. The incident time point is estimated by combining the two series' linear-trend-profile test statistics, computed on consecutive overlapping data windows. Furthermore, the proposed method uses a self-adaptive sliding data window: the window size is adapted through real-time classification of the linear trend profiles at constant, equal time intervals, according to two linear-trend scenarios suitably adjusted to the conditions of the problem at hand. The proposed method is used for the robust identification of a malfunction and is demonstrated on real datasets from a chemical process pilot plant situated at the premises of CERTH/CPERI, collected during the evolution of the experiments performed at the process unit. © 2016 IEEE.
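
A minimal sketch of the windowed linear-trend test described above, using an ordinary least-squares slope t-statistic on a fixed-size sliding window; the window length, step and threshold are illustrative assumptions, not the self-adaptive SSP procedure of the paper:

```python
import numpy as np
from scipy import stats

def slope_t_statistic(window):
    """t-statistic of the least-squares slope over one data window."""
    t = np.arange(len(window))
    res = stats.linregress(t, window)
    return res.slope / res.stderr if res.stderr > 0 else 0.0

def detect_incident(series_a, series_b, window=50, step=5, threshold=4.0):
    """Return the first time index where both series show a significant trend."""
    n = min(len(series_a), len(series_b))
    for start in range(0, n - window + 1, step):
        ta = slope_t_statistic(series_a[start:start + window])
        tb = slope_t_statistic(series_b[start:start + window])
        if abs(ta) > threshold and abs(tb) > threshold:
            return start + window - 1   # incident flagged at the window's end
    return None
```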


Markatopoulou F.,Information Technologies Institute ITI | Markatopoulou F.,Queen Mary, University of London | Galanopoulos D.,Information Technologies Institute ITI | Mezaris V.,Information Technologies Institute ITI | Patras I.,Queen Mary, University of London
ICMR 2017 - Proceedings of the 2017 ACM International Conference on Multimedia Retrieval | Year: 2017

This paper presents a fully-automatic method that combines video concept detection and textual query analysis in order to solve the problem of ad-hoc video search. We present a set of NLP steps that analyse different parts of the query in order to convert it to related semantic concepts; we propose a new method for transforming concept-based keyframe and query representations into a common semantic embedding space; and we show that the proposed combination of concept-based representations with their corresponding semantic embeddings results in improved video search accuracy. Our experiments on the TRECVID AVS 2016 and Video Search 2008 datasets show the effectiveness of the proposed method compared to other similar approaches. © 2017 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery.
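
As a rough illustration of the common semantic embedding space mentioned above (a sketch assuming a hypothetical pre-trained `embeddings` dictionary mapping concept names to word vectors; it is not the transformation learned in the paper), keyframes can be ranked by cosine similarity between query and keyframe concept representations:

```python
import numpy as np

def to_embedding(concept_scores, embeddings):
    """Weighted average of concept word vectors; weights are detection/relevance
    scores. Assumes at least one concept has a known embedding."""
    vecs = [score * embeddings[c] for c, score in concept_scores.items() if c in embeddings]
    v = np.sum(vecs, axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def rank_keyframes(query_concepts, keyframe_concepts, embeddings):
    """Rank keyframes (each a {concept: score} dict) against the query concepts."""
    q = to_embedding(query_concepts, embeddings)
    sims = [float(np.dot(q, to_embedding(kf, embeddings))) for kf in keyframe_concepts]
    return list(np.argsort(sims)[::-1])   # most similar keyframes first
```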


Galanopoulos D.,Information Technologies Institute ITI | Markatopoulou F.,Information Technologies Institute ITI | Mezaris V.,Information Technologies Institute ITI | Patras I.,Queen Mary, University of London
ICMR 2017 - Proceedings of the 2017 ACM International Conference on Multimedia Retrieval | Year: 2017

Zero-example event detection is the problem where, given an event query as input but no example videos for training a detector, the system retrieves the most closely related videos. In this paper we present a fully-automatic zero-example event detection method that is based on translating the event description into a predefined set of concepts for which previously trained visual concept detectors are available. We adopt Concept Language Models (CLMs), a method for augmenting semantic concept definitions, and we propose a new concept-selection method for deciding on the appropriate number of concepts needed to describe an event query. The proposed system achieves state-of-the-art performance in automatic zero-example event detection. © 2017 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery.
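
As an illustration only (the paper's actual selection rule is not reproduced here), one simple way to decide how many concepts to keep is to cut the ranked concept list where the text-similarity score drops below a fraction of the best score:

```python
def select_concepts(concept_scores, keep_ratio=0.6, max_concepts=30):
    """concept_scores: {concept_name: similarity of the concept's definition to
    the event description}. Returns the names of the selected concepts."""
    ranked = sorted(concept_scores.items(), key=lambda kv: kv[1], reverse=True)
    if not ranked:
        return []
    best_score = ranked[0][1]
    selected = [name for name, score in ranked if score >= keep_ratio * best_score]
    return selected[:max_concepts]
```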


Tzelepis C.,Information Technologies Institute ITI | Tzelepis C.,Queen Mary, University of London | Ma Z.,Carnegie Mellon University | Mezaris V.,Information Technologies Institute ITI | And 5 more authors.
Image and Vision Computing | Year: 2016

Research on event-based processing and analysis of media is receiving increasing attention from the scientific community due to its relevance for an abundance of applications, from consumer video management and video surveillance to lifelogging and social media. Events have the ability to semantically encode relationships between different informational modalities, such as visual-audio-text, time, and the involved agents and objects, with the spatio-temporal component of events being a key feature for contextual analysis. This unveils an enormous potential for exploiting new information sources and opening new research directions. In this paper, we survey the existing literature in this field. We extensively review the employed conceptualizations of the notion of an event in multimedia, the techniques for event representation and modeling, and the feature representation and event inference approaches for the problems of event detection in audio, visual, and textual content. Furthermore, we review some key event-based multimedia applications, as well as various benchmarking activities that provide solid frameworks for measuring the performance of different event processing and analysis systems. We provide an in-depth discussion of the insights obtained from reviewing the literature and identify future directions and challenges. © 2016 Elsevier B.V.


Michailidis I.T.,Information Technologies Institute ITI | Baldi S.,Information Technologies Institute ITI | Baldi S.,Technical University of Delft | Pichler M.F.,University of Graz | And 2 more authors.
Applied Energy | Year: 2015

Energy efficient passive designs and constructions have been extensively studied in recent decades as a way to improve the ability of a building to store thermal energy, increase its thermal mass, increase passive insulation and reduce heat losses. However, many studies show that passive thermal designs alone are not enough to fully exploit the potential for energy efficiency in buildings: in fact, harmonizing the active elements for indoor thermal comfort with the passive design of the building can lead to further improvements in both energy efficiency and comfort. These improvements can be achieved via the design of appropriate Building Optimization and Control (BOC) systems, a task which is more complex in high-inertia buildings than in conventional ones. This is because high thermal mass implies a long memory, so that wrong control decisions have negative repercussions over long time horizons. The design of proactive control strategies, capable of acting in advance of a future situation rather than just reacting to current conditions, is of crucial importance for fully exploiting the capabilities of a high-inertia building. This paper applies a simulation-assisted control methodology to a high-inertia building in Kassel, Germany. A simulation model of the building is used to proactively optimize a combined criterion composed of energy consumption and a thermal comfort index, using both current and forecast information about the external weather conditions and the building state. Both extensive simulations and real-life experiments, performed during the unstable German winter, demonstrate that the proposed approach can effectively deal with the complex dynamics arising from the high-inertia structure, providing proactive and intelligent decisions that no currently employed rule-based strategy can replicate. © 2015 Elsevier Ltd.
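
A toy sketch of such a proactive, combined energy-and-comfort criterion evaluated over a weather-forecast horizon; the building model `predict_indoor`, the candidate setpoints and all weights are hypothetical placeholders, not the simulation model or controller used in the paper:

```python
def combined_cost(setpoint, outdoor_forecast, predict_indoor,
                  comfort_target=21.0, w_energy=1.0, w_comfort=5.0):
    """Sum of weighted energy use and squared comfort deviation over the horizon."""
    cost, indoor = 0.0, comfort_target
    for outdoor in outdoor_forecast:                  # e.g. hourly forecast values
        indoor, energy = predict_indoor(indoor, outdoor, setpoint)
        cost += w_energy * energy + w_comfort * (indoor - comfort_target) ** 2
    return cost

def choose_setpoint(outdoor_forecast, predict_indoor,
                    candidates=(19.0, 20.0, 21.0, 22.0)):
    """Pick the candidate heating setpoint with the lowest predicted combined cost."""
    return min(candidates,
               key=lambda sp: combined_cost(sp, outdoor_forecast, predict_indoor))
```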


Tzelepis C.,Information Technologies Institute ITI | Tzelepis C.,Queen Mary, University of London | Galanopoulos D.,Information Technologies Institute ITI | Mezaris V.,Information Technologies Institute ITI | Patras I.,Queen Mary, University of London
Image and Vision Computing | Year: 2015

In this work we deal with the problem of high-level event detection in video. Specifically, we study the challenging problems of i) learning to detect video events solely from a textual description of the event, without using any positive video examples, and ii) additionally exploiting very few positive training samples together with a small number of "related" videos. For learning only from an event's textual description, we first identify a general learning framework and then study the impact of different design choices for various stages of this framework. For additionally learning from example videos, when true positive training samples are scarce, we employ an extension of the Support Vector Machine that allows us to exploit "related" event videos by automatically introducing different weights for subsets of the videos in the overall training set. Experimental evaluations performed on the large-scale TRECVID MED 2014 video dataset provide insight into the effectiveness of the proposed methods. © 2015 Elsevier B.V.
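
A simplified sketch of the idea of exploiting "related" videos through different weights (a plain weighted linear SVM, not the specific SVM extension proposed in the paper; the feature matrices and the weight value are illustrative):

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_event_detector(X_pos, X_rel, X_neg, related_weight=0.3):
    """Treat 'related' videos as weak positives with a reduced sample weight."""
    X = np.vstack([X_pos, X_rel, X_neg])
    y = np.concatenate([np.ones(len(X_pos)), np.ones(len(X_rel)),
                        np.zeros(len(X_neg))])
    w = np.concatenate([np.ones(len(X_pos)),
                        related_weight * np.ones(len(X_rel)),
                        np.ones(len(X_neg))])
    clf = LinearSVC(C=1.0)
    clf.fit(X, y, sample_weight=w)
    return clf
```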


Markatopoulou F.,Information Technologies Institute ITI | Markatopoulou F.,Queen Mary, University of London | Mezaris V.,Information Technologies Institute ITI | Patras I.,Queen Mary, University of London
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2016

Concept detection for semantic annotation of video fragments (e.g. keyframes) is a popular and challenging problem. A variety of visual features is typically extracted and combined in order to learn the relation between feature-based keyframe representations and semantic concepts. In recent years the available pool of features has increased rapidly, and features based on deep convolutional neural networks in combination with other visual descriptors have significantly contributed to improved concept detection accuracy. This work proposes an algorithm that dynamically selects, orders and combines many base classifiers, trained independently with different feature-based keyframe representations, in a cascade architecture for video concept detection. The proposed cascade is more accurate and computationally more efficient, in terms of classifier evaluations, than state-of-the-art classifier combination approaches. © Springer International Publishing Switzerland 2016.
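
A minimal sketch of the cascade principle described above; the stage ordering and thresholds are fixed assumptions here, whereas in the paper they are selected dynamically by the proposed algorithm:

```python
def cascade_detect(keyframe_features, stages):
    """stages: list of (classifier, feature_key, reject_threshold); each classifier
    was trained on a different feature-based keyframe representation."""
    score = 0.0
    for clf, feature_key, threshold in stages:
        score = clf.decision_function([keyframe_features[feature_key]])[0]
        if score < threshold:
            return False, score   # early rejection: later, costlier stages are skipped
    return True, score            # all stages passed: concept considered present
```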


Tzelepis C.,Information Technologies Institute ITI | Tzelepis C.,Queen Mary, University of London | Mezaris V.,Information Technologies Institute ITI | Patras I.,Queen Mary, University of London
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2016

In this paper, we propose an algorithm that learns from uncertain data and exploits related videos for the problem of event detection; related videos are those that are closely associated with, though not fully depicting, the event of interest. In particular, two extensions of the linear SVM with Gaussian Sample Uncertainty are presented, which (a) lead to non-linear decision boundaries and (b) incorporate related class samples in the optimization problem. The resulting learning methods are especially useful in problems where only a limited number of positive and related training observations are provided, e.g., for the 10Ex subtask of TRECVID MED, where only ten positive and five related samples are provided for training a complex event detector. Experimental results on the TRECVID MED 2014 dataset verify the effectiveness of the proposed methods. © Springer International Publishing Switzerland 2016.
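
As a crude stand-in for learning with Gaussian sample uncertainty (the paper handles the uncertainty analytically inside the SVM formulation and also incorporates related-class samples; here each uncertain observation is merely approximated by a few Monte-Carlo draws before fitting an RBF-kernel SVM):

```python
import numpy as np
from sklearn.svm import SVC

def fit_with_sample_uncertainty(X, y, sigmas, n_draws=5, seed=0):
    """X: (n, d) sample means; sigmas: (n, d) per-dimension standard deviations."""
    rng = np.random.default_rng(seed)
    X_aug, y_aug = [X], [y]
    for _ in range(n_draws):
        X_aug.append(X + rng.normal(scale=sigmas))   # perturbed copies of each sample
        y_aug.append(y)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(np.vstack(X_aug), np.concatenate(y_aug))
    return clf
```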


Markatopoulou F.,Information Technologies Institute ITI | Markatopoulou F.,Queen Mary, University of London | Pittaras N.,Information Technologies Institute ITI | Papadopoulou O.,Information Technologies Institute ITI | And 2 more authors.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2015

In this work we deal with the problem of how different local descriptors can be extended, used and combined to improve the effectiveness of video concept detection. The main contributions of this work are: 1) We examine how effectively a binary local descriptor, namely ORB, which was originally proposed for similarity matching between local image patches, can be used in the task of video concept detection. 2) Based on a previously proposed paradigm for introducing color extensions of SIFT, we define in the same way color extensions for two other local descriptors, one non-binary and one binary (SURF and ORB), and we experimentally show that this is a generally applicable paradigm. 3) In order to enable the efficient use and combination of these color extensions within a state-of-the-art concept detection methodology (VLAD), we study and compare two possible approaches for reducing the color descriptors' dimensionality using PCA. We evaluate the proposed techniques on the dataset of the 2013 Semantic Indexing Task of TRECVID. © Springer International Publishing Switzerland 2015.
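
A hedged sketch of the color-extension idea for ORB, assuming shared keypoints across the color channels and an illustrative PCA target dimension (neither matches the exact configurations compared in the paper):

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def color_orb_descriptors(image_bgr, n_features=500):
    """Concatenate per-channel ORB descriptors computed on shared keypoints."""
    orb = cv2.ORB_create(nfeatures=n_features)
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = orb.detect(gray, None)
    channel_descriptors = []
    for channel in range(3):                                  # B, G, R channels
        _, desc = orb.compute(image_bgr[:, :, channel], keypoints)
        channel_descriptors.append(desc)
    return np.hstack(channel_descriptors)                     # (n_keypoints, 3 * 32)

def fit_descriptor_pca(pooled_descriptors, dim=64):
    """PCA fitted on descriptors pooled over many keyframes, applied before VLAD."""
    return PCA(n_components=dim).fit(pooled_descriptors.astype(np.float32))
```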


Markatopoulou F.,Information Technologies Institute ITI | Markatopoulou F.,Queen Mary, University of London | Mezaris V.,Information Technologies Institute ITI | Patras I.,Queen Mary, University of London
Proceedings - International Conference on Image Processing, ICIP | Year: 2015

In this paper we propose a cascade architecture that can be used to train and combine different visual descriptors (local binary, local non-binary and Deep Convolutional Neural Network-based) for video concept detection. The proposed architecture is computationally more efficient than typical state-of-the-art video concept detection systems, without affecting the detection accuracy. In addition, this work presents a detailed study on combining descriptors based on Deep Convolutional Neural Networks with other popular local descriptors, both within a cascade and when using different late-fusion schemes. We evaluate our methods on the extensive video dataset of the 2013 TRECVID Semantic Indexing Task. © 2015 IEEE.
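
A small late-fusion sketch complementing the cascade, averaging per-concept scores produced by classifiers trained on different descriptors; the uniform default weights are an illustrative choice, whereas the paper compares several late-fusion schemes:

```python
import numpy as np

def late_fusion(score_lists, weights=None):
    """score_lists: one array of per-concept scores per descriptor-specific classifier."""
    scores = np.asarray(score_lists, dtype=float)        # shape: (n_classifiers, n_concepts)
    if weights is None:
        weights = np.ones(len(score_lists)) / len(score_lists)
    return np.average(scores, axis=0, weights=weights)   # fused per-concept scores
```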
