Esposito A.,The Second University of Naples | Capuano V.,The Second University of Naples | Mekyska J.,Brno University of Technology | Faundez-Zanuy M.,EUP Mataro
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2012

This work defines a procedure for collecting naturally induced emotional facial expressions through the viewing of movie excerpts with high emotional content and reports experimental data ascertaining the effects of emotions on memory word recognition tasks. The induced emotional states include the four basic emotions of sadness, disgust, happiness, and surprise, as well as the neutral emotional state. The resulting database contains both thermal and visible emotional facial expressions, portrayed by forty Italian subjects and simultaneously acquired by appropriately synchronizing a thermal and a standard visible camera. Each subject's recording session lasted 45 minutes, allowing, for each mode (thermal or visible), the collection of at least 2000 facial expressions, from which at least 400 were selected as highly expressive of each emotion category. The database is available to the scientific community and can be obtained by contacting one of the authors. For this pilot study, it was found that emotions and/or emotion categories do not affect individual performance on memory word recognition tasks, and that temperature changes in the face, or in some regions of it, do not discriminate among emotional states. © 2012 Springer-Verlag.

Espinosa-Duro V.,EUP Mataro | Faundez-Zanuy M.,EUP Mataro | Mekyska J.,Brno University of Technology
Cognitive Computation | Year: 2011

Although human audio-visual systems have several well-known limitations, artificial sensors can measure information beyond our limits. What would happen if we were able to overcome our limitations? Would we be able to obtain better knowledge of our environment? Or is the information beyond our limits redundant? In this paper, we compare infrared, thermal and visible images from an information theory point of view. We have acquired a small database and compared several measurements over these images. While infrasound and ultrasound are not directly applicable to, for instance, speaker recognition, since human beings cannot generate sounds at these frequencies, this is not the case for image signals beyond the visible spectrum in face recognition. We have observed that visible, near-infrared and thermal images contain a small amount of redundancy (less than 1.55 bits). © 2010 Springer Science+Business Media, LLC.
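The abstract does not specify how the entropy and redundancy figures were estimated. A minimal sketch of the standard histogram-based estimators for 8-bit grayscale images, as one common way such measures are computed (the binning and estimator choice here are assumptions, not the authors' method):

```python
import numpy as np

def entropy(img, bins=256):
    # Shannon entropy (bits) of the intensity histogram of an 8-bit image.
    counts, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=256):
    # I(A;B) = H(A) + H(B) - H(A,B), estimated from the joint histogram
    # of co-located pixel intensities in two registered images.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0, 256], [0, 256]])
    pj = joint / joint.sum()
    pj = pj[pj > 0]
    h_joint = -np.sum(pj * np.log2(pj))
    return entropy(a, bins) + entropy(b, bins) - h_joint
```

Under this estimator, the mutual information between, say, a visible and a thermal image of the same scene quantifies their shared (redundant) information in bits.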

Sesa-Nogueras E.,EUP Mataro | Faundez-Zanuy M.,EUP Mataro | Mekyska J.,Brno University of Technology
Cognitive Computation | Year: 2012

This paper is aimed at analysing, from an information theory perspective, the gestures produced by human beings when handwriting a text. Modern capturing devices allow the gathering of data not only from the on-surface movements of the hand, but also from the in-air trajectories performed when the hand moves from one stroke to the next. Our past research with isolated uppercase words clearly suggests that both types of trajectories have biometric potential for writer recognition and that they can be effectively combined to enhance recognition accuracy. With samples from the BiosecurID database, we have analysed the entropy of each kind of trajectory, the amount of information they share, and the difference between intra- and inter-writer measures of the mutual information. The results show that when pressure is not taken into account, the amount of information is similar in both types of trajectories. Furthermore, even if they share some information, in-air and on-surface trajectories appear to be notably non-redundant. © 2011 Springer Science+Business Media, LLC.
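Estimating the entropy of a pen trajectory requires discretising the continuous coordinate stream first. The abstract does not state the discretisation used, so the grid quantisation below is a hypothetical illustration of the general approach:

```python
import numpy as np

def trajectory_entropy(x, y, bins=32):
    # Quantise pen coordinates onto a bins x bins spatial grid and
    # estimate the Shannon entropy (bits) of the resulting position
    # distribution. The grid size is an assumed parameter.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint.ravel() / joint.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

The same quantisation applied to on-surface and in-air samples separately would yield the per-trajectory-type entropies the paper compares; mutual information between the two streams then measures their redundancy.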

Faundez-Zanuy M.,EUP Mataro | Mekyska J.,Brno University of Technology | Espinosa-Duro V.,EUP Mataro
Pattern Recognition Letters | Year: 2011

In this paper we present a new thermographic image database, suitable for the analysis of automatic focusing measures. This database contains the images of 10 scenes, each of which is represented once for each of 96 different focus positions. Using this database, we evaluate the usefulness of five focus measures with the goal of determining the optimal focus position. Experimental results reveal that the accurate automatic detection of optimal focus position can be achieved with a low computational burden. We also present an acquisition tool for obtaining thermal images. To the best of our knowledge, this is the first study on the automatic focusing of thermal images. © 2011 Elsevier B.V. All rights reserved.
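The abstract does not name the five focus measures evaluated. As an illustrative stand-in, a common gradient-energy sharpness measure and the argmax selection over a focus sweep can be sketched as follows (the measure and the selection rule here are assumptions, not necessarily the paper's choices):

```python
import numpy as np

def focus_measure(img):
    # Gradient-energy sharpness: mean squared finite-difference gradient.
    # A sharply focused image has stronger local gradients than a
    # defocused one, so this value peaks near the optimal focus position.
    f = img.astype(float)
    gx = np.diff(f, axis=1)
    gy = np.diff(f, axis=0)
    return (gx ** 2).mean() + (gy ** 2).mean()

def best_focus(stack):
    # Given frames captured at successive focus positions, return the
    # index of the sharpest frame.
    return int(np.argmax([focus_measure(frame) for frame in stack]))
```

Evaluating such a measure over all 96 focus positions of a scene and taking the argmax is one low-cost way to realise the automatic detection of the optimal focus position the paper reports.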

Faundez-Zanuy M.,EUP Mataro | Mekyska J.,Brno University of Technology | Font-Aragones X.,EUP Mataro
Cognitive Computation | Year: 2014

In this paper, we present a new hand database, called the Tecnocampus Hand Image Database, that includes right-hand palm and dorsal images. All the images have been acquired with three different sensors (visible, near-infrared and thermal). The database comprises images of 100 people acquired in five different acquisition sessions, with two images per session for each of the palm and dorsal sides. The total number of pictures is 6,000, and the database is mainly intended for hand-image biometric recognition purposes. In addition, the database has been studied from the information theory point of view, and we found that the highest level of information is achieved in the thermal spectrum. Furthermore, a low level of mutual information between the different spectra is also demonstrated. This opens an interesting research field in multi-sensor data fusion. © 2013 Springer Science+Business Media New York.
