Likforman-Sulem L., Orange S.A. | Esposito A., The Second University of Naples | Esposito A., International Institute for Advanced Scientific Studies IIASS | Faundez-Zanuy M., Polytechnic University of Catalonia | Clémençon S., Orange S.A.
Smart Innovation, Systems and Technologies | Year: 2015

Humans produce handwriting samples in their everyday activities. Handwriting can be collected on-line, by means of digitizing tablets, or off-line, by scanning paper sheets. One question is whether such handwriting data can reveal the individual style, health, or emotional state of the writer. We will try to answer this question by presenting related work conducted on on-line and off-line data. We will show that pieces of off-line handwriting may be useful for recognizing a writer or a handwriting style. We will also show that anxiety can be recognized as an emotion from on-line handwriting and drawings using a non-parametric approach such as random forests. © Springer International Publishing Switzerland 2015.
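
As a hedged illustration of the non-parametric approach this abstract mentions, the sketch below trains a random forest on per-writer handwriting feature vectors. The feature set, labels, and synthetic data are assumptions for illustration only, not the authors' actual pipeline.

```python
# Minimal sketch: anxiety classification from on-line handwriting features
# with a random forest (non-parametric). All data here is synthetic; in the
# paper's setting the features would come from digitizing-tablet recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-writer features (e.g., pen pressure, velocity, in-air
# time statistics); 100 writers, 12 features each, binary anxiety label.
X = rng.normal(size=(100, 12))
y = rng.integers(0, 2, size=100)  # 1 = anxious, 0 = non-anxious

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Random forests suit this setting because they handle small tabular feature sets without distributional assumptions and require little tuning, which is presumably why a non-parametric approach was chosen.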


Troncone A., The Second University of Naples | Palumbo D., The Second University of Naples | Esposito A., The Second University of Naples | Esposito A., International Institute for Advanced Scientific Studies IIASS
Smart Innovation, Systems and Technologies | Year: 2014

This study examines the effect of mood induction on the decoding of emotional vocal expressions. A sample of 145 students (71 females, 74 males; mean age = 23.37 ± 2.05) was recruited at the Second University of Naples (Italy). Subjects were randomly assigned to one of three emotion conditions (sad, fear, or neutral) induced by viewing short movies. The results showed a significant general decrease in decoding accuracy in the mood-induction conditions compared with the accuracy of participants who did not receive such mood induction. Post hoc analyses revealed that recognition of vocal expressions conveying anger was especially impaired by the mood-induction conditions. No findings consistent with mood-congruity theory were observed. This study contributes to emotion-regulation research by showing differences in voice-based emotion-decoding tasks due to mood-induction procedures, as already observed in studies exploiting the decoding of emotional faces. © Springer International Publishing Switzerland 2014.
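
A minimal sketch of the style of analysis the abstract reports (a between-groups comparison of decoding accuracy across the three induction conditions, followed by Bonferroni-corrected pairwise post hocs) is shown below; the accuracy arrays are synthetic stand-ins, not the study's data.

```python
# Hypothetical sketch: one-way ANOVA on per-subject decoding accuracy across
# mood-induction conditions, then pairwise post hoc t-tests with a
# Bonferroni correction over the 3 comparisons.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sad     = rng.normal(0.62, 0.10, 48)   # synthetic per-subject accuracies
fear    = rng.normal(0.60, 0.10, 48)
neutral = rng.normal(0.70, 0.10, 49)

f_stat, p = stats.f_oneway(sad, fear, neutral)
print(f"F = {f_stat:.2f}, p = {p:.4f}")

pairs = {"sad-neutral": (sad, neutral),
         "fear-neutral": (fear, neutral),
         "sad-fear": (sad, fear)}
for name, (a, b) in pairs.items():
    t, p_raw = stats.ttest_ind(a, b)
    print(f"{name}: p = {min(p_raw * 3, 1.0):.4f} (Bonferroni-corrected)")
```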


Esposito A., The Second University of Naples | Esposito A., International Institute for Advanced Scientific Studies IIASS | Esposito A.M., Osservatorio Vesuviano | Vogel C., Trinity College Dublin
Pattern Recognition Letters | Year: 2015

The demand for, and delivery so far of, sophisticated computational instruments able to recognize, process, and store relevant interactional signals, as well as interact with people while displaying autonomous reactions appropriately sensitive to environmental changes, has produced great expectations in Information and Communication Technology (ICT). Knowing what constitutes an appropriate continuation of an interaction depends on detecting the addresser's register, and a machine interface unable to assess such differences will have difficulty managing interactions. Progress toward understanding and modeling these facets is crucial for implementing behaving Human-Computer Interaction (HCI) systems that will simplify user access to future, profitable, remote and nearby social services. This paper raises new research questions via new means for socio-behavioral and emotional investigations, and suggests gathering new experimental data and theories across a spectrum of research concepts, in order to develop the psychological and computational approaches crucial for implementing believable and trustworthy HCI systems that exploit synthetic agents, robots, and sophisticated humanlike interfaces. © 2015 Elsevier B.V.


Fortunati L., University of Udine | Esposito A., The Second University of Naples | Esposito A., International Institute for Advanced Scientific Studies IIASS | Sarrica M., University of Rome La Sapienza | Ferrin G., University of Udine
International Journal of Social Robotics | Year: 2015

The aim of this paper is to investigate children's knowledge and imaginary about robots. To do so, we administered a survey with closed and open questions about their conceptualization of robots to 704 children from 17 classes in 8 elementary and secondary schools. To carry out this study, we took the theoretical framework of social representations as our point of reference. The main results are that children evaluate toys, robots, and human beings as significantly different on all the characteristics considered. More than toys, robots have mechanical movements; they move and are more intelligent than toys, but they do not keep children company. By contrast, human beings are perceived by children starting from their corporeity: they eat and sleep, move by themselves, are intelligent and speak, keep eye contact and company. However, children complain that human beings do not play with them. The imaginary about robots that children receive from the media is characterized by anthropomorphic shapes and bodies and by human-like cognitions, feelings, and behavior. The more examples of visual products featuring robots children are able to evoke, the higher they evaluate robots on all human-like characteristics (e.g., "it looks into my eyes"). Hence, the tension between imaginary and knowledge can be confounding, because the human-like features of fictional robots are more advanced than those attainable by factual ones. © 2015, Springer Science+Business Media Dordrecht.


Esposito A., The Second University of Naples | Esposito A., International Institute for Advanced Scientific Studies IIASS | Esposito A.M., Italian National Institute of Geophysics and Volcanology
Cognitive Processing | Year: 2012

Human beings seem to be able to recognize emotions from speech very well, and information communication technology aims to implement machines and agents that can do the same. However, to automatically recognize affective states from speech signals, it is necessary to solve two main technological problems. The first concerns the identification of effective and efficient processing algorithms capable of capturing emotional acoustic features from speech utterances. The second focuses on finding computational models able to classify a given set of emotional states as accurately as human listeners. This paper surveys these topics and provides some insights toward a holistic approach to the automatic analysis, recognition, and synthesis of affective states. © Marta Olivetti Belardinelli and Springer-Verlag 2012.
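
A minimal sketch of the two-stage pipeline this survey outlines (acoustic feature extraction, then classification) is given below. MFCC pooling and an SVM are common choices assumed here for illustration; they are not the specific algorithms the survey endorses, and the sine wave stands in for a real utterance.

```python
# Hypothetical sketch of a speech-emotion pipeline:
# (1) extract frame-level acoustic features (MFCCs, one common choice),
# (2) pool them into a fixed-length utterance vector for a classifier.
import numpy as np
import librosa
from sklearn.svm import SVC

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 220 * t)  # stand-in for a speech utterance

# Stage 1: 13 MFCCs per frame, pooled to mean + std per coefficient
mfcc = librosa.feature.mfcc(y=signal.astype(np.float32), sr=sr, n_mfcc=13)
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(features.shape)  # (26,) — 13 means + 13 standard deviations

# Stage 2: any classifier over such vectors; an SVM is one common option.
# With a labeled corpus (X_train: pooled vectors, y_train: emotion labels):
# clf = SVC().fit(X_train, y_train)
# prediction = clf.predict(features.reshape(1, -1))
```

Pooling frame-level features into utterance-level statistics is one simple way to bridge the two problems the survey separates: the pooling choice belongs to the feature-extraction problem, the classifier to the modeling problem.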


Esposito A., The Second University of Naples | Esposito A., International Institute for Advanced Scientific Studies IIASS | Palumbo D., The Second University of Naples | Troncone A., The Second University of Naples
Smart Innovation, Systems and Technologies | Year: 2015

This research aimed to assess individuals' abilities in decoding emotional vocal expressions according to their attachment styles and Narrative Identities. To this aim, 30 students (15 females, 15 males; mean age = 21.4 ± 2.47) were recruited at the Second University of Naples (Italy) and underwent an emotional-voice-decoding task after being tested with the "Experience in Close Relationships" (ECR) questionnaire and the Personality Meaning Questionnaire (PMQ) to assess their attachment styles and Narrative Identities. The results showed that Outward subjects were more accurate in decoding joy and surprise, especially within the group of individuals with an Insecure attachment style, suggesting that emotion-regulation dynamics and attachment parameters shape the ways individuals develop their ability to decode others' emotional feelings. © Springer International Publishing Switzerland 2015.


Esposito A., The Second University of Naples | Esposito A., International Institute for Advanced Scientific Studies IIASS | Riviello M.T., The Second University of Naples | Riviello M.T., International Institute for Advanced Scientific Studies IIASS
Frontiers in Artificial Intelligence and Applications | Year: 2011

The present work reports on a set of perceptual psychological experiments aimed at portraying the underlying meta-structure of affective communication, exploiting a cross-modal database of verbal and non-verbal (gaze, facial expressions, and gestures) American English video clips for a comparative analysis of the subjective perception of emotional states dynamically presented through the visual and auditory channels, considered either singly or in combination. The participants in the experiments were 90 Italian and 90 American English subjects. The results reveal that the amount of emotional information conveyed by the audio and visual components separately is much the same as that conveyed through the combined channels, suggesting a nonlinear processing of the emotional information and a cognitive-load effect when it arrives from both channels. In addition, and differently from previous results observed in experiments involving Italian stimuli, both the Italian and the American subjects seem to exploit the visual more than the auditory information, calling into question the previously suggested "language expertise hypothesis" (see [12, 13]), which favours the identification of emotional expressions in the audio when subjects are asked to label emotional stimuli in their own language, because of their expertise in language-specific prosodic and paralinguistic information. It is possible that at the base of the encoding of emotional information there is a more language-specific process, strictly related to the language and the way suprasegmental information is encoded and expressed in it. © 2011 The authors and IOS Press. All rights reserved.


Esposito A., The Second University of Naples | Esposito A., International Institute for Advanced Scientific Studies IIASS | Troncone A., The Second University of Naples
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2011

This paper evaluates developmental stutterers' ability to recognize emotional vocal expressions. To this aim, a group of children diagnosed with developmental stuttering and a fluent control group were tested on the perception of 5 basic vocal emotional states (anger, fear, happiness, surprise, and sadness) extracted from Italian movies, each produced by two actors and two actresses. The results show significant perceptual differences for all the emotional states under examination except anger, and call for further experimental data in assessing the ability of stutterers to recognize emotional voices. The reported data are of interest for the development of improved automatic rehabilitation tools, for which context specificity must be taken into account to achieve a friendlier and more effective human-machine interaction. © 2011 Springer.


Esposito A., The Second University of Naples | Esposito A., International Institute for Advanced Scientific Studies IIASS | Palumbo D., The Second University of Naples | Troncone A., The Second University of Naples
Cognitive Computation | Year: 2014

This study aimed to examine the effects of attachment style on the ability to recognize emotions from acoustic stimuli in subjects with induced emotional states. A sample of 145 university students (71 females and 74 males; mean age = 23.37 ± 2.05) was recruited from the Second University of Naples (Italy). The subjects, classified by attachment style into Secure, Insecure Dismissing, Insecure Preoccupied, and Insecure Fearful, were randomly assigned to one of three elicited emotional conditions (sad, fearful, or neutral) induced by viewing short movies. Immediately after the mood-induction procedure, subjects undertook a vocal emotion-decoding task. Secure subjects were more accurate in recognizing surprise and sadness than the Insecure group; fearfully attached individuals were less accurate in identifying vocal stimuli conveying sadness than the securely and dismissively attached groups. The recognition of vocal stimuli was not affected by induced mood conditions, and no mood-congruity effects were found. The study sheds light on some of the factors underlying the emotion-recognition process, and possible explanations of the contribution of attachment style to the ability to decode vocal stimuli are discussed. © 2014, Springer Science+Business Media New York.


Esposito A., The Second University of Naples | Esposito A., International Institute for Advanced Scientific Studies IIASS | Riviello M.T., The Second University of Naples | Riviello M.T., International Institute for Advanced Scientific Studies IIASS
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | Year: 2010

This paper describes the general specifications and characteristics of the New Italian Audio and Video Emotional Database, collected to improve on the COST 2102 Italian Audio and Video Emotional Database [28] and to support the research effort of COST Action 2102, "Cross Modal Analysis of Verbal and Nonverbal Communication" (http://cost2102.cs.stir.ac.uk/). The database should allow the cross-modal analysis of audio and video recordings for defining distinctive multi-modal emotional features and identifying emotional states from multimodal signals. Emphasis is placed on stimuli-selection procedures, theoretical and practical aspects of stimuli identification, the characteristics of the selected stimuli, and progress in their assessment and validation. © 2010 Springer-Verlag.
