Raptis S.,Institute for Language and Speech Processing Athena Research Center | Karabetsos S.,Institute for Language and Speech Processing Athena Research Center | Karabetsos S.,Technological Educational Institute of Athens | Chalamandaris A.,Institute for Language and Speech Processing Athena Research Center | Tsiakoulis P.,Institute for Language and Speech Processing Athena Research Center
Journal on Multimodal User Interfaces | Year: 2015

Emotion-aware computing presents one of the key challenges in contemporary natural human interaction research, in which emotional speech is an essential modality in multimodal user interfaces. The speech modality relates mainly to speech emotion and affect recognition, as well as to near-natural expressive speech synthesis, the latter considered one of the next significant milestones in speech synthesis technology. A problem common to both recognizing and generating affective and emotional speech content is the methodology adopted for emotion analysis and modeling. This work proposes a generalized framework for annotating, analyzing, and modeling expressive speech in a data-driven machine-learning approach, towards building expressive text-to-speech synthesis systems. To this end, the framework and the data-driven methodology are described, comprising the techniques and approaches for acoustic analysis and expression clustering. In addition, the deployment of online experimental tools for speech perception and annotation, a description of the speech data used, and initial experimental results are presented, demonstrating the potential of the proposed framework and providing encouraging indications for further research. © 2015, OpenInterface Association.
