Title
Arousal and Valence prediction in spontaneous emotional speech: Felt versus perceived emotion
Author
Truong, K.P.
van Leeuwen, D.A.
Neerincx, M.A.
de Jong, F.M.G.
TNO Defensie en Veiligheid
Publication year
2009
Abstract
In this paper, we describe emotion recognition experiments carried out on spontaneous affective speech with the aim of comparing the added value of annotating felt emotion versus annotating perceived emotion. Using speech material available in the TNO-GAMING corpus (a corpus containing audiovisual recordings of people playing videogames), speech-based affect recognizers were developed that can predict Arousal and Valence scalar values. Two types of recognizers were developed in parallel: one trained with felt-emotion annotations (generated by the gamers themselves) and one trained with perceived/observed-emotion annotations (generated by a group of observers). The experiments showed that, in speech, with the methods and features currently used, observed emotions are easier to predict than felt emotions. The results suggest that recognition performance strongly depends on how and by whom the emotion annotations are carried out.
Index Terms: emotion, emotional speech database, emotion recognition
Subject
Acoustics and Audiology
automatic speech recognition
speech
emotion
To reference this document use:
http://resolver.tudelft.nl/uuid:fde14973-9847-4f67-8750-0bdd16265232
TNO identifier
93998
Source
Interspeech 2009, September, Brighton, UK.
Document type
conference paper