Unobtrusive multimodal emotion detection in adaptive interfaces: speech and facial expressions
other
Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, an overview is given of emotion recognition studies that combine speech and facial expressions. We then identify difficulties concerning data collection, data fusion, system evaluation, and emotion annotation that are most likely to be encountered in emotion recognition research. Further, we identify possible applications of emotion recognition, such as health monitoring and e-learning systems. Finally, we discuss the growing need for agreed standards in automatic emotion recognition research.
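The abstract names data fusion of the two modalities as one of the recurring difficulties. As a minimal hypothetical sketch (not the method described in the paper), the example below shows decision-level (late) fusion, where per-modality classifiers each output a probability distribution over the same emotion labels and the final decision is taken from a weighted average; the labels, weights, and probabilities are assumptions chosen purely for illustration.

```python
import numpy as np

# Hypothetical decision-level (late) fusion of speech- and face-based
# emotion classifiers. Labels, weights, and probabilities are illustrative
# assumptions, not values from the paper.
EMOTIONS = ["anger", "happiness", "neutral", "sadness"]

def fuse_late(p_speech: np.ndarray, p_face: np.ndarray, w_speech: float = 0.5) -> str:
    """Average the per-modality probability vectors and return the fused label."""
    fused = w_speech * p_speech + (1.0 - w_speech) * p_face
    return EMOTIONS[int(np.argmax(fused))]

# Example: the speech classifier favours "neutral", the facial one "happiness".
p_speech = np.array([0.10, 0.25, 0.50, 0.15])
p_face = np.array([0.05, 0.60, 0.25, 0.10])
print(fuse_late(p_speech, p_face))  # -> "happiness"
```

Feature-level (early) fusion, where speech and facial features are concatenated before a single classifier is trained, is the main alternative; the choice between the two is one of the fusion issues surveyed in the paper.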
TNO Identifier
19216
Source title
12th International Conference, HCI International (HCII) 2007, Beijing, China, July 22-27