Assessing agreement of observer- and self-annotations in spontaneous multimodal emotion data

We investigated inter-observer agreement and the reliability of self-reported emotion ratings (i.e., self-raters judging their own emotions) in spontaneous multimodal emotion data. Vocal and facial expressions were recorded during a multiplayer video game (together with the game content itself) and were annotated by the players themselves on arousal and valence scales. In a perception experiment, observers rated a small part of the data, presented in four conditions: audio only, visual only, audiovisual, and audiovisual plus context. Inter-observer agreement varied between 0.32 and 0.52 when the ratings were scaled, and providing multimodal information usually increased agreement. Finally, we found that the average agreement between the self-rater and the observers was somewhat lower than the inter-observer agreement.
Index Terms: emotion, multimodal database, inter-rater agreement, self-reported emotion
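
The abstract does not state which agreement statistic produced the 0.32-0.52 figures; Krippendorff's alpha with an interval distance metric is one common choice for scaled arousal/valence ratings, so the following Python sketch shows that computation under this assumption. The function name and the toy data are illustrative only and do not come from the paper.

import numpy as np

def krippendorff_alpha_interval(ratings):
    """Krippendorff's alpha with the interval (squared-difference)
    distance metric.

    ratings: 2-D array of shape (raters, units); np.nan marks a
    missing rating. Returns a float where 1 means perfect agreement,
    0 means chance-level agreement, and negative values indicate
    systematic disagreement.
    """
    ratings = np.asarray(ratings, dtype=float)
    d_o_sum = 0.0      # summed per-unit observed disagreement
    pooled = []        # all pairable values, for the expected disagreement
    n = 0              # number of pairable values
    for unit in ratings.T:                 # one column per rated unit
        vals = unit[~np.isnan(unit)]       # ratings actually present
        m = len(vals)
        if m < 2:                          # a unit rated once is unpairable
            continue
        sq = (vals[:, None] - vals[None, :]) ** 2
        d_o_sum += sq.sum() / (m - 1)      # ordered pairs, normalised
        pooled.append(vals)
        n += m
    pooled = np.concatenate(pooled)
    d_o = d_o_sum / n
    # Expected disagreement over all ordered pairs of pooled values
    # (the zero diagonal contributes nothing, so a full sum is safe).
    d_e = ((pooled[:, None] - pooled[None, :]) ** 2).sum() / (n * (n - 1))
    return 1.0 - d_o / d_e

# Toy example: three hypothetical observers rating five clips on a
# 5-point valence scale, with one missing rating.
obs = np.array([
    [1.0, 2.0, 3.0, 3.0, 2.0],
    [1.0, 2.0, 3.0, 3.0, 1.0],
    [np.nan, 3.0, 3.0, 3.0, 2.0],
])
print(krippendorff_alpha_interval(obs))

The same function could be applied per rating scale (arousal or valence) and per presentation condition to compare agreement across conditions, which is the kind of comparison the abstract reports.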
TNO Identifier
23102
Source title
INTERSPEECH 2008 - 9th Annual Conference of the International Speech Communication Association, 22-26 September 2008, Brisbane, QLD
Pages
318-321
Files
To receive the publication files, please send an e-mail request to TNO Repository.