Addressing multimodality in overt aggression detection
Conference paper
Automatic detection of aggressive situations has high societal and scientific relevance. It has been argued that using data from multimodal sensors, for example video and sound, as opposed to unimodal data, is bound to increase detection accuracy. We approach the problem of multimodal aggression detection from the viewpoint of a human observer and try to reproduce his predictions automatically. Typically, a single ground truth covering all available modalities is used when training recognizers. We explore the benefits of adding an extra level of annotations, namely audio-only and video-only. We analyze these annotations and compare them to the multimodal case in order to gain more insight into how humans reason using multimodal data. We train classifiers and compare the results obtained when using unimodal and multimodal labels as ground truth. For both the audio and the video recognizer, performance increases when the unimodal labels are used. © 2011 Springer-Verlag.
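As a rough illustration of the label-comparison experiment the abstract describes, the sketch below trains an audio recognizer twice, once with multimodal ground-truth labels and once with audio-only labels, and compares cross-validated accuracy. This is a minimal sketch under assumed details: the SVM classifier, the 13-dimensional features, and the random placeholder data are illustrative stand-ins, not the paper's actual features, labels, or models.

# Hypothetical sketch of the unimodal-vs-multimodal label comparison.
# Features, labels, and the SVM are illustrative assumptions, not the
# paper's actual setup.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_audio = rng.normal(size=(200, 13))          # stand-in audio features (e.g. MFCC-like)
y_multimodal = rng.integers(0, 2, size=200)   # labels from joint audio+video annotation
y_audio_only = rng.integers(0, 2, size=200)   # labels from audio-only annotation

for name, y in [("multimodal labels", y_multimodal),
                ("audio-only labels", y_audio_only)]:
    # Evaluate the same audio recognizer against each ground truth.
    acc = cross_val_score(SVC(), X_audio, y, cv=5).mean()
    print(f"audio recognizer with {name}: accuracy {acc:.3f}")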
TNO Identifier
435968
ISSN
0302-9743
ISBN
978-3-642-23537-5
Publisher
Springer
Source title
14th International Conference on Text, Speech and Dialogue, TSD 2011, 1-5 September 2011, Pilsen
Place of publication
Berlin [etc.]
Pages
25-32