A Comparison of Human and Machine Learning-based Accuracy for Valence Classification of Subjects in Video Fragments
Conference paper
Facial expressions are the primary means by which people convey their emotional state. Automatic recognition of these cues from video enables various improvements in human-computer interaction, ranging from better feedback for recommender systems to automatic labeling of movies according to the emotions they induce. A number of affective display databases have been created to aid development in this field. These datasets are often available for academic use [1, 2, 3], use picture or video stimuli, and range from highly controlled [1, 2] to more natural settings [3]. We observe that methods using these datasets report accuracy figures that leave room for improvement [5].
TNO Identifier
513946
Source title
Measuring Behavior 2014 - 9th International Conference on Methods and Techniques in Behavioral Research, 27-29 August 2014, Wageningen, The Netherlands
Editor(s)
Spink, A.J.
Loijens, L.W.S.
Woloszynowska-Fraser, M.
Noldus, L.P.J.J.