Affective rating of audio and video clips using the EmojiGrid [version 2; peer review: 1 approved, 1 approved with reservations]

article
Background: In this study we measured the affective appraisal of sounds and video clips with a newly developed graphical self-report tool: the EmojiGrid. The EmojiGrid is a square grid labeled with emoji that express different degrees of valence and arousal. Users rate the valence and arousal of a given stimulus by simply clicking on the grid.
Methods: In Experiment I, observers (N=150, 74 males, mean age=25.2±3.5) used the EmojiGrid to rate their affective appraisal of 77 validated sound clips from nine different semantic categories, covering a large area of the affective space. In Experiment II, observers (N=60, 32 males, mean age=24.5±3.3) used the EmojiGrid to rate their affective appraisal of 50 validated film fragments varying in positive and negative affect (20 positive, 20 negative, 10 neutral).
Results: For both sound and video, the agreement between the mean ratings obtained with the EmojiGrid and those obtained in previous studies with an alternative, validated affective rating tool is excellent for valence and good for arousal. For both sound and video, our results also show the typical U-shaped relation between mean valence and arousal that is commonly observed for affective sensory stimuli.
Conclusions: We conclude that the EmojiGrid can be used as an affective self-report tool for the assessment of sound- and video-evoked emotions.
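As an illustration of the rating procedure described above, the sketch below shows one way a click on a square grid could be converted into valence and arousal scores. The grid size in pixels, the top-left coordinate origin, and the 1-9 output range are assumptions made for this example; they are not taken from the study itself.

```python
# Illustrative sketch only: mapping a click on a square EmojiGrid-style grid
# to (valence, arousal) scores. Grid size, pixel origin (top-left), and the
# 1-9 output range are assumptions for this example.

def click_to_ratings(x_px: float, y_px: float, grid_size: float = 500.0,
                     scale_min: float = 1.0, scale_max: float = 9.0):
    """Convert a click position (pixels, origin at top-left) to
    (valence, arousal) on a common 1-9 self-report scale."""
    span = scale_max - scale_min
    # Horizontal axis: valence increases from left (negative) to right (positive).
    valence = scale_min + (x_px / grid_size) * span
    # Vertical axis: arousal increases from bottom (calm) to top (excited),
    # so the pixel y-coordinate is inverted.
    arousal = scale_min + ((grid_size - y_px) / grid_size) * span
    return valence, arousal

# Example: a click near the top-right corner yields high valence and high arousal.
print(click_to_ratings(450.0, 50.0))  # -> (8.2, 8.2)
```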

Amendments from Version 1
We added a concise review of the literature on the emotional affordances of emoji to the Introduction section. In the Data Analysis section, we now explain how the EmojiGrid data were scaled. The graphs in the Results section now represent data points by the identifiers of the corresponding stimuli, to allow visual assessment, comparison, and verification of the emotions induced by the different affective stimuli. We also added correlation plots for the mean valence and arousal ratings obtained with both the SAM and the EmojiGrid, to enable a direct comparison within each of these affective dimensions. In addition, we uploaded a new set of Excel notebooks to the Open Science Framework that include all graphs, together with a brief description of the nature and content of all stimuli, their original affective classification, and their mean valence and arousal values (1) as provided by the authors of the (sound and video) databases and (2) as measured in this study. We extended the Discussion section with some limitations of this study, such as the difficulty of measuring mixed emotions and the fact that the comparison of the SAM and EmojiGrid ratings was based on ratings from different populations.
TNO Identifier: 955711
Source: F1000 Research, 9(970)