Title
Automatic discrimination between laughter and speech
Author
Truong, K.
van Leeuwen, D.
TNO Defensie en Veiligheid
Publication year
2007
Abstract
Emotions can be recognized by audible paralinguistic cues in speech. By detecting these paralinguistic cues, which can consist of laughter, a trembling voice, coughs, changes in the intonation contour, etc., information about the speaker's state and emotion can be revealed. This paper describes the development of a gender-independent laugh detector with the aim of enabling automatic emotion recognition. Different types of features (spectral, prosodic) for laughter detection were investigated using different classification techniques (Gaussian Mixture Models, Support Vector Machines, Multi-Layer Perceptrons) often used in language and speaker recognition. Classification experiments were carried out with short pre-segmented speech and laughter segments extracted from the ICSI Meeting Recorder Corpus (with a mean duration of approximately 2 s). Equal error rates of around 3% were obtained when tested on speaker-independent speech data. We found that a fusion between classifiers based on Gaussian Mixture Models and classifiers based on Support Vector Machines increases discriminative power. We also found that a fusion between classifiers that use spectral features and classifiers that use prosodic information usually increases the performance for discrimination between laughter and speech. Our acoustic measurements showed differences between laughter and speech in mean pitch and in the ratio of the durations of unvoiced to voiced portions, which indicate that these prosodic features are indeed useful for discrimination between laughter and speech.
Keywords
Automatic detection laughter; Automatic detection emotion
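The abstract reports performance as an equal error rate (EER) and notes that score-level fusion of GMM- and SVM-based classifiers improves discrimination. As an illustration only (not the authors' implementation), the sketch below shows how an EER can be computed from detector scores and how two classifiers' scores can be fused by a simple weighted average; the function names and the fixed weight `w` are assumptions for the example.

```python
def eer(target_scores, nontarget_scores):
    """Equal error rate: sweep candidate thresholds and return the point
    where the false-accept rate and false-reject rate (approximately) meet.
    `target_scores` are detector outputs for true laughter segments,
    `nontarget_scores` for speech segments (illustrative naming)."""
    thresholds = sorted(set(target_scores) | set(nontarget_scores))
    best_gap, best_eer = None, None
    for t in thresholds:
        far = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        frr = sum(s < t for s in target_scores) / len(target_scores)
        gap = abs(far - frr)
        if best_gap is None or gap < best_gap:
            best_gap, best_eer = gap, (far + frr) / 2
    return best_eer

def fuse(scores_a, scores_b, w=0.5):
    """Linear score-level fusion of two classifiers' outputs
    (a common fusion scheme; the weight here is a placeholder)."""
    return [w * a + (1 - w) * b for a, b in zip(scores_a, scores_b)]
```

In practice the fusion weight would be trained on held-out data, and scores from different classifiers would be normalized to a comparable range before averaging.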
Subject
Acoustics and Audiology
Automatic detection emotion
Automatic detection laughter
Acoustic waves
Error detection
Gesture recognition
Learning systems
Mathematical models
Laughter segments
Speech data
Speech analysis
Emotions
Speech recognition
Automatic speech recognition
Laughter
To reference this document use:
http://resolver.tudelft.nl/uuid:b3d277d6-efdd-4571-bbcb-71b991bdcd50
DOI
https://doi.org/10.1016/j.specom.2007.01.001
TNO identifier
16432
Source
Speech Communication, 49 (2), 144-158
Document type
article