Evaluating automatic laughter segmentation in meetings using acoustic and acoustic-phonetic features
conference paper
In this study, we investigated automatic laughter segmentation in meetings. We first performed laughter-speech discrimination experiments with traditional spectral features and subsequently used acoustic-phonetic features. For segmentation, we used Gaussian Mixture Models trained with spectral features. We evaluated the laughter segmentation with time-weighted Detection Error Tradeoff curves. The results show that the acoustic-phonetic features perform relatively well given their sparseness. For segmentation, we believe that incorporating phonetic knowledge could lead to improvement. We will discuss possibilities for improving our automatic laughter detector.
Keywords: laughter detection, laughter
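The abstract describes scoring frames with class-specific Gaussian Mixture Models trained on spectral features and evaluating with Detection Error Tradeoff curves. The sketch below illustrates that general approach only; the synthetic features, scikit-learn GaussianMixture models, component counts, and per-frame (rather than time-weighted) error sweep are illustrative assumptions, not the paper's actual features, toolkit, or evaluation.

```python
# Hedged sketch: GMM-based laughter/speech frame scoring plus a DET-style
# miss / false-alarm threshold sweep. All data and model sizes are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in "spectral" feature frames (MFCC-like vectors); in the paper these
# would be extracted from real meeting audio.
dim = 13
laugh_train = rng.normal(loc=1.0, scale=1.0, size=(2000, dim))
speech_train = rng.normal(loc=-1.0, scale=1.0, size=(2000, dim))

# One GMM per class, each trained only on frames of its own class.
gmm_laugh = GaussianMixture(n_components=8, covariance_type="diag",
                            random_state=0).fit(laugh_train)
gmm_speech = GaussianMixture(n_components=8, covariance_type="diag",
                             random_state=0).fit(speech_train)

# Score held-out frames with the laughter-vs-speech log-likelihood ratio.
laugh_test = rng.normal(loc=1.0, scale=1.0, size=(500, dim))
speech_test = rng.normal(loc=-1.0, scale=1.0, size=(500, dim))
scores = np.concatenate([
    gmm_laugh.score_samples(laugh_test) - gmm_speech.score_samples(laugh_test),
    gmm_laugh.score_samples(speech_test) - gmm_speech.score_samples(speech_test),
])
labels = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = laughter

# Sweep the decision threshold to trace miss vs. false-alarm rates,
# the two axes of a DET curve (per frame here, not time-weighted).
for thr in np.linspace(scores.min(), scores.max(), 5):
    miss = np.mean(scores[labels == 1] < thr)
    false_alarm = np.mean(scores[labels == 0] >= thr)
    print(f"thr={thr:6.2f}  miss={miss:.2f}  FA={false_alarm:.2f}")
```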
TNO Identifier
19219
Source title
Workshop on Phonetics of Laughter