Multimodal subjectivity analysis of multiparty conversation
Conference paper
We investigate the combination of several sources of information for the purpose of subjectivity recognition and polarity classification in meetings. We focus on features from two modalities, transcribed words and acoustics, and we compare the performance of three different textual representations: words, characters, and phonemes. Our experiments show that character-level features outperform word-level features for these tasks, and that a careful fusion of all features yields the best performance. © 2008 Association for Computational Linguistics.
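The abstract's core finding is that character-level textual features outperform word-level features for subjectivity recognition. Below is a minimal illustrative sketch, not the authors' actual system, of how the two representations can be compared with a simple classifier. It assumes scikit-learn; the utterances and labels are hypothetical toy data.

    # Sketch: word-level vs. character-level features for
    # subjectivity classification (hypothetical toy data).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical transcribed utterances with subjectivity labels
    # (1 = subjective, 0 = objective).
    utterances = [
        "i really think this is a great idea",
        "the meeting starts at ten o'clock",
        "that proposal sounds terrible to me",
        "the report has twelve pages",
    ]
    labels = [1, 0, 1, 0]

    # Word-level features: bag of word unigrams.
    word_model = make_pipeline(
        CountVectorizer(analyzer="word", ngram_range=(1, 1)),
        LogisticRegression(max_iter=1000),
    )

    # Character-level features: character n-grams within word
    # boundaries (2-4 grams chosen for illustration), which capture
    # sub-word cues and degrade gracefully on noisy transcripts.
    char_model = make_pipeline(
        CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000),
    )

    for name, model in [("words", word_model), ("chars", char_model)]:
        model.fit(utterances, labels)
        print(name, model.predict(["this is absolutely wonderful"]))

In this setup, swapping the vectorizer's analyzer is the only change needed to move between representations, which makes the paper's word/character/phoneme comparison straightforward to reproduce in spirit; the actual feature sets, classifier, and fusion strategy used in the paper are not reflected here.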
TNO Identifier
436606
Source title
2008 Conference on Empirical Methods in Natural Language Processing, EMNLP 2008, Co-located with AMTA 2008 and the International Workshop on Spoken Language Translation, 25-27 October 2008, Honolulu, HI, USA
Pages
466-474