A human benchmark for the NIST Language Recognition Evaluation 2005

Conference paper
In this paper we describe a human benchmark experiment for language recognition. We used the same task, data, and evaluation measure as in the NIST Language Recognition Evaluation (LRE) 2005. For the primary condition of interest, all 10-second trials were used in the experiment. The experiment was conducted by 38 subjects, each of whom processed part of the trials. For the seven-language closed-set condition the human subjects obtained an average CDET of 23.1%. This result can be compared with machine results from the 2005 submissions, for instance that of Brno University of Technology, whose system scored 7.15% on this task. A detailed statistical analysis of the human benchmark results is given. We argue that the result is best expressed as the performance of 'naïve' subjects. © Odyssey 2008: Speaker and Language Recognition Workshop. All rights reserved.
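For context, the CDET values quoted in the abstract refer to the NIST detection cost. The sketch below shows the pair-wise cost and its average over target/non-target language pairs, assuming the standard NIST LRE cost parameters C_miss = C_FA = 1 and P_target = 0.5; these parameter values are assumed from the usual LRE setup and are not restated in this record.

  C_{DET}(L_T, L_N) = C_{miss} \, P_{target} \, P_{miss}(L_T) + C_{FA} \, (1 - P_{target}) \, P_{FA}(L_T, L_N)

  \bar{C}_{DET} = \frac{1}{N_{pairs}} \sum_{(L_T, L_N)} C_{DET}(L_T, L_N)

Here P_miss(L_T) is the miss rate for target language L_T and P_FA(L_T, L_N) is the false-alarm rate when segments of non-target language L_N are scored against target L_T; the single percentage reported above corresponds to this average cost.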
TNO Identifier
954019
Publisher
International Speech Communication Association
Source title
Odyssey 2008: Speaker and Language Recognition Workshop, 21–24 January 2008