An introduction to application-independent evaluation of speaker recognition systems

Book chapter
In the evaluation of speaker recognition systems, an important part of speaker classification [1], the trade-off between missed speakers and false alarms has always been an important diagnostic tool. NIST has defined the task of speaker detection with the associated Detection Cost Function (DCF) to evaluate performance, and introduced the DET plot [2] as a diagnostic tool. Since the first evaluation in 1996, these evaluation tools have been embraced by the research community. Although the DCF is an excellent measure, it has the limitation that its parameters imply a particular application of the speaker detection technology. In this chapter we introduce an evaluation measure that instead averages detection performance over application types. This metric, Cllr, was first introduced in 2004 by one of the authors [3]. Here we introduce the subject with a minimum of mathematical detail, concentrating on the various interpretations of Cllr and its practical application. We emphasize the difference between the discrimination abilities of a speaker detector ('the position/shape of the DET curve') and the calibration of the detector ('how well the threshold was set'). If speaker detectors can be built to output well-calibrated log-likelihood-ratio scores, such detectors can be said to have an application-independent calibration. The proposed metric Cllr can properly evaluate the discrimination abilities of the log-likelihood-ratio scores as well as the quality of the calibration. © Springer-Verlag Berlin Heidelberg 2007.
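As a rough illustration of the quantities the abstract refers to, the sketch below computes a detection cost function and the Cllr measure from target and non-target log-likelihood-ratio scores, using the standard definition Cllr = ½ [ mean over targets of log2(1 + e^(−s)) + mean over non-targets of log2(1 + e^(s)) ]. The cost parameters, score distributions, and function names are illustrative assumptions for this sketch, not material from the chapter itself.

```python
import numpy as np

def dcf(p_miss, p_fa, p_target=0.01, c_miss=10.0, c_fa=1.0):
    """Detection Cost Function at one operating point.
    The cost/prior parameters shown here are illustrative defaults;
    NIST specifies the actual values per evaluation."""
    return c_miss * p_miss * p_target + c_fa * p_fa * (1.0 - p_target)

def cllr(target_llrs, nontarget_llrs):
    """Cllr in bits, computed from natural-log likelihood-ratio scores.
    0 = perfect; 1 corresponds to a detector that carries no information
    (always outputs a log-likelihood ratio of 0)."""
    t = np.asarray(target_llrs, dtype=float)
    n = np.asarray(nontarget_llrs, dtype=float)
    # log2(1 + e^{-s}) for targets and log2(1 + e^{s}) for non-targets,
    # via logaddexp for numerical stability.
    c_tar = np.mean(np.logaddexp(0.0, -t)) / np.log(2.0)
    c_non = np.mean(np.logaddexp(0.0, n)) / np.log(2.0)
    return 0.5 * (c_tar + c_non)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic, roughly calibrated scores: targets centred at +2, non-targets at -2.
    tar = rng.normal(2.0, 1.5, 1000)
    non = rng.normal(-2.0, 1.5, 10000)
    print("Cllr =", cllr(tar, non))
    print("DCF at a fixed threshold of 0:",
          dcf(np.mean(tar < 0.0), np.mean(non >= 0.0)))
```

In this picture, shifting all scores by a constant offset leaves the DET curve (discrimination) unchanged but inflates Cllr, which is how the metric exposes poor calibration.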
TNO Identifier
240309
ISSN
0302-9743
ISBN
3-540-74186-0; 978-3-540-74186-2
Source title
Speaker Classification I
Editor(s)
Mueller, C.
Pages
330-353
Files
To receive the publication files, please send an e-mail request to TNO Repository.