Toward Fail-Safe Speaker Recognition: Trial-Based Calibration with a Reject Option
Published: 2019
Subjects: forensic voice comparison; speaker recognition; trial-based calibration
Online access: https://bibliotecadigital.exactas.uba.ar/collection/paper/document/paper_23299290_v27_n1_p140_Ferrer http://hdl.handle.net/20.500.12110/paper_23299290_v27_n1_p140_Ferrer
id
paper:paper_23299290_v27_n1_p140_Ferrer
record_format
dspace
institution
Universidad de Buenos Aires
institution_str
I-28
repository_str
R-134
collection
Biblioteca Digital - Facultad de Ciencias Exactas y Naturales (UBA)
topic
forensic voice comparison; Speaker recognition; trial-based calibration; Calibration; Data structures; Logistics; Mathematical transformations; Personnel training; Statistical tests; Computational model; Forensic voice comparisons; Forensics; Probabilistic interpretation; Similarity metrics; Speaker recognition; Speaker recognition system; Standard calibration; Speech recognition
description
The output scores of most speaker recognition systems are not directly interpretable as stand-alone values. For this reason, a calibration step is usually performed on the scores to convert them into proper likelihood ratios, which have a clear probabilistic interpretation. The standard calibration approach transforms the system scores using a linear function trained on data selected to closely match the evaluation conditions. This selection, though, is not feasible when the evaluation conditions are unknown. In previous work, we proposed a calibration approach for this scenario called trial-based calibration (TBC). TBC trains a separate calibration model for each test trial, using data dynamically selected from a candidate training set to match the conditions of the trial. In this work, we extend the TBC method, proposing: 1) a new similarity metric for selecting training data that results in significant gains over the one proposed in the original work; 2) a new option that enables the system to reject a trial when not enough matched data are available for training the calibration model; and 3) the use of regularization to improve the robustness of the calibration models trained for each trial. We test the proposed algorithms on a development set composed of several conditions and on the Federal Bureau of Investigation multi-condition speaker recognition dataset, and we demonstrate that the proposed approach reduces calibration loss to values close to 0 for most of the conditions when matched calibration data are available for selection, and that it can reject most of the trials for which relevant calibration data are unavailable. © 2014 IEEE.
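The description sketches the TBC pipeline: for each test trial, select calibration trials whose conditions resemble the trial's, reject the trial if too few matched trials (or only one class) are found, and otherwise fit a regularized linear score-to-log-likelihood-ratio mapping on the selected data. The sketch below illustrates that flow under stated assumptions; the cosine-similarity selection over hypothetical condition embeddings, the `min_trials` threshold, and the gradient-descent fit are placeholders, not the similarity metric, rejection rule, or training procedure used in the paper.

```python
import numpy as np

def select_matched_trials(trial_embedding, cand_embeddings, threshold=0.6):
    """Pick candidate calibration trials whose (hypothetical) condition
    embedding is close to the test trial's. Cosine similarity is an
    illustrative stand-in for the paper's similarity metric."""
    sims = cand_embeddings @ trial_embedding / (
        np.linalg.norm(cand_embeddings, axis=1)
        * np.linalg.norm(trial_embedding) + 1e-12)
    return np.where(sims >= threshold)[0]

def fit_linear_calibration(scores, labels, l2=1.0, lr=0.1, iters=500):
    """Fit a linear score-to-LLR map s -> a*s + b by minimizing logistic
    loss with an L2 penalty pulling (a, b) toward (1, 0); plain gradient
    descent keeps the example self-contained."""
    a, b = 1.0, 0.0
    y = 2.0 * labels - 1.0              # {0, 1} -> {-1, +1}
    for _ in range(iters):
        z = y * (a * scores + b)
        g = -y / (1.0 + np.exp(z))      # derivative of log(1 + exp(-z))
        a -= lr * (np.mean(g * scores) + l2 * (a - 1.0))
        b -= lr * (np.mean(g) + l2 * b)
    return a, b

def calibrate_trial(raw_score, trial_embedding, cand_scores, cand_labels,
                    cand_embeddings, min_trials=50):
    """Per-trial calibration with a reject option: train a calibration
    model on matched data only, or return None when too little is found."""
    idx = select_matched_trials(trial_embedding, cand_embeddings)
    if len(idx) < min_trials or len(np.unique(cand_labels[idx])) < 2:
        return None                     # reject: no reliable LLR possible
    a, b = fit_linear_calibration(cand_scores[idx], cand_labels[idx])
    return a * raw_score + b            # calibrated log-likelihood ratio
```

Regularizing toward (a, b) = (1, 0) rather than (0, 0) keeps the per-trial mapping close to an identity transform when few matched trials are selected, which is one plausible reading of the regularization the abstract mentions.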
title
Toward Fail-Safe Speaker Recognition: Trial-Based Calibration with a Reject Option
publishDate
2019
url
https://bibliotecadigital.exactas.uba.ar/collection/paper/document/paper_23299290_v27_n1_p140_Ferrer http://hdl.handle.net/20.500.12110/paper_23299290_v27_n1_p140_Ferrer
_version_
1768542525079945216 |