The 2016 Speakers in the Wild speaker recognition evaluation

The newly collected Speakers in the Wild (SITW) database was central to a text-independent speaker recognition challenge held as part of a special session at Interspeech 2016. The SITW database is composed of audio recordings from 299 speakers collected from open-source media, with an average of 8 sessions per speaker...

Full description

Saved in:
Bibliographic Details
Main Authors: McLaren, M., Ferrer, L., Castan, D., Lawson, A., Morgan, N., Georgiou, P., Narayanan, S., Metze, F., Amazon Alexa; Apple; eBay; et al.; Google; Microsoft
Format: CONF
Subjects:
Online Access:http://hdl.handle.net/20.500.12110/paper_2308457X_v08-12-September-2016_n_p823_McLaren
Contributed by:
id todo:paper_2308457X_v08-12-September-2016_n_p823_McLaren
record_format dspace
spelling todo:paper_2308457X_v08-12-September-2016_n_p823_McLaren 2023-10-03T16:40:52Z The 2016 Speakers in the Wild speaker recognition evaluation McLaren, M. Ferrer, L. Castan, D. Lawson, A. Morgan, N. Georgiou, P. Narayanan, S. Metze, F. Amazon Alexa; Apple; eBay; et al.; Google; Microsoft Evaluation Speaker recognition Speakers in the wild database Audio recordings Character recognition Database systems Speech communication Speech processing Acoustic conditions Evaluation results Future research directions International team Speaker recognition evaluations Text-independent Speech recognition The newly collected Speakers in the Wild (SITW) database was central to a text-independent speaker recognition challenge held as part of a special session at Interspeech 2016. The SITW database is composed of audio recordings from 299 speakers collected from open-source media, with an average of 8 sessions per speaker. The recordings contain unconstrained or "wild" acoustic conditions, rarely found in large speaker recognition datasets, and multi-speaker recordings for both speaker enrollment and verification. This article provides details of the SITW speaker recognition challenge and an analysis of evaluation results. There were 25 international teams involved in the challenge, of which 11 participated in an evaluation track. Teams were tasked with applying existing and novel speaker recognition algorithms to the challenges associated with the real-world conditions of SITW. We provide an analysis of some of the top-performing systems submitted during the evaluation and outline future research directions. Copyright ©2016 ISCA. CONF info:eu-repo/semantics/openAccess http://creativecommons.org/licenses/by/2.5/ar http://hdl.handle.net/20.500.12110/paper_2308457X_v08-12-September-2016_n_p823_McLaren
institution Universidad de Buenos Aires
institution_str I-28
repository_str R-134
collection Biblioteca Digital - Facultad de Ciencias Exactas y Naturales (UBA)
topic Evaluation
Speaker recognition
Speakers in the wild database
Audio recordings
Character recognition
Database systems
Speech communication
Speech processing
Acoustic conditions
Evaluation results
Future research directions
International team
Speaker recognition evaluations
Text-independent
Speech recognition
spellingShingle Evaluation
Speaker recognition
Speakers in the wild database
Audio recordings
Character recognition
Database systems
Speech communication
Speech processing
Acoustic conditions
Evaluation results
Future research directions
International team
Speaker recognition evaluations
Text-independent
Speech recognition
McLaren, M.
Ferrer, L.
Castan, D.
Lawson, A.
Morgan, N.
Georgiou, P.
Narayanan, S.
Metze, F.
Amazon Alexa; Apple; eBay; et al.; Google; Microsoft
The 2016 Speakers in the Wild speaker recognition evaluation
topic_facet Evaluation
Speaker recognition
Speakers in the wild database
Audio recordings
Character recognition
Database systems
Speech communication
Speech processing
Acoustic conditions
Evaluation results
Future research directions
International team
Speaker recognition evaluations
Text-independent
Speech recognition
description The newly collected Speakers in the Wild (SITW) database was central to a text-independent speaker recognition challenge held as part of a special session at Interspeech 2016. The SITW database is composed of audio recordings from 299 speakers collected from open-source media, with an average of 8 sessions per speaker. The recordings contain unconstrained or "wild" acoustic conditions, rarely found in large speaker recognition datasets, and multi-speaker recordings for both speaker enrollment and verification. This article provides details of the SITW speaker recognition challenge and an analysis of evaluation results. There were 25 international teams involved in the challenge, of which 11 participated in an evaluation track. Teams were tasked with applying existing and novel speaker recognition algorithms to the challenges associated with the real-world conditions of SITW. We provide an analysis of some of the top-performing systems submitted during the evaluation and outline future research directions. Copyright ©2016 ISCA.
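
The description refers to enrollment-versus-test verification trials and to comparing systems on evaluation results. As a purely illustrative aid, the sketch below shows one common way such trials are scored and summarized: cosine similarity between fixed-length speaker embeddings, and an equal error rate (EER) computed over the resulting score lists. The embeddings, score distributions, and metric choice here are assumptions for illustration, not the official SITW protocol or any participant's system.

# Minimal illustrative sketch, not the SITW protocol: scores a verification
# trial as the cosine similarity between two fixed-length speaker embeddings
# (hypothetical 128-dim vectors here) and computes an equal error rate (EER)
# from lists of same-speaker and different-speaker trial scores.
import numpy as np

def cosine_score(enroll: np.ndarray, test: np.ndarray) -> float:
    """Similarity between an enrollment embedding and a test embedding."""
    return float(np.dot(enroll, test)
                 / (np.linalg.norm(enroll) * np.linalg.norm(test)))

def equal_error_rate(target_scores: np.ndarray, nontarget_scores: np.ndarray) -> float:
    """EER: the operating point where the miss rate equals the false-accept rate."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(nontarget_scores))])
    order = np.argsort(scores)                  # sweep the threshold over all scores
    labels = labels[order]
    miss = np.cumsum(labels) / labels.sum()     # targets at or below the threshold
    fa = 1.0 - np.cumsum(1.0 - labels) / (1.0 - labels).sum()  # nontargets above it
    i = int(np.argmin(np.abs(miss - fa)))       # closest approach of the two rates
    return float((miss[i] + fa[i]) / 2.0)

rng = np.random.default_rng(0)

# One toy trial: a "same speaker" pair modeled as a noisy copy of an embedding.
enroll = rng.normal(size=128)
test = enroll + rng.normal(scale=0.3, size=128)
print(f"cosine score for a same-speaker-like pair: {cosine_score(enroll, test):.3f}")

# Toy score lists with well-separated distributions give a low EER.
targets = rng.normal(0.7, 0.1, 1000)        # same-speaker trial scores
nontargets = rng.normal(0.1, 0.1, 10000)    # different-speaker trial scores
print(f"EER: {equal_error_rate(targets, nontargets):.2%}")

Published speaker recognition evaluations typically report cost-based detection metrics alongside EER, and real systems replace the toy embeddings above with i-vectors or neural speaker embeddings.
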
format CONF
author McLaren, M.
Ferrer, L.
Castan, D.
Lawson, A.
Morgan, N.
Georgiou, P.
Narayanan, S.
Metze, F.
Amazon Alexa; Apple; eBay; et al.; Google; Microsoft
author_facet McLaren, M.
Ferrer, L.
Castan, D.
Lawson, A.
Morgan, N.
Georgiou, P.
Narayanan, S.
Metze, F.
Amazon Alexa; Apple; eBay; et al.; Google; Microsoft
author_sort McLaren, M.
title The 2016 Speakers in the Wild speaker recognition evaluation
title_short The 2016 Speakers in the Wild speaker recognition evaluation
title_full The 2016 Speakers in the Wild speaker recognition evaluation
title_fullStr The 2016 Speakers in the Wild speaker recognition evaluation
title_full_unstemmed The 2016 Speakers in the Wild speaker recognition evaluation
title_sort 2016 speakers in the wild speaker recognition evaluation
url http://hdl.handle.net/20.500.12110/paper_2308457X_v08-12-September-2016_n_p823_McLaren
work_keys_str_mv AT mclarenm the2016speakersinthewildspeakerrecognitionevaluation
AT ferrerl the2016speakersinthewildspeakerrecognitionevaluation
AT castand the2016speakersinthewildspeakerrecognitionevaluation
AT lawsona the2016speakersinthewildspeakerrecognitionevaluation
AT morgann the2016speakersinthewildspeakerrecognitionevaluation
AT georgioup the2016speakersinthewildspeakerrecognitionevaluation
AT narayanans the2016speakersinthewildspeakerrecognitionevaluation
AT metzef the2016speakersinthewildspeakerrecognitionevaluation
AT amazonalexaappleebayetalgooglemicrosoft the2016speakersinthewildspeakerrecognitionevaluation
AT mclarenm 2016speakersinthewildspeakerrecognitionevaluation
AT ferrerl 2016speakersinthewildspeakerrecognitionevaluation
AT castand 2016speakersinthewildspeakerrecognitionevaluation
AT lawsona 2016speakersinthewildspeakerrecognitionevaluation
AT morgann 2016speakersinthewildspeakerrecognitionevaluation
AT georgioup 2016speakersinthewildspeakerrecognitionevaluation
AT narayanans 2016speakersinthewildspeakerrecognitionevaluation
AT metzef 2016speakersinthewildspeakerrecognitionevaluation
AT amazonalexaappleebayetalgooglemicrosoft 2016speakersinthewildspeakerrecognitionevaluation
_version_ 1782029088650690560