The 2016 Speakers in the Wild Speaker Recognition Evaluation
The newly collected Speakers in the Wild (SITW) database was central to a text-independent speaker recognition challenge held as part of a special session at Interspeech 2016. The SITW database is composed of audio recordings from 299 speakers collected from open-source media, with an average of 8 sessions per speaker.
Saved in:
Published: 2016
Subjects: Evaluation; Speaker recognition; Speakers in the wild database; Audio recordings; Character recognition; Database systems; Speech communication; Speech processing; Acoustic conditions; Evaluation results; Future research directions; International team; Speaker recognition evaluations; Text independents; Speech recognition
Online access: https://bibliotecadigital.exactas.uba.ar/collection/paper/document/paper_2308457X_v08-12-September-2016_n_p823_McLaren http://hdl.handle.net/20.500.12110/paper_2308457X_v08-12-September-2016_n_p823_McLaren
Contributed by:
id: paper:paper_2308457X_v08-12-September-2016_n_p823_McLaren
record_format: dspace
spelling: paper:paper_2308457X_v08-12-September-2016_n_p823_McLaren 2023-06-08T16:35:31Z
institution: Universidad de Buenos Aires
institution_str: I-28
repository_str: R-134
collection: Biblioteca Digital - Facultad de Ciencias Exactas y Naturales (UBA)
topic: Evaluation; Speaker recognition; Speakers in the wild database; Audio recordings; Character recognition; Database systems; Speech communication; Speech processing; Acoustic conditions; Evaluation results; Future research directions; International team; Speaker recognition evaluations; Text independents; Speech recognition
description: The newly collected Speakers in the Wild (SITW) database was central to a text-independent speaker recognition challenge held as part of a special session at Interspeech 2016. The SITW database is composed of audio recordings from 299 speakers collected from open-source media, with an average of 8 sessions per speaker. The recordings contain unconstrained or "wild" acoustic conditions, rarely found in large speaker recognition datasets, and multi-speaker recordings for both speaker enrollment and verification. This article provides details of the SITW speaker recognition challenge and an analysis of evaluation results. There were 25 international teams involved in the challenge, of which 11 teams participated in an evaluation track. Teams were tasked with applying existing and novel speaker recognition algorithms to the challenges associated with the real-world conditions of SITW. We provide an analysis of some of the top-performing systems submitted during the evaluation and suggest future research directions. Copyright ©2016 ISCA.
title: The 2016 Speakers in the Wild Speaker Recognition Evaluation
publishDate: 2016
url: https://bibliotecadigital.exactas.uba.ar/collection/paper/document/paper_2308457X_v08-12-September-2016_n_p823_McLaren http://hdl.handle.net/20.500.12110/paper_2308457X_v08-12-September-2016_n_p823_McLaren