On the class distribution labelling step sensitivity of co-training

Bibliographic Details
Main Authors: Matsubara, Edson T., Monard, Maria C., Prati, Ronaldo
Format: Conference paper
Language: English
Published: 2006
Online access: http://sedici.unlp.edu.ar/handle/10915/23900

Summary: Co-training can learn from datasets having a small number of labelled examples and a large number of unlabelled ones. It is an iterative algorithm in which examples labelled in previous iterations are used to improve the classification of examples from the unlabelled set. However, because the number of initial labelled examples is often small, we do not have reliable estimates of the underlying population that generated the data. In this work we argue that the proportion in which examples are labelled is a key parameter of co-training. Furthermore, we conducted a series of experiments to investigate how the proportion in which examples are labelled at each step influences co-training performance. Results show that co-training should be used with care in challenging domains.
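
To make the labelling-step parameter concrete, the following is a minimal co-training sketch in Python, not the authors' implementation. It assumes two feature views, binary labels {0, 1} both present in the seed set, scikit-learn Gaussian naive Bayes as the base learner, and hypothetical parameters n_pos/n_neg that fix how many positives and negatives each classifier labels per iteration; varying their ratio away from the true class distribution is, in essence, the sensitivity the paper investigates.

import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, U1, U2, n_pos=1, n_neg=3, n_iter=30):
    """Co-training sketch over two feature views (X1, X2).

    n_pos and n_neg (assumed names) set how many positive/negative
    examples each view's classifier labels per iteration, i.e. the
    class-distribution labelling step the abstract refers to.
    """
    h1, h2 = GaussianNB(), GaussianNB()
    X1, X2, y = list(X1), list(X2), list(y)
    pool = list(range(len(U1)))  # indices of still-unlabelled examples
    for _ in range(n_iter):
        if not pool:
            break
        # Retrain both view-specific classifiers on the current labelled set.
        h1.fit(np.array(X1), np.array(y))
        h2.fit(np.array(X2), np.array(y))
        newly = []
        for h, U in ((h1, U1), (h2, U2)):
            proba = h.predict_proba(np.array([U[i] for i in pool]))
            # Each classifier nominates its most confident candidates per
            # class, respecting the n_pos:n_neg labelling proportion.
            for cls, k in ((1, n_pos), (0, n_neg)):
                col = list(h.classes_).index(cls)
                best = np.argsort(proba[:, col])[::-1][:k]
                newly.extend((pool[i], cls) for i in best)
        # Move nominated examples into the labelled set with their
        # predicted labels; if both views picked the same example,
        # the first nomination wins.
        for idx, cls in newly:
            if idx in pool:
                pool.remove(idx)
                X1.append(U1[idx])
                X2.append(U2[idx])
                y.append(cls)
    # Final refit on the augmented labelled set.
    h1.fit(np.array(X1), np.array(y))
    h2.fit(np.array(X2), np.array(y))
    return h1, h2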