Improving network generalization through selection of examples

Bibliographic Details
Main authors: Cannas, Sergio A.; Franco, Leonardo
Format: Conference object
Language: English
Published: 1998
Online access: http://sedici.unlp.edu.ar/handle/10915/24820
Description
Summary: In this work, we study how the selection of examples affects the learning procedure in a neural network, and how this depends on the complexity of the function under study and on the network architecture. We focus on three different problems: parity, addition of two numbers, and bit shifting, implemented on feed-forward neural networks. For the parity problem, one of the most widely used problems for testing learning algorithms, we find that only the use of the whole set of examples ensures global learning. For the other two functions we show that generalization can be considerably improved by a particular selection of examples rather than a random one.
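
The kind of experiment described in the summary can be pictured with a small numerical sketch. The code below is only an illustration under our own assumptions (the NumPy implementation, network size, learning rate, and number of epochs are arbitrary choices, not the authors' setup): it trains a one-hidden-layer feed-forward network on the n-bit parity function, once on the complete set of 2^n examples and once on a random half, and reports how each run fits the examples it did not see.

# Illustrative sketch (not the authors' code): train a small feed-forward
# network on the n-bit parity problem, first with the full example set and
# then with a random subset, and compare how well each run fits the
# remaining (unseen) examples.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def parity_dataset(n_bits):
    """All 2^n binary inputs with their parity (XOR of the bits) as target."""
    X = np.array(list(itertools.product([0, 1], repeat=n_bits)), dtype=float)
    y = (X.sum(axis=1) % 2).reshape(-1, 1)
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, hidden=8, epochs=20000, lr=2.0):
    """Plain batch gradient descent on a one-hidden-layer sigmoid network."""
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1));    b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                 # hidden activations
        out = sigmoid(h @ W2 + b2)               # network output
        d_out = (out - y) * out * (1 - out)      # output-layer delta (squared error)
        d_h = (d_out @ W2.T) * h * (1 - h)       # hidden-layer delta
        W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)
    return lambda Xq: (sigmoid(sigmoid(Xq @ W1 + b1) @ W2 + b2) > 0.5).astype(float)

n_bits = 4
X, y = parity_dataset(n_bits)

# Train on the complete set of 2^n examples.
full_net = train(X, y)
print("full set    -> accuracy on all examples:", (full_net(X) == y).mean())

# Train on a random half of the examples and test on the held-out half.
idx = rng.permutation(len(X))
train_idx, test_idx = idx[: len(X) // 2], idx[len(X) // 2 :]
sub_net = train(X[train_idx], y[train_idx])
print("random half -> accuracy on unseen half :",
      (sub_net(X[test_idx]) == y[test_idx]).mean())

Running the sketch with different subsets makes the contrast in the summary concrete: with parity, any example left out of training is essentially unconstrained, so only the full set pins down the function, whereas for more structured tasks a well-chosen subset can still generalize.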