Random Forest-like strategies for neural network ensemble construction

Bibliographic details
Main authors: Namías, Rafael; Granitto, Pablo Miguel
Format: Conference paper
Language: English
Published: 2007
Online access: http://sedici.unlp.edu.ar/handle/10915/23485
Description
Summary: Ensemble methods show improved generalization capabilities that outperform those of single learners. It is generally accepted that, for aggregation to be effective, the individual learners must be as accurate and diverse as possible. An important problem in ensemble learning is then how to find a good balance between these two conflicting conditions. For tree-based methods a successful strategy was introduced by Breiman with the Random Forest algorithm. In this work we introduce new methods for neural network ensemble construction that follow Random-Forest-like strategies. Using several real and artificial regression problems, we compare our new methods with the more typical Bagging algorithm and with three state-of-the-art regression methods. We find that our algorithms produce very good results on several datasets. Some evidence suggests that our new methods work better on problems with several redundant or noisy inputs.
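
The abstract does not detail the proposed algorithms. The sketch below only illustrates the general idea they build on: a Bagging-style ensemble of neural networks in which each member also receives a random subset of the input features, the Random-Forest-like source of diversity. The class name RandomSubspaceNNEnsemble, the MLPRegressor configuration, and the parameters n_members and feature_fraction are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

class RandomSubspaceNNEnsemble:
    """Illustrative sketch (not the paper's method): Bagging of small
    neural networks with a random feature subset per member, averaged
    for regression."""

    def __init__(self, n_members=25, feature_fraction=0.5, random_state=0):
        self.n_members = n_members
        self.feature_fraction = feature_fraction
        self.rng = np.random.default_rng(random_state)
        self.members = []           # fitted MLPRegressor instances
        self.feature_subsets = []   # feature indices used by each member

    def fit(self, X, y):
        n_samples, n_features = X.shape
        k = max(1, int(self.feature_fraction * n_features))
        for _ in range(self.n_members):
            # Bootstrap resampling of the training set (as in Bagging).
            rows = self.rng.integers(0, n_samples, size=n_samples)
            # Random feature subset for this member (Random-Forest-like).
            cols = self.rng.choice(n_features, size=k, replace=False)
            net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
            net.fit(X[np.ix_(rows, cols)], y[rows])
            self.members.append(net)
            self.feature_subsets.append(cols)
        return self

    def predict(self, X):
        # Simple aggregation for regression: average the members' outputs.
        preds = [net.predict(X[:, cols])
                 for net, cols in zip(self.members, self.feature_subsets)]
        return np.mean(preds, axis=0)
```

Restricting each network to a random feature subset tends to help most when many inputs are redundant or noisy, which is consistent with the behaviour the abstract reports for the proposed methods.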