An analysis of local and global solutions to address Big Data imbalanced classification: a case study with SMOTE preprocessing


Bibliographic Details
Main Authors: Basgall, María José, Hasperué, Waldo, Naiouf, Marcelo, Fernández, Alberto, Herrera, Francisco
Format: Conference paper
Language: English
Published: 2019
Online access: http://sedici.unlp.edu.ar/handle/10915/80384
Description
Summary: Addressing the huge amount of data continuously generated is an important challenge in the Machine Learning field. The need to adapt traditional techniques or create new ones is evident. To do so, distributed technologies have to be used to cope with the significant scalability constraints imposed by the Big Data context. In many Big Data classification applications, some classes are highly underrepresented, leading to what is known as the imbalanced classification problem. In this scenario, learning algorithms are often biased towards the majority classes, treating minority ones as outliers or noise. Consequently, preprocessing techniques to balance the class distribution have been developed. This can be achieved by suppressing majority instances (undersampling) or by creating minority examples (oversampling). Among the oversampling methods, one of the most widespread is the SMOTE algorithm, which creates artificial examples according to the neighborhood of each minority class instance. In this work, our objective is to analyze the behavior of SMOTE in Big Data as a function of some key aspects such as the oversampling degree, the neighborhood size and, especially, the type of distributed design (local vs. global).
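For reference, the sketch below illustrates the core SMOTE interpolation step in plain NumPy: each synthetic point is placed at a random position on the segment between a minority instance and one of its k nearest minority neighbors. This is only an illustrative, single-node sketch (the function and variable names are ours), not the local or global distributed design analyzed in the work; roughly, that distinction concerns whether the neighborhood search is restricted to each data partition or performed over the whole minority class.

```python
import numpy as np

def smote_sample(minority, k=5, n_synthetic=100, seed=0):
    """Generate synthetic minority examples by interpolating between
    each minority instance and one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    n = len(minority)
    # Pairwise distances within the minority class only
    dists = np.linalg.norm(minority[:, None, :] - minority[None, :, :], axis=-1)
    # For each instance, the indices of its k nearest neighbors (excluding itself)
    neighbors = np.argsort(dists, axis=1)[:, 1:k + 1]

    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(n)                # pick a minority instance at random
        j = neighbors[i, rng.integers(k)]  # pick one of its k nearest neighbors
        gap = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append(minority[i] + gap * (minority[j] - minority[i]))
    return np.array(synthetic)

# Example: oversample a small 2-D minority class
minority = np.array([[0.0, 0.0], [0.1, 0.2], [0.3, 0.1],
                     [0.2, 0.4], [0.5, 0.3], [0.4, 0.5]])
new_points = smote_sample(minority, k=3, n_synthetic=10)
print(new_points.shape)  # (10, 2)
```

In this sketch the oversampling degree corresponds to n_synthetic and the neighborhood size to k, the two parameters the abstract names alongside the local vs. global design choice.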