A low communication overhead parallel implementation of the back-propagation algorithm

Bibliographic Details
Main Authors: Alfonso, Marcelo; Kavka, Carlos; Printista, Alicia Marcela
Format: Conference object
Language: English
Published: 2000
Subjects: Computer Science; Neural nets; Parallel
Online Access: http://sedici.unlp.edu.ar/handle/10915/23442
id I19-R120-10915-23442
record_format dspace
institution Universidad Nacional de La Plata
institution_str I-19
repository_str R-120
collection SEDICI (UNLP)
language English
topic Computer Science
Neural nets
Parallel
description The back-propagation algorithm is one of the most widely used training algorithms for neural networks. Training a multilayer perceptron with this algorithm can take a very long time, which makes neural networks difficult to adopt. One approach to this problem is to parallelize the training algorithm. Many different approaches exist; however, most of them are tailored to specialized hardware. The idea of using a network of workstations as a general-purpose parallel computer is widely accepted, but the communication overhead imposes restrictions on the design of parallel algorithms. In this work, we propose a parallel implementation of the back-propagation algorithm that is suitable for a network of workstations. The objective is twofold. The first goal is to increase the performance of the training phase of the algorithm with low communication overhead. The second goal is to provide a dynamic assignment of tasks to processors in order to make the best use of the computational resources.
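
The abstract does not spell out the parallel scheme, so the following is only an illustrative sketch: a common way to keep communication overhead low on a network of workstations is to partition the training set, let each worker run back-propagation over its own block of patterns, and exchange only the accumulated weight updates once per batch. The Python/NumPy code below simulates that pattern-partitioned scheme in a single process; the names (init_mlp, train_pattern_partitioned), the network sizes, and the toy data are our assumptions, not the paper's implementation.

# Minimal sketch (not the paper's code) of pattern-partitioned back-propagation:
# each simulated worker computes a partial gradient over its own slice of the
# training set, and only those partial gradients are combined once per epoch --
# the single step that would require communication on a network of workstations.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden, n_out):
    # Small one-hidden-layer perceptron with sigmoid units (illustrative sizes).
    return {
        "W1": rng.normal(scale=0.5, size=(n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(scale=0.5, size=(n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradients(net, X, T):
    # Plain back-propagation over one block of patterns (squared error);
    # returns the gradient summed over that block.
    H = sigmoid(X @ net["W1"] + net["b1"])      # hidden activations
    Y = sigmoid(H @ net["W2"] + net["b2"])      # network outputs
    dY = (Y - T) * Y * (1.0 - Y)                # output deltas
    dH = (dY @ net["W2"].T) * H * (1.0 - H)     # hidden deltas
    return {"W1": X.T @ dH, "b1": dH.sum(axis=0),
            "W2": H.T @ dY, "b2": dY.sum(axis=0)}

def train_pattern_partitioned(net, X, T, workers=2, lr=0.5, epochs=2000):
    # Each simulated worker owns a fixed slice of the training set. Per epoch,
    # every worker computes a partial gradient locally, and only those partial
    # gradients are reduced into a single weight update.
    slices = np.array_split(np.arange(len(X)), workers)
    for _ in range(epochs):
        partial = [gradients(net, X[s], T[s]) for s in slices]   # local work
        for key in net:                                          # "reduction"
            net[key] -= lr * sum(p[key] for p in partial)
    return net

if __name__ == "__main__":
    # Toy data: XOR, split between two simulated workers.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    net = train_pattern_partitioned(init_mlp(2, 4, 1), X, T, workers=2)
    H = sigmoid(X @ net["W1"] + net["b1"])
    Y = sigmoid(H @ net["W2"] + net["b2"])
    print("final mean squared error:", float(np.mean((Y - T) ** 2)))

Because partial gradients over disjoint subsets sum to the full batch gradient, this scheme computes exactly what sequential batch back-propagation would, while the data exchanged per epoch amounts to one gradient packet per worker.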
format Conference object
author Alfonso, Marcelo
Kavka, Carlos
Printista, Alicia Marcela
title A low communication overhead parallel implementation of the back-propagation algorithm
publishDate 2000
url http://sedici.unlp.edu.ar/handle/10915/23442