Towards a Malleable Tensorflow Implementation


Bibliographic Details
Main Authors: Libutti, Leandro Ariel, Igual, Francisco, Piñuel, Luis, De Giusti, Laura Cristina, Naiouf, Marcelo, Rucci, Enzo, Chichizola, Franco
Format: Book chapter
Language: English
Published: Springer, 2020
Online Access: http://sedici.unlp.edu.ar/handle/10915/145222
Description
Summary: The TensorFlow framework was designed from its inception to provide multi-threaded execution, extended with hardware accelerator support to leverage the potential of modern architectures. The degree of parallelism in current versions of the framework can be selected at multiple levels (intra- and inter-op parallelism) on demand. However, this selection is fixed and cannot vary during the execution of training/inference sessions. This heavily restricts the flexibility and elasticity of the framework, especially in scenarios in which multiple TensorFlow instances co-exist on a parallel architecture. In this work, we propose the necessary modifications within TensorFlow to support dynamic selection of the number of threads, in order to provide transparent malleability to the infrastructure. Experimental results show that this approach is effective in varying the degree of parallelism, and paves the road towards future co-scheduling techniques for multi-TensorFlow scenarios.
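To illustrate the malleability idea the summary describes, the following is a minimal stdlib-only sketch (not the chapter's actual modifications to TensorFlow's internals): a worker pool whose number of threads can be grown or shrunk while tasks are in flight, analogous to dynamically resizing TensorFlow's intra-op thread pool mid-session. The class name `MalleablePool` and its methods are illustrative inventions, not TensorFlow API.

```python
# Minimal sketch of a resizable ("malleable") thread pool, assuming only
# the Python standard library. Illustrative only -- not the chapter's code.
import queue
import threading

class MalleablePool:
    def __init__(self, num_threads):
        self.tasks = queue.Queue()
        self.workers = []          # list of (thread, stop_event) pairs
        self.resize(num_threads)

    def _worker(self, stop_event):
        # Poll the task queue until this worker is told to retire.
        while not stop_event.is_set():
            try:
                fn = self.tasks.get(timeout=0.1)
            except queue.Empty:
                continue
            fn()
            self.tasks.task_done()

    def resize(self, n):
        # Grow: spawn additional workers. Shrink: signal surplus workers
        # to exit after their current task -- this can happen at any time,
        # which is the "malleability" the chapter argues TensorFlow lacks.
        while len(self.workers) < n:
            stop = threading.Event()
            t = threading.Thread(target=self._worker, args=(stop,), daemon=True)
            t.start()
            self.workers.append((t, stop))
        while len(self.workers) > n:
            _, stop = self.workers.pop()
            stop.set()

    def submit(self, fn):
        self.tasks.put(fn)

    def join(self):
        self.tasks.join()
```

By contrast, stock TensorFlow only exposes `tf.config.threading.set_intra_op_parallelism_threads` and `tf.config.threading.set_inter_op_parallelism_threads`, which must be called before any op executes and remain fixed for the session's lifetime; a caller of the sketch above could instead invoke `pool.resize(...)` between (or during) bursts of submitted work.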