Comparison of Vitis-AI and FINN for implementing convolutional neural networks on FPGA
Convolutional neural networks (CNNs) are essential for image classification and detection, and their implementation in embedded systems is becoming increasingly attractive due to their compact size and low power consumption. Field-Programmable Gate Arrays (FPGAs) have emerged as a promising option, thanks to their low latency and high energy efficiency. Vitis AI and FINN are two development environments that automate the implementation of CNNs on FPGAs. Vitis AI uses a deep learning processing unit (DPU) and memory accelerators, while FINN is based on a streaming architecture and fine-tunes parallelization. Both environments implement parameter quantization techniques to reduce memory usage. This work extends previous comparisons by evaluating both environments by implementing four models with different numbers of layers on the Xilinx Kria KV260 FPGA platform. The complete process from training to evaluation on FPGA, including quantization and hardware implementation, is described in detail. The results show that FINN provides lower latency, higher throughput, and better energy efficiency than Vitis AI. However, Vitis AI stands out for its simplicity in model training and ease of implementation on FPGA. The main finding of this study is that as the complexity of the models increases (with more layers in the neural networks), the differences in terms of performance and energy efficiency between FINN and Vitis AI are significantly reduced.
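The parameter quantization both toolchains rely on can be illustrated with a minimal symmetric per-tensor int8 sketch (NumPy only). This is a generic illustration of the technique, not the actual Vitis AI or FINN quantizer, which use their own calibration and quantization-aware training flows:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map float32 weights
    onto [-127, 127] with a single scale factor."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for accuracy evaluation."""
    return q.astype(np.float32) * scale

# float32 weights use 4 bytes each; int8 uses 1 byte (4x memory saving),
# which is the main motivation for quantization on memory-bound FPGAs.
w = np.random.default_rng(0).normal(size=(64, 3, 3, 3)).astype(np.float32)
q, s = quantize_int8(w)
assert q.nbytes * 4 == w.nbytes
```

The round-trip error per weight is bounded by half the scale step, which is why accuracy typically degrades only slightly after quantization (FINN additionally supports even narrower bit widths than int8 in its streaming pipelines).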
Saved in:

| Main Authors: | Urbano Pintos, Nicolás; Lacomi, Héctor; Lavorato, Mario |
|---|---|
| Format: | Article (publishedVersion) |
| Language: | Spanish |
| Published: | FIUBA, 2024 |
| Subjects: | FPGA; CNN; FINN; Vitis-AI; Quantization |
| Online Access: | https://elektron.fi.uba.ar/elektron/article/view/200 https://repositoriouba.sisbi.uba.ar/gsdl/cgi-bin/library.cgi?a=d&c=elektron&d=200_oai |
| Contributed by: | Universidad de Buenos Aires |
| id |
I28-R145-200_oai |
|---|---|
| record_format |
dspace |
| spelling |
Record I28-R145-200_oai (2026-02-11). Urbano Pintos, Nicolás; Lacomi, Héctor; Lavorato, Mario. Published 2024-12-15 by FIUBA. Formats: application/pdf, text/html. DOI: 10.37537/rev.elektron.8.2.200.2024. Language: spa. Full text: https://elektron.fi.uba.ar/elektron/article/view/200 (https://elektron.fi.uba.ar/elektron/article/view/200/357, https://elektron.fi.uba.ar/elektron/article/view/200/369). Copyright 2024 Nicolás Urbano Pintos. Elektron Journal; Vol. 8 No. 2 (2024); 61-70. ISSN 2525-0159. Keywords: FPGA; CNN; FINN; Vitis-AI; Quantization. Titles: "Comparison of Vitis-AI and FINN for implementing convolutional neural networks on FPGA" / "Comparación de Vitis-AI y FINN para implementar redes neuronales convolucionales en FPGA". info:eu-repo/semantics/article; info:eu-repo/semantics/publishedVersion. https://repositoriouba.sisbi.uba.ar/gsdl/cgi-bin/library.cgi?a=d&c=elektron&d=200_oai |
| institution |
Universidad de Buenos Aires |
| institution_str |
I-28 |
| repository_str |
R-145 |
| collection |
Repositorio Digital de la Universidad de Buenos Aires (UBA) |
| language |
Spanish |
| orig_language_str_mv |
spa |
| topic |
FPGA; CNN; FINN; Vitis-AI; Quantization |
| description |
Convolutional neural networks (CNNs) are essential for image classification and detection, and their implementation in embedded systems is becoming increasingly attractive due to their compact size and low power consumption. Field-Programmable Gate Arrays (FPGAs) have emerged as a promising option, thanks to their low latency and high energy efficiency. Vitis AI and FINN are two development environments that automate the implementation of CNNs on FPGAs. Vitis AI uses a deep learning processing unit (DPU) and memory accelerators, while FINN is based on a streaming architecture and fine-tunes parallelization. Both environments implement parameter quantization techniques to reduce memory usage. This work extends previous comparisons by evaluating both environments by implementing four models with different numbers of layers on the Xilinx Kria KV260 FPGA platform. The complete process from training to evaluation on FPGA, including quantization and hardware implementation, is described in detail. The results show that FINN provides lower latency, higher throughput, and better energy efficiency than Vitis AI. However, Vitis AI stands out for its simplicity in model training and ease of implementation on FPGA. The main finding of this study is that as the complexity of the models increases (with more layers in the neural networks), the differences in terms of performance and energy efficiency between FINN and Vitis AI are significantly reduced. |
| format |
Article (publishedVersion) |
| author |
Urbano Pintos, Nicolás; Lacomi, Héctor; Lavorato, Mario |
| author_sort |
Urbano Pintos, Nicolás |
| title |
Comparison of Vitis-AI and FINN for implementing convolutional neural networks on FPGA |
| publisher |
FIUBA |
| publishDate |
2024 |
| url |
https://elektron.fi.uba.ar/elektron/article/view/200 https://repositoriouba.sisbi.uba.ar/gsdl/cgi-bin/library.cgi?a=d&c=elektron&d=200_oai |