Performance of precision auto-tuned neural networks - Archive ouverte HAL
Conference paper, Year: 2023

Performance of precision auto-tuned neural networks

Abstract

While often used in embedded systems, neural networks can be costly in terms of memory and execution time. Reducing the precision used in neural networks can be beneficial for performance and energy consumption. By applying PROMISE, a floating-point auto-tuning tool, to various neural networks, we obtained versions that use lower precision while satisfying a required accuracy on the results. In this article, we present the memory and computation time gains obtained thanks to reduced precision, with both vectorized and non-vectorized code. We also show how parallelizing the Delta Debug algorithm implemented in PROMISE affects its execution time.
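
To illustrate the search principle mentioned in the abstract, below is a minimal Python sketch of a Delta-Debug-style precision search. It is not the PROMISE implementation: the run_inference function, the variable names and the 4-significant-digit requirement are hypothetical placeholders. The sketch looks for a small set of variables that must stay in double precision so that the result keeps the required accuracy; every other variable can then be lowered to single precision. The trials inside each round are independent of each other, which is the kind of structure that lends itself to the parallelization of Delta Debug discussed in the article.

# A minimal sketch of a Delta-Debug-style precision search, NOT the PROMISE
# implementation.  run_inference, the variable names and the 4-digit accuracy
# requirement are hypothetical placeholders.
import numpy as np

VARIABLES = ["weights1", "biases1", "weights2", "biases2"]
REQUIRED_DIGITS = 4  # hypothetical accuracy requirement (significant digits)

def run_inference(double_vars):
    """Hypothetical evaluation: variables in double_vars stay in float64,
    the others are demoted to float32.  The weights2 term goes through a
    cancellation, so its precision matters more than the others'."""
    def cast(name, v):
        return np.float64(v) if name in double_vars else np.float32(v)
    w1, b1 = cast("weights1", 0.1), cast("biases1", 0.2)
    w2, b2 = cast("weights2", 1.0 / 3.0), cast("biases2", 0.4)
    x = np.float64(1.7)
    # (w2 * 3 - 1) is tiny, so any rounding error in w2 is amplified by 1e6
    return float(w1 * x + b1 + (w2 * np.float64(3.0) - np.float64(1.0)) * 1e6 + b2)

REFERENCE = run_inference(set(VARIABLES))  # all-double reference result

def accurate_enough(double_vars):
    """True if the mixed-precision run matches the reference to REQUIRED_DIGITS."""
    result = run_inference(set(double_vars))
    if result == REFERENCE:
        return True
    return abs(result - REFERENCE) / abs(REFERENCE) < 10.0 ** (-REQUIRED_DIGITS)

def ddmin(test, items, n=2):
    """Classic ddmin: shrink items to a small subset that still passes test.
    The subset and complement trials inside one round do not depend on each
    other, so they can be evaluated in parallel."""
    if test([]):               # nothing needs to stay in double precision
        return []
    items = list(items)
    while len(items) >= 2:
        chunk = max(1, len(items) // n)
        subsets = [items[i:i + chunk] for i in range(0, len(items), chunk)]
        reduced = False
        for s in subsets:                        # try a single subset
            if test(s):
                items, n, reduced = s, 2, True
                break
        if not reduced:
            for s in subsets:                    # try a complement
                comp = [x for x in items if x not in s]
                if comp and test(comp):
                    items, n, reduced = comp, max(n - 1, 2), True
                    break
        if not reduced:
            if n >= len(items):                  # finest granularity reached
                break
            n = min(len(items), 2 * n)           # refine the split
    return items

must_stay_double = ddmin(accurate_enough, VARIABLES)
print("keep in double precision:", must_stay_double)
print("can be lowered to single:", [v for v in VARIABLES if v not in must_stay_double])

In this toy example only the variable involved in the cancellation has to remain in double precision; the others can be demoted without violating the 4-digit requirement.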
Main file
POAT_2023.pdf (251.96 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04149501, version 1 (03-07-2023)

Identifiers

  • HAL Id: hal-04149501, version 1

Cite

Quentin Ferro, Stef Graillat, Thibault Hilaire, Fabienne Jézéquel. Performance of precision auto-tuned neural networks. MCSoC 2023 (16th IEEE International Symposium on Embedded Multicore/Manycore Systems-on-Chip), special session POAT (Performance Optimization and Auto-Tuning of Software on Multicore/Manycore Systems), Dec 2023, Singapore, Singapore. ⟨hal-04149501⟩
