High-Efficiency Convolutional Ternary Neural Networks with Custom Adder Trees and Weight Compression - Archive ouverte HAL
Journal article in ACM Transactions on Reconfigurable Technology and Systems (TRETS), Year: 2018

High-Efficiency Convolutional Ternary Neural Networks with Custom Adder Trees and Weight Compression

Abstract

Although performing inference with artificial neural networks (ANNs) was until quite recently considered essentially compute-intensive, the emergence of deep neural networks, coupled with the evolution of integration technology, has turned inference into a memory-bound problem. Building on this observation, many recent works have focused on minimizing memory accesses, either by enforcing and exploiting sparsity in the weights or by using few bits to represent activations and weights, so that ANN inference can be performed on embedded devices. In this work, we detail an architecture dedicated to inference using ternary {−1, 0, 1} weights and activations. This architecture is configurable at design time to provide throughput vs. power trade-offs to choose from. It is also generic in the sense that it uses information drawn from the target technologies (memory geometries and cost, number of available cuts, etc.) to adapt as well as possible to the FPGA resources. This allows achieving up to 5.2k fps per Watt for classification on a VC709 board using approximately half of the resources of the FPGA.

Keywords: Ternary CNN, low power inference, hardware acceleration, FPGA
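As an illustration of why ternary arithmetic maps well to adder-based hardware (a minimal Python sketch, not the authors' custom adder-tree architecture; the helper name ternary_dot is hypothetical), the dot product below restricts weights and activations to {−1, 0, 1}: every multiplication degenerates into an addition, a subtraction, or a skipped term, so no multipliers are needed and zero weights contribute nothing.

def ternary_dot(activations, weights):
    # Illustrative sketch only: with weights in {-1, 0, 1}, a dot product
    # reduces to conditional add/subtract, which is why adder trees suffice.
    acc = 0
    for a, w in zip(activations, weights):
        if w == 1:
            acc += a      # weight +1: add the activation
        elif w == -1:
            acc -= a      # weight -1: subtract the activation
        # weight 0: term vanishes, nothing to compute or fetch
    return acc

# Example with activations that are themselves ternary:
print(ternary_dot([1, -1, 0, 1], [-1, 1, 1, 0]))  # prints -2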
Main file: trets_nocopyright.pdf (1.52 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-01686718 , version 1 (24-01-2018)
hal-01686718 , version 2 (07-01-2019)

License

CC0 - Public Domain Dedication

Identifiers

hal-01686718
DOI: 10.1145/3294768

Cite

Adrien Prost-Boucle, Alban Bourge, Frédéric Pétrot. High-Efficiency Convolutional Ternary Neural Networks with Custom Adder Trees and Weight Compression. ACM Transactions on Reconfigurable Technology and Systems (TRETS), 2018, Special Issue on Deep learning on FPGAs, 11 (3), pp.1-24. ⟨10.1145/3294768⟩. ⟨hal-01686718v2⟩

Collections

UGA CNRS TIMA
534 views
1533 downloads
