
Low-Complexity Approximate Convolutional Neural Networks

Abstract: In this paper, we present an approach for minimizing the computational complexity of trained Convolutional Neural Networks (ConvNets). The idea is to approximate all elements of a given ConvNet and replace the original convolutional filters and parameters (pooling and bias coefficients, and activation functions) with efficient approximations capable of extreme reductions in computational complexity. Low-complexity convolution filters are obtained through a binary (zero-one) linear programming scheme based on the Frobenius norm over sets of dyadic rationals. The resulting matrices allow for multiplication-free computations requiring only addition and bit-shifting operations. Such low-complexity structures pave the way for low-power, efficient hardware designs. We applied our approach to three use cases of different complexity: (i) a “light” but efficient ConvNet for face detection (with around 1 000 parameters); (ii) another one for hand-written digit classification (with more than 180 000 parameters); and (iii) a significantly larger ConvNet: AlexNet with ≈1.2 million matrices. We evaluated the overall performance on the respective tasks for different levels of approximation. In all considered applications, we derived very low-complexity approximations while maintaining nearly equal classification performance.
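To make the multiplication-free idea concrete, here is a minimal Python sketch (not the authors' code; the function names, the fixed shift n_bits, and the rounding rule are assumptions made for illustration). Each weight is approximated by a dyadic rational k / 2^n, so a dot product needs only integer additions and bit-shifts. Note that the paper obtains its approximations via binary linear programming under the Frobenius norm; simple nearest-dyadic rounding stands in for that step here.

import numpy as np

def to_dyadic(weights, n_bits=3):
    """Round real-valued weights to dyadic rationals k / 2**n_bits.
    Returns the integer numerators k; the common denominator is 2**n_bits.
    (Illustrative stand-in for the paper's binary linear programming step.)"""
    scale = 1 << n_bits                      # 2**n_bits
    return np.round(np.asarray(weights) * scale).astype(int)

def shift_add_dot(x, numerators, n_bits=3):
    """Multiplication-free dot product of integer inputs x with a
    dyadic-rational filter: each term k*x is formed by summing the
    bit-shifted copies of x selected by the binary expansion of |k|."""
    acc = 0
    for xi, k in zip(x, numerators):
        term, mag, bit = 0, abs(int(k)), 0
        while mag:
            if mag & 1:                      # this power of two is present in k
                term += xi << bit            # add x * 2**bit (a shift, no multiply)
            mag >>= 1
            bit += 1
        acc += -term if k < 0 else term
    return acc >> n_bits                     # divide once by 2**n_bits (a shift)

# Example: approximate a small filter, then apply it without any multiplications.
w = [0.37, -0.12, 0.5]
k = to_dyadic(w, n_bits=3)                   # -> [3, -1, 4], i.e. 3/8, -1/8, 4/8
print(k, shift_add_dot([10, 20, 30], k, n_bits=3))   # 16 ≈ 10*0.375 - 20*0.125 + 30*0.5

The same scheme extends element-wise to a 2-D convolution kernel: every multiply-accumulate in the sliding window becomes a handful of shifts and adds, which is the property the paper exploits for low-power hardware.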

https://hal.archives-ouvertes.fr/hal-01727219
Contributor: Christophe Garcia
Submitted on: Friday, March 9, 2018 - 12:48:56 AM
Last modification on: Thursday, November 21, 2019 - 2:05:40 AM

Identifiers

HAL Id: hal-01727219
DOI: 10.1109/TNNLS.2018.2815435

Citation

Renato Cintra, Stefan Duffner, Christophe Garcia, André Leite. Low-Complexity Approximate Convolutional Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 2018. ⟨10.1109/TNNLS.2018.2815435⟩. ⟨hal-01727219⟩
