Confronting machine learning with neuroscience for the design of neuromorphic architectures
Abstract
Artificial neural networks are experiencing unprecedented interest today thanks to two main changes: the explosion of the open data needed for their training, and the increasing computing power of modern computers, which makes training feasible in a reasonable time. The recent results of deep neural networks on image classification have given neural networks the leading role in machine learning algorithms and artificial intelligence research.
However, most applications, such as smart devices or autonomous vehicles, require an embedded implementation of neural networks. Implementing them on CPUs/GPUs remains too expensive, mostly in terms of energy consumption, because the hardware is not suited to the computation model, and this cost limits their use. It is therefore necessary to design neuromorphic architectures, i.e. hardware accelerators tailored to the parallel and distributed computation paradigm of neural networks, in order to reduce the hardware cost of their implementation. We focus mainly on optimizing energy consumption to enable integration in embedded systems.
For this purpose, we implement two models of artificial neural networks from two different scientific domains: the multi-layer perceptron, derived from machine learning, and the spiking neural network, inspired by neuroscience. We compare the performance of both approaches in terms of accuracy and hardware cost to identify the most attractive architecture for the design of embedded artificial intelligence.
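To make the contrast between the two computation paradigms concrete, the following minimal NumPy sketch (illustrative only, not the implementation developed in this work; all sizes, parameters, and function names are placeholders) shows a frame-based MLP layer, which performs a dense multiply-accumulate for every neuron, next to an event-driven leaky integrate-and-fire update, where synapses contribute only when an input spike occurs:

import numpy as np

rng = np.random.default_rng(0)

# Frame-based MLP layer: every neuron performs a dense multiply-accumulate.
def mlp_layer(x, W, b):
    return np.maximum(W @ x + b, 0.0)  # weighted sum followed by ReLU

# Event-driven LIF neuron update: synapses contribute only on input spikes.
def lif_step(v, spikes_in, W, leak=0.9, threshold=1.0):
    v = leak * v + W @ spikes_in                # leaky integration of spike input
    spikes_out = (v >= threshold).astype(float)
    v = np.where(spikes_out > 0.0, 0.0, v)      # reset membranes that fired
    return v, spikes_out

# Toy comparison on random data (16 inputs, 8 neurons); values are arbitrary.
x = rng.random(16)
W = rng.normal(size=(8, 16))
print("MLP activations:", mlp_layer(x, W, np.zeros(8)))

v = np.zeros(8)
for _ in range(5):
    spikes_in = (rng.random(16) < x).astype(float)  # rate-coded input spikes
    v, spikes_out = lif_step(v, spikes_in, W)
print("SNN output spikes:", spikes_out)

The event-driven character of the spiking model is one reason it is considered a candidate for low-power hardware: when no input spikes arrive, the dominant multiply-accumulate work simply does not occur.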
Keywords
power consumption
hardware accelerator
embedded systems
neuromorphic architectures
artificial neural networks
machine learning
graphics processing units
multilayer perceptrons
spiking neural networks
deep neural networks
embedded artificial intelligence
power-aware computing
computer architecture
biological neural networks