Decoupled Greedy Learning of CNNs

Abstract : A commonly cited inefficiency of neural network training by back-propagation is the update locking problem: each layer must wait for the signal to propagate through the full network before updating. Several alternatives that can alleviate this issue have been proposed. In this context, we consider a simpler, but more effective, substitute that uses minimal feedback, which we call Decoupled Greedy Learning (DGL). It is based on a greedy relaxation of the joint training objective, recently shown to be effective in the context of Convolutional Neural Networks (CNNs) on large-scale image classification. We consider an optimization of this objective that permits us to decouple the layer training, allowing layers or modules of the network to be trained in parallel, with potentially linear parallelization across layers. Using a replay buffer, we show this approach can be extended to asynchronous settings, where modules can operate with possibly large communication delays. We show theoretically and empirically that this approach converges, and we empirically find that it can lead to better generalization than sequential greedy optimization. We demonstrate the effectiveness of DGL against alternative approaches on the CIFAR-10 dataset and on the large-scale ImageNet dataset.
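
To make the idea concrete, here is a minimal sketch of synchronous decoupled greedy training in PyTorch. The names (GreedyBlock, local_step, train_epoch) and the block architecture are illustrative assumptions, not the authors' implementation: each block owns its own auxiliary classifier and optimizer, and its input is detached so that no gradient crosses block boundaries.

    # Hypothetical sketch of decoupled greedy training (not the authors' code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GreedyBlock(nn.Module):
        """A conv block paired with its own auxiliary classifier and optimizer."""
        def __init__(self, in_ch, out_ch, num_classes, lr=1e-3):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            # Auxiliary head: provides the local (greedy) training signal.
            self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(out_ch, num_classes))
            self.opt = torch.optim.SGD(self.parameters(), lr=lr, momentum=0.9)

        def local_step(self, x, y):
            # x is detached from the previous block, so no gradient flows
            # across block boundaries: each block updates independently.
            h = self.body(x)
            loss = F.cross_entropy(self.head(h), y)
            self.opt.zero_grad()
            loss.backward()
            self.opt.step()
            return h.detach(), loss.item()

    # Synchronous decoupled greedy training loop over a stack of blocks.
    def train_epoch(blocks, loader, device="cpu"):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            for block in blocks:
                # Each block could run on its own worker; here we iterate serially.
                x, _ = block.local_step(x, y)

Because each block's backward pass never waits on the blocks above it, the per-block updates can be pipelined across devices. In the asynchronous variant described in the abstract, each block would instead read (activation, label) pairs from a replay buffer filled by the block below it, tolerating communication delays.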
Document type : Conference papers

https://hal.archives-ouvertes.fr/hal-02945327
Contributor : Edouard Oyallon
Submitted on : Tuesday, September 22, 2020 - 11:07:54 AM
Last modification on : Wednesday, October 14, 2020 - 4:16:38 AM

Identifiers

  • HAL Id : hal-02945327, version 1
  • ARXIV : 1901.08164

Citation

Eugene Belilovsky, Michael Eickenberg, Edouard Oyallon. Decoupled Greedy Learning of CNNs. International Conference on Machine Learning, Jul 2020, Vienna (virtual), Austria. pp.5368-5377. ⟨hal-02945327⟩
