Capacity-Resolution Trade-Off in the Optimal Learning of Multiple Low-Dimensional Manifolds by Attractor Neural Networks - Archive ouverte HAL
Journal article, Physical Review Letters, 2020


Abstract

Recurrent neural networks (RNNs) are powerful tools to explain how attractors may emerge from noisy, high-dimensional dynamics. We study here how to learn the ~N^(2) pairwise interactions in an RNN with N neurons so as to embed L manifolds of dimension D << N. We show that the capacity, i.e., the maximal ratio L/N, decreases as |log(epsilon)|^(-D), where epsilon is the error on the position encoded by the neural activity along each manifold. Hence, RNNs are flexible memory devices capable of storing a large number of manifolds at high spatial resolution. Our results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner's classical theory of learning to the case of patterns with strong spatial correlations.
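The capacity-resolution trade-off stated in the abstract can be illustrated numerically. The sketch below plugs values into the scaling law L/N ~ |log(epsilon)|^(-D); the prefactor `c` is an assumption for illustration only (the actual constant depends on the model and is not given here).

```python
import math

def capacity_scaling(epsilon, D, c=1.0):
    """Capacity per neuron, alpha = L/N, up to an assumed prefactor c.

    epsilon: spatial resolution error along each stored manifold
    D: dimension of each manifold (D << N)
    """
    return c * abs(math.log(epsilon)) ** (-D)

# Demanding higher resolution (smaller epsilon) lowers the capacity,
# and the drop is only logarithmic in epsilon but steeper for larger D.
for D in (1, 2, 3):
    alphas = [capacity_scaling(eps, D) for eps in (1e-1, 1e-2, 1e-3)]
    print(f"D={D}:", [round(a, 4) for a in alphas])
```

Because the dependence on epsilon is logarithmic, halving the resolution error costs relatively little capacity, which is why the network can store many manifolds at high spatial resolution.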
Main file: main.pdf (592.92 KB)
Supplemental file: supplemental.pdf (5.06 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02314069, version 1 (2019-10-11)
hal-02314069, version 2 (2020-01-08)

Identifiers

Cite

Aldo Battista, Rémi Monasson. Capacity-Resolution Trade-Off in the Optimal Learning of Multiple Low-Dimensional Manifolds by Attractor Neural Networks. Physical Review Letters, 2020, 124 (4), ⟨10.1103/PhysRevLett.124.048302⟩. ⟨hal-02314069v2⟩
61 views
75 downloads

