Journal article, Connection Science, Year: 2004

Self-refreshing memory in artificial neural networks: Learning temporal sequences without catastrophic forgetting

Abstract

While humans forget gradually, highly distributed connectionist networks forget catastrophically: newly learned information often completely erases previously learned information. This is not just implausible cognitively, but disastrous practically. However, it is not easy in connectionist cognitive modelling to avoid highly distributed neural networks, if only because of their ability to generalize. A realistic and effective system that solves the problem of catastrophic interference in the sequential learning of 'static' (i.e. non-temporally ordered) patterns has been proposed recently (Robins 1995, Connection Science, 7: 123–146; 1996, Connection Science, 8: 259–275; Ans and Rousset 1997, CR Académie des Sciences Paris, Life Sciences, 320: 989–997; French 1997, Connection Science, 9: 353–379; 1999, Trends in Cognitive Sciences, 3: 128–135; Ans and Rousset 2000, Connection Science, 12: 1–19). The basic principle is to learn new external patterns interleaved with internally generated 'pseudopatterns' (generated from random activation) that reflect the previously learned information. However, to be credible, this self-refreshing mechanism for static learning must also encompass our human ability to learn serially many temporal sequences of patterns without catastrophic forgetting. Temporal sequence learning is arguably more important than static pattern learning in the real world. In this paper, we develop a dual-network architecture in which self-generated pseudopatterns reflect (non-temporally) all the sequences of temporally ordered items previously learned. Using these pseudopatterns, several self-refreshing mechanisms that eliminate catastrophic forgetting in sequence learning are described, and their efficiency is demonstrated through simulations. Finally, an experiment is presented that shows a close similarity between human and simulated behaviour.
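
The sketch below illustrates the self-refreshing principle for the simpler static-pattern case only (it is not the authors' dual-network architecture or code, and does not handle temporal sequences). After a small network has learned one set of associations, pseudopatterns are created by feeding random inputs to the trained network and recording its own outputs; a second set of associations is then learned interleaved with these pseudopatterns rather than on its own. All network sizes, learning rates, task definitions and helper names (MLP, make_pseudopatterns) are illustrative assumptions.

```python
# Minimal pseudorehearsal sketch in plain NumPy (illustrative, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Tiny one-hidden-layer network trained with plain backpropagation."""
    def __init__(self, n_in, n_hid, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hid))
        self.b1 = np.zeros(n_hid)
        self.W2 = rng.normal(0, 0.5, (n_hid, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1 + self.b1)
        self.y = sigmoid(self.h @ self.W2 + self.b2)
        return self.y

    def train_step(self, x, t):
        y = self.forward(x)
        dy = (y - t) * y * (1 - y)                      # output delta (squared error)
        dh = (dy @ self.W2.T) * self.h * (1 - self.h)   # hidden delta
        self.W2 -= self.lr * np.outer(self.h, dy)
        self.b2 -= self.lr * dy
        self.W1 -= self.lr * np.outer(x, dh)
        self.b1 -= self.lr * dh

def make_pseudopatterns(net, n_patterns, n_in):
    """Random binary inputs paired with the trained network's own responses."""
    xs = rng.integers(0, 2, (n_patterns, n_in)).astype(float)
    ys = np.array([net.forward(x) for x in xs])
    return list(zip(xs, ys))

# Two illustrative "tasks": random binary input -> target associations.
n_in, n_out = 8, 4
task_a = [(rng.integers(0, 2, n_in).astype(float),
           rng.integers(0, 2, n_out).astype(float)) for _ in range(6)]
task_b = [(rng.integers(0, 2, n_in).astype(float),
           rng.integers(0, 2, n_out).astype(float)) for _ in range(6)]

net = MLP(n_in, 16, n_out)

# 1) Learn task A normally.
for _ in range(2000):
    for x, t in task_a:
        net.train_step(x, t)

# 2) Capture task A's knowledge as pseudopatterns before it can be overwritten.
pseudo = make_pseudopatterns(net, 32, n_in)

# 3) Learn task B interleaved with the pseudopatterns (self-refreshing),
#    instead of on task B alone, which would tend to erase task A.
for _ in range(2000):
    for x, t in task_b:
        net.train_step(x, t)
    for x, t in pseudo:
        net.train_step(x, t)

err_a = np.mean([(net.forward(x) - t) ** 2 for x, t in task_a])
err_b = np.mean([(net.forward(x) - t) ** 2 for x, t in task_b])
print(f"error on old task A: {err_a:.3f}   error on new task B: {err_b:.3f}")
```

In this toy setting, dropping step 3's pseudopattern loop typically leaves task A's error much higher after learning task B, which is the catastrophic-forgetting effect the interleaving is meant to prevent.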

Dates and versions

hal-00170922, version 1 (10-09-2007)

Identifiers

Cite

Bernard Ans, Stéphane Rousset, Robert French, Serban Musca. Self-refreshing memory in artificial neural networks: Learning temporal sequences without catastrophic forgetting. Connection Science, 2004, 16 (2), pp.71-99. ⟨10.1080/09540090412331271199⟩. ⟨hal-00170922⟩