Linear Recursive Distributed Representations - Archive ouverte HAL
Journal article in Neural Networks, 2005

Linear Recursive Distributed Representations

Thomas Voegtlin
Peter Ford Dominey

Abstract

Connectionist networks have been criticized for their inability to represent complex structures with systematicity. That is, while they can be trained to represent and manipulate complex objects made of several constituents, they generally fail to generalize to novel combinations of the same constituents. This paper presents a modification of Pollack's Recursive Auto-Associative Memory (RAAM) that addresses this criticism. The network uses linear units and is trained with Oja's rule, thereby generalizing PCA to tree-structured data. Learned representations may be linearly combined in order to represent new complex structures. This results in unprecedented generalization capabilities. Capacity is orders of magnitude higher than that of a RAAM trained with back-propagation. Moreover, regularities of the training set are preserved in the newly formed objects. The formation of new structures displays developmental effects similar to those observed in children when learning to generalize about the argument structure of verbs.
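The abstract's mention of Oja's rule can be made concrete with a minimal sketch of the standard subspace form of that rule applied to flat vector data. This is not the paper's recursive training procedure for tree-structured data; the function name, learning rate, and epoch count below are illustrative assumptions.

    import numpy as np

    def oja_subspace(X, k, lr=0.01, epochs=50, seed=0):
        """Oja's subspace rule on flat data: learn W (dim x k) whose columns
        span the top-k principal subspace of the rows of X."""
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=0.1, size=(X.shape[1], k))  # random initial encoding weights
        for _ in range(epochs):
            for x in X:
                y = W.T @ x                           # compressed code
                W += lr * np.outer(x - W @ y, y)      # Hebbian growth term minus Oja's decay
        return W

    # Usage sketch: recover a 2-D principal subspace of 10-dimensional data.
    rng = np.random.default_rng(1)
    basis = rng.normal(size=(10, 2))
    X = rng.normal(size=(500, 2)) @ basis.T           # samples lying near a 2-D subspace
    W = oja_subspace(X, k=2)

In the paper's setting, a linear RAAM applies this kind of Hebbian compression recursively, so that the code for a tree is built from the codes of its subtrees; the sketch above only shows the flat, non-recursive case.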
Main file: dns4.pdf (243.19 KB)

Dates and versions

inria-00000108, version 1 (13-06-2005)

Identifiers

  • HAL Id: inria-00000108, version 1

Cite

Thomas Voegtlin, Peter Ford Dominey. Linear Recursive Distributed Representations. Neural Networks, 2005, 18 (7), pp.878-895. ⟨inria-00000108⟩
