Visuo-Motor Control Using Body Representation of a Robotic Arm with Gated Auto-Encoders
Abstract
We present an auto-encoder version of gated networks for learning visuomotor transformations for reaching targets and for representing the location of the robot arm. Gated networks use multiplicative neurons to bind correlated images to each other and to learn their relative changes. Using the encoder network, motor neurons categorize the visual displacements of the robot arm induced by applying their corresponding motor commands. Using the decoder network, it is possible to infer back the visual motion and location of the robot arm from the activity of the motor units, i.e. a body image. Using both networks at the same time, nearby targets can simulate a fictitious visual displacement of the robot arm and induce the activation of the most probable motor command for tracking it. Results show the effectiveness of our approach for 2-DOF and 3-DOF robot arms. We then discuss the network and its use for reaching tasks and body representation, future work, and its relevance for modeling the so-called gain-field neurons in the parieto-motor cortices for learning visuomotor transformations.
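The encode/decode structure described above can be illustrated with a minimal sketch of a factored gated auto-encoder in NumPy. All dimensions, weight names, and the random weights are hypothetical choices for illustration (the paper's actual architecture and training procedure are not reproduced here); the point is the multiplicative binding of the two images and the symmetric use of the encoder and decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: D-pixel images x and y, F factor units,
# M mapping (motor) units.
D, F, M = 16, 8, 4

# Untrained, randomly initialized factored weights (for illustration only).
Wx = rng.normal(scale=0.1, size=(F, D))   # factor weights for image x
Wy = rng.normal(scale=0.1, size=(F, D))   # factor weights for image y
Wm = rng.normal(scale=0.1, size=(M, F))   # mapping-unit weights

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def encode(x, y):
    # Multiplicative binding: the factor responses to the two images are
    # multiplied element-wise, so the mapping units respond to the
    # correlation (i.e. the transformation) relating x and y.
    return sigmoid(Wm @ ((Wx @ x) * (Wy @ y)))

def decode(m, x):
    # Decoder: given mapping activity m and the pre-movement image x,
    # reconstruct the post-movement image y by applying the encoded
    # transformation to x.
    return Wy.T @ ((Wx @ x) * (Wm.T @ m))

x = rng.normal(size=D)       # arm image before the motor command
y = rng.normal(size=D)       # arm image after the motor command
m = encode(x, y)             # mapping/motor code for the displacement
y_hat = decode(m, x)         # inferred post-movement image
print(m.shape, y_hat.shape)  # (4,) (16,)
```

In the reaching scenario described in the abstract, the same machinery runs both ways: encoding a (real or target-induced fictitious) displacement yields motor-unit activity, and decoding that activity predicts the arm's visual motion and location.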
Domains
Robotics [cs.RO]
Origin: Files produced by the author(s)