Journal article in Robotics and Autonomous Systems, 2003

A meta-learning approach to anchor visual percepts

Nicolas Bredèche
Yann Chevaleyre
Jean-Daniel Zucker
Alexis Drogoul
Gérard Sabah

Abstract

There is growing interest in both the robotics and AI communities in giving autonomous robots the ability to interact with humans. Efficiently identifying properties of the environment (be it the presence of a human, a fire extinguisher, or another robot of the same kind) is critical for supporting meaningful robot/human dialogues; it is a particular anchoring task. Our goal is to endow autonomous mobile robots (in our experiments, a Pioneer 2DX) with a perceptual system that can efficiently adapt itself to the context, so as to enable the learning task required to physically ground symbols. Indeed, Machine Learning based approaches to grounding symbols rely heavily on ad hoc perceptual representations provided by AI designers. Our approach follows the line of meta-learning algorithms, which iteratively change the representation so as to discover one that is well fitted to the task. The architecture we propose is based on a widely used approach in constructive induction: the wrapper model. Experiments using the PLIC system to have a robot identify the presence of humans and fire extinguishers show the interest of such an approach, which dynamically abstracts a well-fitted image description depending on the concept to learn.

Dates and versions

hal-01176922 , version 1 (16-07-2015)

Identifiers

Cite

Nicolas Bredèche, Yann Chevaleyre, Jean-Daniel Zucker, Alexis Drogoul, Gérard Sabah. A meta-learning approach to anchor visual percepts. Robotics and Autonomous Systems, 2003, 43 (2-3), pp.149-162. ⟨10.1016/S0921-8890(02)00356-1⟩. ⟨hal-01176922⟩
