Distributed Value Functions for Multi-Robot Exploration: a Position Paper - Archive ouverte HAL
Conference Paper, Year: 2012

Distributed Value Functions for Multi-Robot Exploration: a Position Paper

Abstract

This paper addresses the problem of exploring an unknown area with a team of autonomous robots using decentralized decision-making techniques. The localization aspect is not considered: it is assumed that the robots share their positions and have access to a map updated with all explored areas. A key problem is then the coordination of decentralized decision processes: each individual robot must choose appropriate exploration goals so that the team simultaneously explores different locations of the environment. We formalize this problem as a Decentralized Markov Decision Process (Dec-MDP) solved as a set of individual MDPs, where interactions between MDPs are captured by a distributed value function. Thus, each robot locally computes a strategy that minimizes the interactions between the robots and maximizes the space coverage of the team. Our technique has been implemented and evaluated in real-world and simulated experiments.
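To make the idea concrete, the following is a minimal sketch of a distributed value iteration of this kind, not the authors' implementation: each robot runs a standard Bellman backup on the shared exploration MDP, but the values of successor states are reduced by a weighted sum of the value functions broadcast by the other robots. The tabular MDP, the transition tensor P, the reward vector R (e.g., expected information gain of each cell), and the constant interaction weight f_ij are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def dvf_value_iteration(P, R, other_values, f_ij=0.5, gamma=0.95, n_iter=100):
    """Sketch of a distributed value function (DVF) backup for one robot.

    P: transition tensor of shape (n_actions, n_states, n_states) -- assumed known
    R: reward vector of shape (n_states,), e.g. expected information gain
    other_values: list of value vectors V_j received from the other robots
    f_ij: weight of the interaction with the other robots (assumed constant)
    Returns the corrected value function and the greedy policy.
    """
    n_actions, n_states, _ = P.shape
    # Penalty term: discount states that the other robots already value highly.
    penalty = f_ij * sum(other_values) if other_values else np.zeros(n_states)
    V = np.zeros(n_states)
    for _ in range(n_iter):
        # Bellman backup with the distributed correction applied to successor values.
        Q = R[None, :] + gamma * (P @ (V - penalty))   # shape (n_actions, n_states)
        V = Q.max(axis=0)
    return V, Q.argmax(axis=0)
```

In use, each robot would rerun this backup whenever it receives updated value functions from its teammates and then head toward the reachable state with the highest corrected value; states already well covered by another robot are discounted, which spreads the team across the environment.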
Main file

acti-matignon-2012-3.pdf (2.11 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-00966784, version 1 (27-03-2014)

Identifiers

  • HAL Id: hal-00966784, version 1

Cite

Laëtitia Matignon, Laurent Jeanpierre, Abdel-Illah Mouaddib. Distributed Value Functions for Multi-Robot Exploration: a Position Paper. Multi-Agent Sequential Decision Making in Uncertain Multi-Agent Domain (MSDM) (workshop of AAMAS), 2012, France. ⟨hal-00966784⟩