Distributed Value Functions for the Coordination of Decentralized Decision Makers
Abstract
In this paper, we propose an approach based on an interaction-oriented resolution of decentralized Markov decision processes (Dec-MDPs), primarily motivated by a real-world application in which decentralized decision makers explore and map an unknown environment. This interaction-oriented resolution relies on distributed value function (DVF) techniques that decouple the multi-agent problem into a set of individual agent problems and treat possible interactions among agents as a separate layer. This significantly reduces computational complexity by solving Dec-MDPs as a collection of MDPs. Applying this model to multi-robot exploration scenarios, we show that each robot locally computes a strategy that minimizes interactions between the robots and maximizes the team's space coverage. Our technique has been implemented and evaluated in simulation and in real-world scenarios during a robotic challenge on the exploration and mapping of an unknown environment by mobile robots. We report experimental results from real-world scenarios and from the challenge, in which our system placed second.
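To make the decoupling idea concrete, here is a minimal sketch of a DVF-style computation on a toy one-dimensional corridor: each agent solves its own MDP, but the value of a successor state is reduced when another agent is likely to cover it. The corridor layout, the rewards, the interaction weight `F_IJ`, and the other agent's occupancy distribution `P_OTHER` are all assumptions made for the sake of a runnable example, not the paper's exact model.

```python
# Toy sketch of distributed value functions (DVF): two agents explore a
# 1-D corridor with a frontier at each end; agent i discounts states the
# other agent j is expected to cover. All constants below are assumed
# for illustration only.
import numpy as np

N_STATES = 5            # cells of a small 1-D corridor to explore
GAMMA = 0.9             # discount factor
F_IJ = 1.0              # interaction weight between agents i and j (assumed)
ACTIONS = (-1, 0, +1)   # move left, stay, move right (deterministic moves)
REWARD = np.array([1.0, 0.0, 0.0, 0.0, 1.0])   # unexplored frontiers at both ends
P_OTHER = np.array([0.0, 0.0, 0.0, 0.0, 1.0])  # agent j covers the right end

def step(s, a):
    """Deterministic transition, clipped to the corridor bounds."""
    return min(max(s + a, 0), N_STATES - 1)

def solve_dvf(reward, v_other, p_other, iters=200):
    """Value iteration for one agent. Subtracting the other agent's
    occupancy-weighted value from each successor state is what decouples
    the joint problem into individual MDPs."""
    v = np.zeros(N_STATES)
    for _ in range(iters):
        v = np.array([
            max(reward[step(s, a)]
                + GAMMA * (v[step(s, a)]
                           - F_IJ * p_other[step(s, a)] * v_other[step(s, a)])
                for a in ACTIONS)
            for s in range(N_STATES)
        ])
    return v

def greedy_action(s, v, v_other, p_other):
    """Pick the action maximizing the DVF backup from state s."""
    return max(ACTIONS,
               key=lambda a: REWARD[step(s, a)]
               + GAMMA * (v[step(s, a)]
                          - F_IJ * p_other[step(s, a)] * v_other[step(s, a)]))

# Agent j plans alone (no interaction term); agent i then plans around j.
v_j = solve_dvf(REWARD, np.zeros(N_STATES), np.zeros(N_STATES))
v_i = solve_dvf(REWARD, v_j, P_OTHER)

print(greedy_action(2, v_i, v_j, P_OTHER))  # → -1: agent i heads left, away from j
```

Starting from the middle of the corridor, agent i's greedy policy heads left toward the frontier agent j is not expected to reach, which is the coverage-maximizing, interaction-minimizing behavior the abstract describes.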
Main file: DistributedValueFunctionsForTheCoordinationOfDecentralizedDecisionMakers_AAMAS2012.pdf (999.2 KB)