Coordinated Multi-Robot Exploration Under Communication Constraints Using Decentralized Markov Decision Processes
Abstract
Recent works on multi-agent sequential decision making using decentralized partially observable Markov decision processes have focused on interaction-oriented resolution techniques and provide promising results. These techniques take advantage of local interactions and coordination. In this paper, we propose an approach based on an interaction-oriented resolution of decentralized decision makers. To this end, distributed value functions (DVF) have been used to decouple the multi-agent problem into a set of individual agent problems. However, existing DVF techniques assume permanent and free communication between the agents. In this paper, we extend the DVF methodology to address full local observability, limited information sharing, and communication breaks. We apply our new DVF to a real-world multi-robot exploration application in which each robot locally computes a strategy that minimizes the interactions between the robots and maximizes the team's space coverage, even under communication constraints. Our technique has been implemented and evaluated in simulation and in real-world scenarios during a robotic challenge for the exploration and mapping of an unknown environment. Experimental results from real-world scenarios and from the challenge, in which our system was vice-champion, are reported.
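The decoupling idea behind DVF can be sketched as a value backup in which each agent subtracts its teammates' value estimates from its own lookahead, so that states already attractive to others are discounted. The snippet below is a simplified illustration only: the function name, the coupling weight `f`, and the toy three-state corridor are our own assumptions, not the paper's exact model or notation.

```python
import numpy as np

def dvf_backup(P, R, V, others_V, f=1.0, gamma=0.95):
    """One synchronous backup of a distributed value function (DVF) sketch.

    P:        transitions, shape (A, S, S) with P[a, s, s'] = Pr(s' | s, a)
    R:        immediate rewards, shape (A, S)
    V:        this agent's current value estimate, shape (S,)
    others_V: combined value estimates received from teammates, shape (S,)
    f:        hypothetical coupling weight discounting states teammates cover
    """
    # Future value is reduced wherever teammates expect high value, which
    # steers this agent toward regions the others are unlikely to explore.
    future = V - f * others_V
    Q = R + gamma * np.einsum('ast,t->as', P, future)
    return Q.max(axis=0), Q.argmax(axis=0)

# Toy corridor of 3 states; actions 0 = left, 1 = right (deterministic).
S, A = 3, 2
P = np.zeros((A, S, S))
for s in range(S):
    P[0, s, max(s - 1, 0)] = 1.0       # moving left
    P[1, s, min(s + 1, S - 1)] = 1.0   # moving right

r = np.array([1.0, 0.0, 1.0])          # exploration reward at both frontiers
R = np.stack([r[[max(s - 1, 0) for s in range(S)]],       # reward after left
              r[[min(s + 1, S - 1) for s in range(S)]]])  # reward after right

others_V = np.array([0.0, 0.0, 5.0])   # a teammate already values state 2

V = np.zeros(S)
for _ in range(500):
    V, policy = dvf_backup(P, R, V, others_V)
```

In this toy run the agent's policy points away from state 2, which the teammate already values, illustrating how the coupling term spreads the team over the environment without a central planner.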