Conference paper, 2007

Hysteretic Q-Learning: an algorithm for decentralized reinforcement learning in cooperative multi-agent teams

Abstract

Multi-agent systems (MAS) are a field of study of growing interest in a variety of domains such as robotics or distributed control. This article focuses on decentralized reinforcement learning (RL) in cooperative MAS, where a team of independent learning robots (ILs) tries to coordinate their individual behaviors to reach a coherent joint behavior. We assume that each robot has no information about its teammates' actions. To date, RL approaches for such ILs have not guaranteed convergence to the optimal joint policy in scenarios where coordination is difficult. We investigate existing algorithms for learning coordination in cooperative MAS and propose a Q-Learning extension for ILs, called Hysteretic Q-Learning. This algorithm does not require any additional communication between robots. Its advantages are demonstrated and compared with other methods on various applications: bimatrix games, a collaborative ball-balancing task, and the pursuit domain.
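Since the abstract does not reproduce the update rule, the following is a minimal Python sketch of a hysteretic Q-value update of the kind the paper describes: each independent learner keeps its own Q-table and applies a larger learning rate when the temporal-difference error is positive than when it is negative, so that a teammate's exploratory action does not immediately erase a previously learned good estimate. The function name, parameter values and tabular representation are illustrative assumptions, not the authors' code.

    import numpy as np

    def hysteretic_q_update(Q, state, action, reward, next_state,
                            alpha=0.1, beta=0.01, gamma=0.9):
        """One hysteretic Q-Learning step for a single independent learner.

        alpha: learning rate used when the TD error is non-negative.
        beta:  smaller rate used when the TD error is negative, which makes
               the learner optimistic about its teammates' behavior.
               All values here are illustrative.
        """
        td_error = reward + gamma * np.max(Q[next_state]) - Q[state, action]
        rate = alpha if td_error >= 0 else beta
        Q[state, action] += rate * td_error
        return Q

    # Illustrative usage on a small tabular problem (5 states, 3 actions).
    Q = np.zeros((5, 3))
    Q = hysteretic_q_update(Q, state=0, action=1, reward=1.0, next_state=2)

With beta equal to alpha this reduces to standard decentralized Q-Learning, and with beta set to zero it approaches a fully optimistic update; hysteretic learners sit between these two extremes.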
Main file: iros07_matignon.pdf (437.92 KB). Origin: files produced by the author(s).

Dates and versions

hal-00187279, version 1 (14-11-2007)

Identifiers

  • HAL Id: hal-00187279, version 1

Cite

Laëtitia Matignon, Guillaume J. Laurent, Nadine Le Fort-Piat. Hysteretic Q-Learning: an algorithm for decentralized reinforcement learning in cooperative multi-agent teams. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'07), Oct 2007, San Diego, CA, United States. pp. 64-69. ⟨hal-00187279⟩
