Preprint, Working Paper. Year: 2021

Generalization in Mean Field Games by Learning Master Policies

Sarah Perrin
Mathieu Laurière
Julien Pérolat
Romuald Élie
Matthieu Geist
Olivier Pietquin

Abstract

Mean Field Games (MFGs) can potentially scale multi-agent systems to extremely large populations of agents. Yet, most of the literature assumes a single initial distribution for the agents, which limits the practical applications of MFGs. Machine Learning has the potential to solve a wider diversity of MFG problems thanks to its generalization capabilities. We study how to leverage these generalization properties to learn policies enabling a typical agent to behave optimally against any population distribution. In reference to the Master equation in MFGs, we coin the term "Master policies" to describe them, and we prove that a single Master policy provides a Nash equilibrium regardless of the initial distribution. We propose a method to learn such Master policies. Our approach relies on three ingredients: adding the current population distribution as part of the observation, approximating Master policies with neural networks, and training via Reinforcement Learning and Fictitious Play. We illustrate on numerical examples not only the efficiency of the learned Master policy but also its generalization capabilities beyond the distributions used for training.
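The abstract outlines three ingredients. As a purely illustrative aid (not taken from the paper), the sketch below shows how the first two ingredients might be realized for a finite-state MFG: the policy network receives both the agent's own state and the current population distribution as its observation, and a neural network maps that observation to action probabilities. The class name `MasterPolicy`, the layer sizes, and the discrete state space are all assumptions made for this example.

```python
import torch
import torch.nn as nn

class MasterPolicy(nn.Module):
    """Hypothetical sketch of a policy conditioned on both the agent's state
    and the population distribution, for a finite MFG with `n_states` states
    and `n_actions` actions."""

    def __init__(self, n_states: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.n_states = n_states
        self.net = nn.Sequential(
            nn.Linear(2 * n_states, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor, distribution: torch.Tensor) -> torch.Tensor:
        # state: (batch,) integer agent states; distribution: (batch, n_states) population histogram
        one_hot = nn.functional.one_hot(state, self.n_states).float()
        # Ingredient 1: the population distribution is part of the observation.
        obs = torch.cat([one_hot, distribution], dim=-1)
        # Ingredient 2: a neural network approximates the Master policy.
        return torch.softmax(self.net(obs), dim=-1)


# Usage: the same network can be queried under different population distributions.
policy = MasterPolicy(n_states=10, n_actions=4)
state = torch.tensor([3])
mu_uniform = torch.full((1, 10), 0.1)
mu_peaked = torch.zeros((1, 10))
mu_peaked[0, 0] = 1.0
print(policy(state, mu_uniform))
print(policy(state, mu_peaked))
```

Training such a network would then proceed, as the abstract indicates, by combining Reinforcement Learning with Fictitious Play (repeatedly computing best responses against an averaged population flow); that training loop is not sketched here.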

Dates and versions

hal-03416251, version 1 (05-11-2021)


Cite

Sarah Perrin, Mathieu Laurière, Julien Pérolat, Romuald Élie, Matthieu Geist, et al. Generalization in Mean Field Games by Learning Master Policies. 2021. ⟨hal-03416251⟩