Bellman equation and viscosity solutions for mean-field stochastic control problem

Abstract: We consider the stochastic optimal control problem of a McKean-Vlasov stochastic differential equation whose coefficients may depend upon the joint law of the state and control. By using feedback controls, we reformulate the problem into a deterministic control problem with only the marginal distribution of the process as controlled state variable, and prove that the dynamic programming principle holds in its general form. Then, relying on the notion of differentiability with respect to probability measures recently introduced by P.L. Lions in [32], and on a special Itô formula for flows of probability measures, we derive the (dynamic programming) Bellman equation for the mean-field stochastic control problem, and prove a verification theorem in our McKean-Vlasov framework. We give explicit solutions to the Bellman equation for the linear-quadratic mean-field control problem, with applications to mean-variance portfolio selection and a systemic risk model. We also consider a notion of lifted viscosity solutions for the Bellman equation, and show the viscosity property and uniqueness of the value function of the McKean-Vlasov control problem. Finally, we consider the case of the McKean-Vlasov control problem with open-loop controls and discuss the associated dynamic programming equation, which we compare with the case of closed-loop controls.
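As a schematic illustration only (written in the standard notation of mean-field control with the Lions L-derivative \(\partial_\mu v\), not copied from the paper, whose exact statement and assumptions may differ), the Bellman equation described in the abstract takes, for a value function \(v(t,\mu)\) on the Wasserstein space, a form of the type:

```latex
% Schematic dynamic programming equation on [0,T] x P_2(R^d).
% Here b, sigma, f, g are drift, diffusion, running and terminal costs,
% the infimum runs over (feedback) control values a, and
% \partial_\mu v denotes the Lions derivative (notation assumed).
\partial_t v(t,\mu)
  + \inf_{a}\int_{\mathbb{R}^d}\Big[
      b(x,a,\mu)\cdot\partial_\mu v(t,\mu)(x)
      + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma\sigma^{\top}(x,a,\mu)\,
          \partial_x\partial_\mu v(t,\mu)(x)\big)
      + f(x,a,\mu)\Big]\,\mu(dx) = 0,
\qquad
v(T,\mu) = \int_{\mathbb{R}^d} g(x,\mu)\,\mu(dx).
```

In the closed-loop formulation treated in the paper, the optimization is over feedback controls \(a = \alpha(x)\) inside the integral rather than a single constant action; the display above only conveys the general shape of the equation on the space of probability measures.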
Document type:
Preprint, working paper
To appear in ESAIM: COCV, 2017
Contributor: Huyen Pham
Submitted on: Tuesday, March 7, 2017 - 14:55:35
Last modified on: Friday, March 10, 2017 - 01:08:43


Files produced by the author(s)


  • HAL Id : hal-01248317, version 3
  • ARXIV : 1512.07866


Huyên Pham, Xiaoli Wei. Bellman equation and viscosity solutions for mean-field stochastic control problem. To appear in ESAIM: COCV, 2017. <hal-01248317v3>


