Dynamic consistency for Stochastic Optimal Control problems

Abstract : For a sequence of dynamic optimization problems, we aim at discussing a notion of consistency over time. This notion can be informally introduced as follows. At the very first time step $t_0$, the decision maker formulates an optimization problem that yields optimal decision rules for all the forthcoming time steps $t_0, t_1, \dots, T$; at the next time step $t_1$, he is able to formulate a new optimization problem starting at time $t_1$ that yields a new sequence of optimal decision rules. This process can be continued until the final time $T$ is reached. A family of optimization problems formulated in this way is said to be time consistent if the optimal strategies obtained when solving the original problem remain optimal for all subsequent problems. The notion of time consistency, well known in the field of Economics, has recently been introduced in the context of risk measures, notably by Artzner et al. (2007), and studied in the Stochastic Programming framework by Shapiro (2009) and for Markov Decision Processes (MDP) by Ruszczynski (2009). We link this notion with the concept of "state variable" in MDP, and show that a significant class of dynamic optimization problems is dynamically consistent, provided that an adequate state variable is chosen.
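The informal definition in the abstract can be sketched as follows, using hypothetical notation that is not taken from the article itself ($X_t$ a state, $U_t$ a decision, $W_t$ a noise, $L_t$ an instantaneous cost, $K$ a final cost):

```latex
% Problem formulated at time t_0 (and, analogously, at any later time t_1):
\min_{U_{t_0},\dots,U_{T-1}}
  \mathbb{E}\Big[\sum_{t=t_0}^{T-1} L_t(X_t, U_t) + K(X_T)\Big]
  \quad \text{s.t.} \quad X_{t+1} = f_t(X_t, U_t, W_{t+1}).
% Time consistency then reads: if (u^*_{t_0},\dots,u^*_{T-1}) is an optimal
% strategy for the problem formulated at t_0, its restriction
% (u^*_{t_1},\dots,u^*_{T-1}) remains optimal for the problem formulated
% at every subsequent time t_1 > t_0.
```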
Document type :
Journal articles


Contributor : Pierre Girardeau <>
Submitted on : Monday, May 17, 2010 - 11:29:23 AM
Last modification on : Tuesday, March 6, 2018 - 3:56:35 PM
Archived on : Thursday, September 16, 2010 - 2:34:21 PM

Files produced by the author(s)

Pierre Carpentier, Jean-Philippe Chancelier, Guy Cohen, Michel de Lara, Pierre Girardeau. Dynamic consistency for Stochastic Optimal Control problems. Annals of Operations Research, Springer Verlag, 2011, 17 p. ⟨10.1007/s10479-011-1027-8⟩. ⟨hal-00483811⟩