Exact converging bounds for Stochastic Dual Dynamic Programming via Fenchel duality

Abstract: The Stochastic Dual Dynamic Programming (SDDP) algorithm has become one of the main tools for addressing convex multistage stochastic optimal control problems. Recently, a large amount of work has been devoted to improving the convergence speed of the algorithm through cut selection and regularization, or to extending its field of application to non-linear, integer, or risk-averse problems. However, one of the main downsides of the algorithm remains the difficulty of giving an upper bound on the optimal value, which is usually estimated through Monte Carlo methods and is therefore difficult to use in the algorithm's stopping criterion. In this paper we present a dual SDDP algorithm that yields a converging exact upper bound on the optimal value of the optimization problem. Incidentally, we show how to compute an alternative control policy based on an inner approximation of the Bellman value functions instead of the outer approximation given by the standard SDDP algorithm. We illustrate the approach on an energy production problem involving production zones and transportation links between them. The numerical experiments we carry out on this example show the effectiveness of the method.
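The contrast the abstract draws between outer and inner approximations of convex value functions can be illustrated on a toy 1-D example. This is a minimal sketch, not the paper's algorithm: for a convex function `V`, the maximum of tangent cuts (as in standard SDDP) is a lower bound, while linear interpolation between evaluated points is an upper bound by convexity; the function names and sample points are illustrative choices.

```python
def V(x):
    # Example convex value function (stand-in for a Bellman function).
    return x * x

def grad_V(x):
    # Its derivative, used to build tangent cuts.
    return 2 * x

# Points at which V has been evaluated (hypothetical trial points).
points = [0.0, 0.25, 0.5, 0.75, 1.0]

def outer(x):
    # Outer approximation: max of tangent cuts V(p) + V'(p) * (x - p).
    # By convexity each cut underestimates V, so this is a lower bound.
    return max(V(p) + grad_V(p) * (x - p) for p in points)

def inner(x):
    # Inner approximation: piecewise-linear interpolation of (p, V(p)).
    # By convexity the chord lies above the graph, so this is an upper bound.
    for lo, hi in zip(points, points[1:]):
        if lo <= x <= hi:
            t = (x - lo) / (hi - lo)
            return (1 - t) * V(lo) + t * V(hi)
    raise ValueError("x outside sampled range")

x = 0.6
assert outer(x) <= V(x) <= inner(x)  # the two approximations bracket V
```

Refining the set of sample points tightens both bounds, which is the spirit in which the dual SDDP upper bound and the inner-approximation policy complement the standard outer approximation.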
Document type:
Preprint, working paper
2018

Cited literature: 20 references

https://hal-enpc.archives-ouvertes.fr/hal-01744035
Contributor: Vincent Leclère
Submitted on: Wednesday, April 18, 2018 - 09:43:07
Last modified on: Monday, November 19, 2018 - 14:28:45

File

optim-online1.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01744035, version 2

Citation

Vincent Leclère, Pierre Carpentier, Jean-Philippe Chancelier, Arnaud Lenoir, François Pacaud. Exact converging bounds for Stochastic Dual Dynamic Programming via Fenchel duality. 2018. 〈hal-01744035v2〉

Metrics

Record views: 167
File downloads: 153