Large Deviation Principle for invariant distributions of Memory Gradient Diffusions

Abstract: In this paper, we consider a class of diffusion processes based on a memory gradient descent, i.e. whose drift term is built as an average, along the past of the trajectory, of the gradient of a coercive function U. Under some classical assumptions on U, this type of diffusion is ergodic and admits a unique invariant distribution. With a view to optimization applications, we want to understand the behaviour of the invariant distribution as the diffusion coefficient goes to 0. In the non-memory case, the invariant distribution is explicit, and the so-called Laplace method shows that a Large Deviation Principle (LDP) holds with an explicit rate function, which leads to a concentration of the invariant distribution around the global minima of U. Here, except in the linear case, we have no closed formula for the invariant distribution, but we show that an LDP can still be obtained. Then, in the one-dimensional case, we derive bounds on the rate function that lead to concentration around the global minimum under some assumptions on the second derivative of U.
Document type: Preprints, Working Papers, ... (2012)


https://hal-univ-tlse3.archives-ouvertes.fr/hal-00759188
Contributor: Sébastien Gadat
Submitted on: Friday, November 30, 2012 - 2:01:42 PM
Last modification on: Friday, November 30, 2012 - 3:48:59 PM

File

GPP_EJP_revision_Long_26_11_fa...

Identifiers

  • HAL Id: hal-00759188, version 1

Citation

Sébastien Gadat, Fabien Panloup, Clément Pellegrini. Large Deviation Principle for invariant distributions of Memory Gradient Diffusions. 2012. <hal-00759188>

Metrics

Record views: 159

Document downloads: 23