Introduction to vector quantization and its applications for numerics
Abstract
We present an introductory survey of optimal vector quantization and its first applications to Numerical Probability and, to a lesser extent, to Information Theory and Data Mining. We present both theoretical results on the quantization rate of a random vector taking values in R^d (equipped with the canonical Euclidean norm) and the learning procedures used to design optimal quantizers (the CLVQ and Lloyd's I procedures). We also introduce and investigate the more recent notion of greedy quantization, which may be seen as a sequential form of optimal quantization; a rate-optimality result is established. A brief comparison with the quasi-Monte Carlo method is also carried out.
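To fix ideas, the two learning procedures named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `lloyd_quantizer` runs Lloyd's I fixed-point iteration (nearest-neighbor assignment, then replacement of each code by the conditional mean over its Voronoi cell) on an empirical sample, and `clvq` performs Competitive Learning Vector Quantization, i.e. a stochastic gradient descent on the quadratic distortion that moves only the "winning" code at each step. All function and parameter names (`n_codes`, `gamma0`, etc.) are our own choices for the sketch.

```python
import numpy as np

def lloyd_quantizer(samples, n_codes, n_iter=50, seed=0):
    """Lloyd's I iteration on an empirical distribution (k-means-style)."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with n_codes points drawn from the sample.
    codebook = samples[rng.choice(len(samples), n_codes, replace=False)]
    for _ in range(n_iter):
        # Nearest-neighbor assignment: which Voronoi cell each sample falls in.
        dists = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
        cells = dists.argmin(axis=1)
        # Replace each code by the conditional mean over its cell.
        for i in range(n_codes):
            mask = cells == i
            if mask.any():
                codebook[i] = samples[mask].mean(axis=0)
    return codebook

def clvq(sample_stream, n_codes, dim, gamma0=0.5, seed=0):
    """CLVQ: stochastic gradient descent on the quadratic distortion,
    processing one sample at a time with a decreasing step."""
    rng = np.random.default_rng(seed)
    codebook = rng.standard_normal((n_codes, dim))
    for t, x in enumerate(sample_stream, start=1):
        gamma = gamma0 / t  # Robbins-Monro step sequence
        winner = np.linalg.norm(codebook - x, axis=1).argmin()
        # Only the nearest code moves toward the incoming sample.
        codebook[winner] += gamma * (x - codebook[winner])
    return codebook
```

In practice Lloyd's I converges quickly but requires the whole sample in memory, whereas CLVQ is online and suits streaming data; both target the same quadratic distortion.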
Keywords
Optimal vector quantization
greedy quantization
quantization tree
Lloyd's I algorithm
Competitive Learning Vector Quantization
stochastic gradient descent
learning algorithms
Zador's Theorem
Feynman-Kac's formula
variational inequality
optimal stopping
quasi-Monte Carlo method
nearest neighbor search
partial distance search
Domains
Probability [math.PR]
Origin: Files produced by the author(s)