Fast and Memory Optimal Low-Rank Matrix Approximation - HAL Open Archive
Conference paper · Year: 2015

Fast and Memory Optimal Low-Rank Matrix Approximation

Abstract

In this paper, we revisit the problem of constructing a near-optimal rank-k approximation of a matrix M ∈ [0, 1]^{m×n} under the streaming data model, where the columns of M are revealed sequentially. We present SLA (Streaming Low-rank Approximation), an algorithm that is asymptotically accurate when k·s_{k+1}(M) = o(√(mn)), where s_{k+1}(M) is the (k+1)-th largest singular value of M. This means that its average mean-square error converges to 0 as m and n grow large (i.e., ‖M̂^{(k)} − M^{(k)}‖²_F = o(mn) with high probability, where M̂^{(k)} and M^{(k)} denote the output of SLA and the optimal rank-k approximation of M, respectively). Our algorithm makes one pass over the data if the columns of M are revealed in a random order, and two passes if they arrive in an arbitrary order. To reduce its memory footprint and complexity, SLA uses random sparsification and samples each entry of M with a small probability δ. In turn, SLA is memory optimal, as its required memory space scales as k(m+n), the dimension of its output. Furthermore, SLA is computationally efficient, as it runs in O(δkmn) time (a constant number of operations per observed entry of M), which can be as small as O(k log⁴(m) n) for an appropriate choice of δ and if n ≥ m.
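The sample-then-truncate idea behind the sparsification step can be illustrated with a toy sketch. Note that this is not the authors' SLA algorithm: SLA keeps only O(k(m+n)) memory and never materializes the sparsified matrix, whereas this illustration stores the sampled matrix in full and calls a dense SVD. The function name and parameters below are hypothetical, chosen only for this example.

```python
import numpy as np

def sparsified_rank_k_approx(columns, k, delta, seed=0):
    """Toy illustration (NOT the SLA algorithm): stream columns of M,
    keep each entry independently with probability delta (rescaled by
    1/delta so the sparsified matrix is an unbiased estimate of M),
    then return the best rank-k approximation of the sampled matrix."""
    rng = np.random.default_rng(seed)
    sampled_cols = []
    for col in columns:  # columns revealed sequentially, as in the streaming model
        mask = rng.random(col.shape) < delta        # sample each entry w.p. delta
        sampled_cols.append(np.where(mask, col / delta, 0.0))
    A = np.column_stack(sampled_cols)               # sparsified estimate of M
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]           # rank-k truncation

# Usage: a rank-2 matrix with entries in [0, 1], columns passed one at a time
m, n, k = 200, 300, 2
rng = np.random.default_rng(1)
M = rng.random((m, k)) @ rng.random((k, n)) / k     # entries lie in [0, 1]
M_hat = sparsified_rank_k_approx(M.T, k, delta=0.7)
```

Even though roughly 30% of the entries are discarded, the rank-k truncation averages out the sampling noise, so M_hat stays close to M in Frobenius norm; SLA achieves a comparable effect with far smaller δ and without ever storing the full sampled matrix.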
Main file: 5929-fast-and-memory-optimal-low-rank-matrix-approximation.pdf (249.38 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01254913 , version 1 (12-01-2016)

Identifiers

  • HAL Id : hal-01254913 , version 1

Cite

Yun Se-Young, Marc Lelarge, Alexandre Proutière. Fast and Memory Optimal Low-Rank Matrix Approximation. NIPS 2015, Dec 2015, Montreal, Canada. ⟨hal-01254913⟩
419 views
294 downloads
