Preprint, Working Paper. Year: 2020

Learning from DPPs via Sampling: Beyond HKPV and symmetry

Abstract

Determinantal point processes (DPPs) have become a significant tool for recommendation systems, feature selection, and summary extraction, harnessing the intrinsic ability of these probabilistic models to promote sample diversity. The ability to sample from DPPs is paramount to the empirical investigation of these models. Most exact samplers are variants of a spectral meta-algorithm due to Hough, Krishnapur, Peres and Virág (henceforth HKPV), which is in general time and resource intensive. For DPPs with symmetric kernels, scalable HKPV samplers have been proposed that either first downsample the ground set of items, or force the kernel to be low-rank, using e.g. Nyström-type decompositions. In the present work, we contribute a radically different approach from HKPV. Exploiting the fact that many statistical and learning objectives can be effectively accomplished by sampling only certain key observables of a DPP (so-called linear statistics), we invoke an expression for the Laplace transform of such an observable as a single determinant, which holds in complete generality. Combining traditional low-rank approximation techniques with Laplace inversion algorithms from numerical analysis, we show how to directly approximate the distribution function of a linear statistic of a DPP. This distribution function can then be used in hypothesis testing, or to actually sample the linear statistic, as required. Our approach is scalable and applies to very general DPPs, beyond traditional symmetric kernels.
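For concreteness, the determinant identity underlying this approach can be stated as a standard fact about finite DPPs: if X is a DPP on {1, ..., n} with correlation kernel K and Lambda_f = sum_{x in X} f(x) is a linear statistic, then E[exp(-s Lambda_f)] = det(I - diag(1 - exp(-s f)) K). The short Python sketch below checks this identity by brute-force enumeration on a small symmetric L-ensemble; the kernel construction, test function, and variable names are illustrative assumptions, not code from the paper.

import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Build a correlation kernel K from a random symmetric L-ensemble,
# K = L (I + L)^{-1}, so that subset probabilities are easy to
# enumerate: P(X = S) = det(L_S) / det(I + L).
A = rng.standard_normal((n, n))
L = A @ A.T
K = L @ np.linalg.inv(np.eye(n) + L)

f = rng.uniform(0.0, 1.0, size=n)  # test function defining Lambda_f
s = 0.7                            # Laplace variable

# Left-hand side: E[exp(-s Lambda_f)] by summing over all 2^n subsets.
Z = np.linalg.det(np.eye(n) + L)
lhs = 0.0
for r in range(n + 1):
    for S in itertools.combinations(range(n), r):
        S = list(S)
        p_S = np.linalg.det(L[np.ix_(S, S)]) / Z  # det of the empty matrix is 1
        lhs += p_S * np.exp(-s * f[S].sum())

# Right-hand side: a single n-by-n determinant.
rhs = np.linalg.det(np.eye(n) - np.diag(1.0 - np.exp(-s * f)) @ K)

print(lhs, rhs)  # agree up to floating-point error

The left-hand side costs 2^n terms, while the right-hand side is a single n-by-n determinant and remains meaningful for nonsymmetric K, which is the starting point of the paper's scalable approximation.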

Dates and versions

hal-02902813 , version 1 (20-07-2020)

Identifiers

Cite

R. Bardenet, Subhroshekhar Ghosh. Learning from DPPs via Sampling: Beyond HKPV and symmetry. 2020. ⟨hal-02902813⟩