Sampled Gromov Wasserstein - HAL open archive
Journal article, Machine Learning, 2021

Sampled Gromov Wasserstein

Tanguy Kerdoncuff
Rémi Emonet
Marc Sebban

Abstract

Optimal Transport (OT) has proven to be a powerful tool to compare probability distributions in machine learning, but dealing with probability measures lying in different spaces remains an open problem. To address this issue, the Gromov Wasserstein distance (GW) only considers intra-distribution pairwise (dis)similarities. However, for two (discrete) distributions with N points, state-of-the-art solvers have an O(N^4) complexity per iteration when using an arbitrary loss function, making most real-world problems intractable. In this paper, we introduce a new iterative way to approximate GW, called Sampled Gromov Wasserstein, which uses the current estimate of the transport plan to guide the sampling of cost matrices. This simple idea, supported by theoretical convergence guarantees, comes with an O(N^2) solver. A special case of Sampled Gromov Wasserstein, which can be seen as the natural extension of the well-known Sliced Wasserstein to distributions lying in different spaces, reduces the complexity even further to O(N log(N)). Our contributions are supported by experiments on synthetic and real datasets.
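For reference, the discrete GW problem referred to in the abstract is usually written, for intra-distribution cost matrices C and C' of size N x N with weights p and q, as

    GW_L(C, C', p, q) = \min_{T \in \Pi(p, q)} \sum_{i,j,k,l} L(C_{ik}, C'_{jl}) \, T_{ij} T_{kl},
    where \Pi(p, q) = \{ T \in \mathbb{R}_+^{N \times N} : T \mathbf{1} = p, \; T^\top \mathbf{1} = q \}.

With an arbitrary loss L, a single evaluation of this objective already involves N^4 terms, which is the per-iteration cost mentioned above; guiding the sampling of cost entries with the current plan T, as the abstract describes, is what allows Sampled Gromov Wasserstein to avoid forming this sum exactly.

A minimal usage sketch is given below, assuming the Python Optimal Transport library (POT) and its exact GW solver; it is illustrative only and is not the implementation released with the paper.

    # Minimal sketch: exact GW with the POT library ("pip install pot").
    # This is the costly baseline discussed in the abstract, not the
    # Sampled Gromov Wasserstein solver introduced in the paper.
    import numpy as np
    import ot

    rng = np.random.default_rng(0)
    N = 50
    X = rng.normal(size=(N, 2))   # samples living in R^2
    Y = rng.normal(size=(N, 3))   # samples living in a different space, R^3

    # GW only needs the intra-distribution pairwise (dis)similarity matrices.
    C1 = ot.dist(X, X)
    C2 = ot.dist(Y, Y)
    p, q = ot.unif(N), ot.unif(N)

    # Exact solver for the problem written above. With the squared loss, POT
    # factorizes the N^4 tensor; the O(N^4) cost discussed in the abstract
    # concerns arbitrary loss functions.
    T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')

    # Recent POT releases also ship sampled/pointwise GW solvers related to
    # this line of work (e.g. ot.gromov.sampled_gromov_wasserstein); check the
    # POT documentation for the exact name and signature before relying on it.
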
Main file
Sampled_Gromov_Wasserstein_2.pdf (19.12 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03232509 , version 1 (21-05-2021)
hal-03232509 , version 2 (14-09-2021)

Identifiers

  • HAL Id: hal-03232509
  • DOI: 10.1007/s10994-021-06035-1

Cite

Tanguy Kerdoncuff, Rémi Emonet, Marc Sebban. Sampled Gromov Wasserstein. Machine Learning, 2021, ⟨10.1007/s10994-021-06035-1⟩. ⟨hal-03232509v2⟩