Journal articles

Negative Sampling Strategies for Contrastive Self-Supervised Learning of Graph Representations

Abstract: Contrastive learning has become a successful approach for learning powerful text and image representations in a self-supervised manner. Contrastive frameworks learn to distinguish between representations coming from augmentations of the same data point (positive pairs) and those of other (negative) examples. Recent studies aim at extending methods from contrastive learning to graph data. In this work, we propose a general framework for learning node representations in a self-supervised manner, called Graph Contrastive Learning (GraphCL). It learns node embeddings by maximizing the similarity between the node representations of two randomly perturbed versions of the same graph. We use graph neural networks to produce two representations of the same node and leverage a contrastive learning loss to maximize agreement between them. We investigate standard and new negative sampling strategies, and compare them against a negative-sampling-free approach. We demonstrate that our approach significantly outperforms the state of the art in unsupervised learning on a number of node classification benchmarks, in both transductive and inductive learning setups.
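The abstract describes maximizing agreement between two perturbed views of the same graph via a contrastive loss, with other nodes serving as negatives. A minimal NumPy sketch of an NT-Xent-style node-level objective is shown below; it assumes the two views `z1` and `z2` have already been produced by a GNN encoder (not shown), and the details are illustrative rather than the paper's exact formulation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss between two views of N node embeddings.

    z1, z2: (N, d) arrays; row i of each view is the same node under a
    different random graph perturbation (a positive pair). All other
    rows in both views act as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    n = z1.shape[0]

    z = np.concatenate([z1, z2], axis=0)            # (2N, d)
    sim = z @ z.T / temperature                     # (2N, 2N) similarities
    np.fill_diagonal(sim, -np.inf)                  # exclude self-similarity

    # the positive for row i is row i+N (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# toy usage with random "embeddings" for 4 nodes in 8 dimensions
rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 8))
loss_random = nt_xent_loss(z1, rng.normal(size=(4, 8)))
loss_aligned = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=(4, 8)))
# aligned views should yield a lower loss than unrelated random views
```

Lowering the temperature sharpens the softmax, penalizing hard negatives more strongly; this is one of the knobs that interacts with the choice of negative sampling strategy studied in the paper.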
Contributor: Philippe Ciblat
Submitted on : Tuesday, February 15, 2022 - 4:01:36 PM
Last modification on : Tuesday, March 8, 2022 - 7:33:41 PM
Long-term archiving on: Monday, May 16, 2022 - 8:23:05 PM
  • HAL Id: hal-03575619, version 1


Hakim Hafidi, Mounir Ghogho, Philippe Ciblat, Ananthram Swami. Negative Sampling Strategies for Contrastive Self-Supervised Learning of Graph Representations. Signal Processing, Elsevier, 2022, 190 (4). ⟨hal-03575619⟩
