Encouraging Intra-Class Diversity Through a Reverse Contrastive Loss for Better Single-Source Domain Generalization
Conference paper, year: 2021

Encouraging Intra-Class Diversity Through a Reverse Contrastive Loss for Better Single-Source Domain Generalization

Thomas Duboudin
Emmanuel Dellandréa
Corentin Abgrall
Gilles Hénaff
Liming Chen

Abstract

Traditional deep learning algorithms often fail to generalize when they are tested outside of the domain of the training data. Because data distributions can change dynamically in real-life applications once a learned model is deployed, in this paper we are interested in single-source domain generalization (SDG), which aims to develop deep learning algorithms able to generalize from a single training domain when no information about the test domain is available at training time. Firstly, we design two simple MNIST-based SDG benchmarks, namely MNIST Color SDG-MP and MNIST Color SDG-UP, which highlight two fundamental SDG issues of increasing difficulty: 1) a pattern correlated with the class in the training domain is missing in the test domain (SDG-MP), or 2) is present in the test domain but uncorrelated with the class (SDG-UP). This is in sharp contrast with current domain generalization (DG) benchmarks, which mix up different correlation and variation factors and thereby make it hard to disentangle success or failure factors when benchmarking DG algorithms. We further evaluate several state-of-the-art SDG algorithms on our simpler benchmark, MNIST Color SDG-MP, and show that the SDG-MP issue remains largely unsolved despite a decade of effort in developing DG algorithms. Finally, to deal with SDG-MP, we propose a partially reversed contrastive loss that encourages intra-class diversity and the discovery of less strongly correlated patterns, and we show that the proposed approach is very effective on our MNIST Color SDG-MP benchmark.
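The abstract names the paper's key ingredient, a partially reversed contrastive loss, without giving its formulation. As a rough illustration only, the sketch below shows in PyTorch what a "reversed" contrastive term could look like: instead of pulling same-class embeddings together, it penalizes their similarity so that same-class features spread out, encouraging intra-class diversity. The function name, the temperature, and the reverse_weight factor are assumptions for illustration, not the paper's actual loss.

    import torch
    import torch.nn.functional as F

    def reverse_contrastive_loss(features, labels, temperature=0.5, reverse_weight=1.0):
        # features: (N, D) batch of embeddings; labels: (N,) integer class ids.
        z = F.normalize(features, dim=1)               # unit-norm embeddings
        sim = z @ z.t() / temperature                  # pairwise similarities
        same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
        not_self = ~torch.eye(len(labels), dtype=torch.bool, device=features.device)
        pos_mask = same_class & not_self               # same-class, non-identical pairs
        if not pos_mask.any():
            return features.new_zeros(())
        # A standard supervised contrastive loss would maximize similarity over
        # pos_mask; the reversed term penalizes it instead, pushing same-class
        # features apart.
        return reverse_weight * sim[pos_mask].mean()

    # Hypothetical usage: combine with the usual classification objective.
    # loss = F.cross_entropy(logits, labels) + reverse_contrastive_loss(feats, labels)

Since the reversal described in the paper is only partial, a term of this kind would presumably be balanced against a standard objective rather than applied alone; the exact formulation is given in the attached PDF.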
Main file
arow_2021.pdf (485.75 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03260124 , version 1 (14-06-2021)
hal-03260124 , version 2 (13-10-2022)
hal-03260124 , version 3 (31-01-2023)
hal-03260124 , version 4 (23-02-2023)

Identifiers

  • HAL Id: hal-03260124, version 2

Cite

Thomas Duboudin, Emmanuel Dellandréa, Corentin Abgrall, Gilles Hénaff, Liming Chen. Encouraging Intra-Class Diversity Through a Reverse Contrastive Loss for Better Single-Source Domain Generalization. International Conference on Computer Vision - Workshop on Adversarial Robustness In the Real World, 2021, Virtual, France. ⟨hal-03260124v2⟩
69 Views
144 Downloads
