Conference paper, 2020

Using Sentences as Semantic Representations in Large Scale Zero-Shot Learning

Abstract

Zero-shot learning (ZSL) aims to recognize instances of unseen classes, for which no visual instance is available during training, by learning multimodal relations between samples from seen classes and corresponding class semantic representations. These class representations usually consist of either attributes, which do not scale well to large datasets, or word embeddings, which lead to poorer performance. A good trade-off could be to employ short sentences in natural language as class descriptions. We explore different solutions to use such short descriptions in a ZSL setting and show that while simple methods cannot achieve very good results with sentences alone, a combination of usual word embeddings and sentences can significantly outperform the current state of the art.
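The general setup the abstract describes can be sketched with a simple linear compatibility model. The snippet below is an illustrative assumption, not the authors' actual method: each class prototype is built by concatenating a word embedding of the class name with a sentence embedding of its short description, a ridge regression maps visual features of seen-class images to that combined semantic space, and unseen classes are then ranked by cosine similarity. All dimensions, the random placeholder data, and the ridge solver are assumptions made for illustration.

# Minimal ZSL sketch with combined word + sentence class representations.
# Not the paper's exact model; shapes and data are placeholders.
import numpy as np

rng = np.random.default_rng(0)

n_seen, n_unseen = 10, 5                 # number of seen / unseen classes
d_img, d_word, d_sent = 512, 300, 768    # feature dimensions (illustrative)

# Hypothetical precomputed features: visual features for seen-class images,
# word embeddings of class names, sentence embeddings of class descriptions.
X_seen = rng.normal(size=(200, d_img))           # 200 training images
y_seen = rng.integers(0, n_seen, size=200)       # their seen-class labels
word_emb = rng.normal(size=(n_seen + n_unseen, d_word))
sent_emb = rng.normal(size=(n_seen + n_unseen, d_sent))

# Combine the two semantic sources into one L2-normalized prototype per class.
prototypes = np.concatenate([word_emb, sent_emb], axis=1)
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

# Learn a linear map W from visual space to semantic space by ridge regression
# (a simple stand-in for the multimodal relation learned on seen classes).
S_train = prototypes[y_seen]                     # target prototype per image
lam = 1.0
W = np.linalg.solve(X_seen.T @ X_seen + lam * np.eye(d_img), X_seen.T @ S_train)

def predict_unseen(x_img):
    # Project the image into semantic space and score unseen prototypes only.
    proj = x_img @ W
    proj = proj / np.linalg.norm(proj)
    scores = prototypes[n_seen:] @ proj          # cosine similarity
    return n_seen + int(np.argmax(scores))       # predicted unseen-class index

print(predict_unseen(rng.normal(size=d_img)))

In this sketch, dropping either word_emb or sent_emb from the concatenation reduces the prototype to a single semantic source, which is the kind of comparison the abstract alludes to when contrasting sentences alone with the combined representation.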
Main file: 2010.02959.pdf (818.83 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03003689, version 1 (13-11-2020)

Identifiers

  • HAL Id: hal-03003689, version 1

Cite

Yannick Le Cacheux, Hervé Le Borgne, Michel Crucianu. Using Sentences as Semantic Representations in Large Scale Zero-Shot Learning. ECCV 2020 workshop Transferring and adapting source knowledge in computer vision (TASK-CV), Aug 2020, Glasgow, United Kingdom. ⟨hal-03003689⟩
