Learning Multi-Modal Word Representation Grounded in Visual Context

Éloi Zablocki 1 Benjamin Piwowarski 2 Laure Soulier 1 Patrick Gallinari 1
1 MLIA - Machine Learning and Information Access
LIP6 - Laboratoire d'Informatique de Paris 6
2 BD - Bases de Données
LIP6 - Laboratoire d'Informatique de Paris 6
Abstract: Representing the semantics of words is a long-standing problem for the natural language processing community. Most methods compute word semantics from their textual context in large corpora. More recently, researchers have attempted to integrate perceptual and visual features. Most of these works consider the visual appearance of objects to enhance word representations, but they ignore the visual environment and context in which objects appear. We propose to unify text-based and vision-based techniques by simultaneously leveraging textual and visual context to learn multimodal word embeddings. We explore various choices for what can serve as a visual context and present an end-to-end method to integrate visual context elements into a multimodal skip-gram model. We provide experiments and extensive analysis of the obtained results.
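
For illustration only, the sketch below shows how a skip-gram objective with negative sampling could be extended with a visual-context term, so that a target word predicts both a textual context word and a visual context element observed in the same image. This is not the authors' implementation: the array names, dimensions, learning rate, and the choice to index visual context elements in the same vocabulary are assumptions made for this sketch.

# A minimal sketch (not the authors' code) of a skip-gram objective extended with a
# visual-context term, in numpy. All names, sizes, and the negative-sampling update
# are assumptions made for illustration only.
import numpy as np

rng = np.random.default_rng(0)

V, D  = 1000, 50                              # vocabulary size, embedding dimension (assumed)
W_in  = rng.normal(scale=0.1, size=(V, D))    # target word embeddings
W_txt = rng.normal(scale=0.1, size=(V, D))    # textual-context embeddings
W_vis = rng.normal(scale=0.1, size=(V, D))    # visual-context embeddings (hypothetical)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multimodal_sgns_step(target, text_ctx, visual_ctx, lr=0.025, k=5):
    """One stochastic update: the target word predicts both a textual context
    word and a co-occurring visual context element, each with k negative samples."""
    v = W_in[target].copy()                   # read the input vector before updating it
    for table, ctx_id in ((W_txt, text_ctx), (W_vis, visual_ctx)):
        ids = np.concatenate(([ctx_id], rng.integers(0, V, size=k)))
        labels = np.concatenate(([1.0], np.zeros(k)))
        grad = (sigmoid(table[ids] @ v) - labels)[:, None]   # d(neg. log-likelihood)/d(score)
        W_in[target] -= lr * (grad * table[ids]).sum(axis=0)
        table[ids]   -= lr * grad * v

# toy usage: word 3 co-occurs textually with word 7 and visually with context element 42
multimodal_sgns_step(target=3, text_ctx=7, visual_ctx=42)

In practice the visual term would be derived from image regions or object detections rather than a shared word vocabulary; the point of the sketch is only the joint textual and visual prediction objective described in the abstract.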
Document type: Conference paper
Association for the Advancement of Artificial Intelligence (AAAI), Feb 2018, New Orleans, United States. 2018

https://hal.archives-ouvertes.fr/hal-01632414
Contributor: Eloi Zablocki
Submitted on: Friday, 10 November 2017 - 10:40:06
Last modified on: Thursday, 11 January 2018 - 06:27:19


Identifiers

  • HAL Id : hal-01632414, version 1
  • ARXIV : 1711.03483

Citation

Éloi Zablocki, Benjamin Piwowarski, Laure Soulier, Patrick Gallinari. Learning Multi-Modal Word Representation Grounded in Visual Context. Association for the Advancement of Artificial Intelligence (AAAI), Feb 2018, New Orleans, United States. 2018. 〈hal-01632414〉
