
Learning Multi-Modal Word Representation Grounded in Visual Context

Abstract: Representing the semantics of words is a long-standing problem for the natural language processing community. Most methods compute word semantics from their textual context in large corpora. More recently, researchers have attempted to integrate perceptual and visual features. Most of these works consider the visual appearance of objects to enhance word representations, but they ignore the visual environment and context in which objects appear. We propose to unify text-based techniques with vision-based techniques by simultaneously leveraging textual and visual context to learn multimodal word embeddings. We explore various choices for what can serve as a visual context and present an end-to-end method to integrate visual context elements into a multimodal skip-gram model. We provide experiments and an extensive analysis of the obtained results.
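
To give a concrete picture of how textual and visual context can be combined in a skip-gram objective, the following is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation: the negative-sampling formulation, the visual_proj projection layer, and the assumption that the visual context arrives as a single pooled feature vector (e.g. CNN features of the surrounding scene) are illustrative choices, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultimodalSkipGram(nn.Module):
    # Hypothetical sketch of a skip-gram model with an added visual-context
    # term; the paper's actual architecture and loss may differ.
    def __init__(self, vocab_size, embed_dim, visual_dim):
        super().__init__()
        self.in_embed = nn.Embedding(vocab_size, embed_dim)   # target word vectors
        self.out_embed = nn.Embedding(vocab_size, embed_dim)  # textual context vectors
        # Maps a visual-context feature vector into the word embedding space.
        self.visual_proj = nn.Linear(visual_dim, embed_dim)

    def forward(self, target, pos_context, neg_context, visual_ctx):
        v_t = self.in_embed(target)          # (B, D)
        v_pos = self.out_embed(pos_context)  # (B, D)
        v_neg = self.out_embed(neg_context)  # (B, K, D)

        # Standard skip-gram with negative sampling over the textual context.
        pos_score = F.logsigmoid((v_t * v_pos).sum(-1))
        neg_score = F.logsigmoid(-torch.bmm(v_neg, v_t.unsqueeze(-1)).squeeze(-1)).sum(-1)
        text_loss = -(pos_score + neg_score).mean()

        # Visual-context term: the target word should also score highly against
        # a representation of the visual environment it appears in.
        v_img = self.visual_proj(visual_ctx)  # (B, D)
        visual_loss = -F.logsigmoid((v_t * v_img).sum(-1)).mean()

        return text_loss + visual_loss

# Toy usage with random tensors (shapes only; not real training data).
model = MultimodalSkipGram(vocab_size=10_000, embed_dim=300, visual_dim=2048)
target = torch.randint(0, 10_000, (32,))
pos_context = torch.randint(0, 10_000, (32,))
neg_context = torch.randint(0, 10_000, (32, 5))
visual_ctx = torch.randn(32, 2048)
loss = model(target, pos_context, neg_context, visual_ctx)
loss.backward()

In this sketch the visual grounding is applied at the loss level rather than by concatenating features, so words without associated images would still receive embeddings from the textual term alone; this is one possible reading of "integrating visual context elements into a multimodal skip-gram model", not the paper's definitive design.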
Contributor: Éloi Zablocki
Submitted on: November 10, 2017
Last modified on: January 19, 2022



  • HAL Id: hal-01632414, version 1
  • arXiv: 1711.03483


Éloi Zablocki, Benjamin Piwowarski, Laure Soulier, Patrick Gallinari. Learning Multi-Modal Word Representation Grounded in Visual Context. AAAI Conference on Artificial Intelligence (AAAI 2018), Feb 2018, New Orleans, United States. ⟨hal-01632414⟩


