Combining Geometric, Textual and Visual Features for Predicting Prepositions in Image Descriptions

Abstract: We investigate the role that geometric, textual and visual features play in the task of predicting a preposition that links two visual entities depicted in an image. The task is an important part of the subsequent process of generating image descriptions. We explore the prediction of prepositions for a pair of entities, both when the labels of the entities are known and when they are unknown. In all settings we found clear evidence that all three feature types contribute to the prediction task.
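The paper itself does not include code; as a rough illustration of the task setup described in the abstract, the sketch below shows one plausible way to combine geometric features (derived from the two entities' bounding boxes), textual features (embeddings of the entity labels) and visual features (precomputed region descriptors) into a single vector for preposition classification. All function names, the feature choices and the use of a logistic-regression classifier over concatenated features are assumptions made for illustration, not the authors' actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def geometric_features(box_a, box_b):
    """Simple spatial features from two (x, y, w, h) bounding boxes:
    normalised offsets, area ratio and aspect ratios. Illustrative only;
    the paper's exact geometric features may differ."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    return np.array([
        (xb - xa) / wa,          # horizontal offset, scaled by the first box width
        (yb - ya) / ha,          # vertical offset, scaled by the first box height
        (wb * hb) / (wa * ha),   # area ratio of the two boxes
        wa / ha,                 # aspect ratio of the first box
        wb / hb,                 # aspect ratio of the second box
    ])

def build_feature_vector(emb_a, emb_b, cnn_a, cnn_b, box_a, box_b):
    """Concatenate textual (word embeddings of the two entity labels),
    visual (precomputed CNN region descriptors) and geometric features.
    All inputs are assumed to be 1-D NumPy arrays / 4-tuples."""
    return np.concatenate([geometric_features(box_a, box_b),
                           emb_a, emb_b, cnn_a, cnn_b])

def train(examples):
    """Fit a multiclass classifier over annotated entity pairs.
    Each example is assumed to be a dict with a 'features' tuple matching
    build_feature_vector's arguments and a gold 'preposition' label,
    e.g. "on", "under", "next to"."""
    X = np.stack([build_feature_vector(*ex["features"]) for ex in examples])
    y = [ex["preposition"] for ex in examples]
    clf = LogisticRegression(max_iter=1000)
    return clf.fit(X, y)
```

When entity labels are unknown, the textual embeddings in this sketch would simply be dropped or replaced by predicted-label embeddings, mirroring the known/unknown settings the abstract contrasts.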
Metadata

https://hal.archives-ouvertes.fr/hal-01375638
Contributor: Emmanuel Dellandrea
Submitted on: Monday, October 3, 2016 - 1:12:52 PM
Last modification on: Wednesday, November 20, 2019 - 3:05:54 AM

Identifiers

  • HAL Id: hal-01375638, version 1

Citation

Arnau Ramisa, Josiah Wang, Ying Lu, Emmanuel Dellandréa, Francesc Moreno-Noguer, et al. Combining Geometric, Textual and Visual Features for Predicting Prepositions in Image Descriptions. Conference on Empirical Methods in Natural Language Processing, Sep 2015, Lisbon, Portugal. pp.214-220. ⟨hal-01375638⟩
