Conference paper, Year: 2016

Does Multimodality Help Human and Machine for Translation and Image Captioning?

Abstract

This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
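As an illustrative aside, the sketch below shows how corpus-level BLEU, one of the two automatic metrics named above, can be computed for a set of translation hypotheses. It uses the sacrebleu Python library and hypothetical toy sentences; it is not the evaluation script used in the shared task itself.

```python
# Minimal sketch: corpus-level BLEU scoring of system outputs against
# references, in the spirit of the WMT automatic evaluation.
# Assumptions: the sacrebleu library; toy example sentences.
import sacrebleu

# Hypothetical system outputs, one per source segment.
hypotheses = [
    "a man is riding a bicycle down the street",
    "two dogs are playing in the snow",
]

# One reference translation per segment.
references = [
    "a man rides a bike down the street",
    "two dogs play in the snow",
]

# sacrebleu expects a list of reference streams, so a single reference
# set is wrapped in an outer list.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```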
Main file: wmt16_multimodal_LIUMCVC.pdf (356.26 KB)
Origin: files produced by the author(s)

Dates and versions

hal-01433183, version 1 (17-01-2018)

Identifiers

  • HAL Id: hal-01433183, version 1

Cite

Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes Garcia-Martinez, et al. Does Multimodality Help Human and Machine for Translation and Image Captioning? First Conference on Machine Translation, Aug 2016, Berlin, Germany. pp. 627-633. ⟨hal-01433183⟩