Does Multimodality Help Human and Machine for Translation and Image Captioning?

Abstract: This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
Document type: Conference paper
First Conference on Machine Translation (WMT16), Aug 2016, Berlin, Germany. Proceedings of the First Conference on Machine Translation, vol. 2, pp. 627-633, 2016.


https://hal.archives-ouvertes.fr/hal-01433183
Contributor: Sylvain Meignier
Submitted on: Wednesday, January 17, 2018 - 12:54:42
Last modified on: Monday, January 22, 2018 - 10:08:23
Document(s) archived on: Monday, May 7, 2018 - 16:57:19

File

wmt16_multimodal_LIUMCVC.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01433183, version 1

Citation

Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes Garcia-Martinez, et al. Does Multimodality Help Human and Machine for Translation and Image Captioning? First Conference on Machine Translation, Aug 2016, Berlin, Germany. Proceedings of the First Conference on Machine Translation, vol. 2, pp. 627-633, 2016. 〈hal-01433183〉
