
Using Whole Document Context in Neural Machine Translation

Abstract: In Machine Translation, considering the document as a whole can help to resolve ambiguities and inconsistencies. In this paper, we propose a simple yet promising approach to adding contextual information to Neural Machine Translation. We present a method to add source-side context that captures the whole document with accurate boundaries, taking every word into account. We provide this additional information to a Transformer model and study the impact of our method on three language pairs. The proposed approach obtains promising results on the English-German, English-French and French-English document-level translation tasks. We observe interesting cross-sentential behaviors where the model learns to use document-level information to improve translation coherence.
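The abstract does not spell out how the document context is computed, so the following is only an illustrative sketch under one common assumption: a document-level vector is built by averaging the embeddings of every word within the document's boundaries ("taking every word into account") and is then added to each sentence's source embeddings before they reach the Transformer encoder. All names (`embed`, `document_vector`, `contextualize`) and the toy vocabulary are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical sketch (NOT the paper's exact method): inject a whole-document
# context vector into the source embeddings fed to a Transformer encoder.
rng = np.random.default_rng(0)
DIM = 8  # toy embedding size
VOCAB = {w: rng.standard_normal(DIM) for w in
         ["the", "bank", "is", "closed", "river", "flows", "by", "it"]}

def embed(sentence):
    """Look up embeddings for one tokenized sentence -> (len, DIM) array."""
    return np.stack([VOCAB[w] for w in sentence])

def document_vector(document):
    """Mean embedding over every word of every sentence in the document,
    respecting exact document boundaries."""
    all_words = [w for sent in document for w in sent]
    return np.mean(np.stack([VOCAB[w] for w in all_words]), axis=0)

def contextualize(document):
    """Add the shared document vector to each sentence's token embeddings,
    yielding context-augmented source representations for the encoder."""
    doc_vec = document_vector(document)
    return [embed(sent) + doc_vec for sent in document]

# Two sentences where document context could disambiguate "bank".
doc = [["the", "bank", "is", "closed"],
       ["the", "river", "flows", "by", "it"]]
augmented = contextualize(doc)
```

Because every sentence in the document receives the same document vector, the encoder sees a shared cross-sentential signal, which is one plausible way such a model could learn to keep translations coherent across sentences.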

Cited literature: 24 references
Contributor: Christophe Servan
Submitted on: Tuesday, October 15, 2019 - 11:47:46 AM
Last modification on: Thursday, November 25, 2021 - 3:12:05 PM
Long-term archiving on: Friday, January 17, 2020 - 11:54:46 AM




  • HAL Id: hal-02316397, version 1



Valentin Macé, Christophe Servan. Using Whole Document Context in Neural Machine Translation. 16th International Workshop on Spoken Language Translation 2019, Nov 2019, Hong-Kong, China. ⟨hal-02316397⟩


