Conference paper — Year: 2021

Data-Efficient Information Extraction from Documents with Pre-Trained Language Models

Abstract

As for many text understanding and generation tasks, pre-trained language models have emerged as a powerful approach for extracting information from business documents. However, their performance has not been properly studied in data-constrained settings, which are often encountered in industrial applications. In this paper, we show that LayoutLM, a pre-trained model recently proposed for encoding 2D documents, exhibits high sample efficiency when fine-tuned on public and real-world Information Extraction (IE) datasets. Indeed, LayoutLM reaches more than 80% of its full performance with as few as 32 documents for fine-tuning. Compared with a strong baseline that learns IE from scratch, the pre-trained model needs between 4 and 30 times fewer annotated documents in the toughest data conditions. Finally, LayoutLM performs better on the real-world dataset when it has first been fine-tuned on the full public dataset, indicating valuable knowledge transfer abilities. We therefore advocate the use of pre-trained language models for tackling practical extraction problems.
Main file
DIL2021_paper.pdf (2.14 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-03267497, version 1 (22-06-2021)


Cite

Clément Sage, Thibault Douzon, Alex Aussem, Véronique Eglin, Haytham Elghazel, et al.. Data-Efficient Information Extraction from Documents with Pre-Trained Language Models. ICDAR 2021 Workshop on Document Images and Language, Sep 2021, Lausanne, Switzerland. ⟨10.1007/978-3-030-86159-9_33⟩. ⟨hal-03267497⟩