Conference papers

Data-Efficient Information Extraction from Documents with Pre-Trained Language Models

Clément Sage 1,2, Thibault Douzon 1, Alex Aussem 2, Véronique Eglin 1, Haytham Elghazel 2, Stefan Duffner 1, Christophe Garcia 1, Jérémy Espinas
1 imagine - Extraction de Caractéristiques et Identification
LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
2 DM2L - Data Mining and Machine Learning
LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
Abstract: As with many text understanding and generation tasks, pre-trained language models have emerged as a powerful approach for extracting information from business documents. However, their performance has not been properly studied in data-constrained settings, which are often encountered in industrial applications. In this paper, we show that LayoutLM, a pre-trained model recently proposed for encoding 2D documents, exhibits high sample efficiency when fine-tuned on public and real-world Information Extraction (IE) datasets. Indeed, LayoutLM reaches more than 80% of its full performance with as few as 32 documents for fine-tuning. When compared with a strong baseline learning IE from scratch, the pre-trained model needs 4 to 30 times fewer annotated documents in the toughest data conditions. Finally, LayoutLM performs better on the real-world dataset when it has first been fine-tuned on the full public dataset, indicating valuable knowledge transfer abilities. We therefore advocate the use of pre-trained language models for tackling practical extraction problems.

https://hal.archives-ouvertes.fr/hal-03267497
Contributor: Clément Sage
Submitted on: Tuesday, June 22, 2021 - 2:55:34 PM
Last modification on: Monday, March 21, 2022 - 10:30:07 AM
Long-term archiving on: Thursday, September 23, 2021 - 6:44:28 PM

File

DIL2021_paper.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-03267497, version 1

Citation

Clément Sage, Thibault Douzon, Alex Aussem, Véronique Eglin, Haytham Elghazel, et al.. Data-Efficient Information Extraction from Documents with Pre-Trained Language Models. ICDAR 2021 Workshop on Document Images and Language, Sep 2021, Lausanne, Switzerland. ⟨hal-03267497⟩

Metrics

Record views: 301
File downloads: 585