Conference paper, Year: 2022

MANTa: Efficient Gradient-Based Tokenization for Robust End-to-End Language Modeling

Abstract

Static subword tokenization algorithms have been an essential component of recent works on language modeling. However, their static nature results in significant weaknesses that degrade the models' downstream performance and robustness. In this work, we propose MANTa, a Module for Adaptive Neural TokenizAtion. MANTa is a differentiable tokenizer trained end-to-end with the language model. The resulting system offers a trade-off between the expressiveness of byte-level models and the speed of models trained using subword tokenization. In addition, our tokenizer is highly explainable since it produces an explicit segmentation of sequences into blocks. We evaluate our pretrained model on several English datasets from different domains as well as on synthetic noise. We find that MANTa improves robustness to character perturbations and out-of-domain data. We then show that MANTa performs comparably to other models on the general-domain GLUE benchmark. Finally, we show that it is considerably faster than strictly byte-level models.
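As a purely illustrative aside, the sketch below shows one way a differentiable byte-to-block segmentation module of this kind could look in PyTorch: per-byte boundary probabilities are turned into soft block assignments, and byte embeddings are pooled into a shorter block sequence that a language model can consume, so gradients flow back into the segmentation. Every name, dimension, and modeling choice here is an assumption made for illustration; this is not MANTa's actual implementation (see the paper PDF below for that).

```python
# Toy sketch of a differentiable "byte -> block" pooling module, in the spirit
# of gradient-based segmentation described in the abstract. All choices below
# (kernel, dimensions, number of block slots) are illustrative assumptions.
import torch
import torch.nn as nn

class SoftBlockPooler(nn.Module):
    def __init__(self, byte_vocab=256, dim=64, max_blocks=32):
        super().__init__()
        self.embed = nn.Embedding(byte_vocab, dim)   # byte embeddings
        self.frontier = nn.Linear(dim, 1)            # per-byte boundary logit
        self.max_blocks = max_blocks

    def forward(self, byte_ids):                     # byte_ids: (batch, seq_len)
        x = self.embed(byte_ids)                                     # (B, L, D)
        p_frontier = torch.sigmoid(self.frontier(x)).squeeze(-1)     # (B, L)
        # Expected (soft) block index of each byte: cumulative boundary mass.
        block_pos = torch.cumsum(p_frontier, dim=-1)                 # (B, L)
        # Soft assignment of each byte to each block slot, via a kernel
        # centered on its expected block index (kept differentiable).
        slots = torch.arange(self.max_blocks, device=byte_ids.device).float()
        dist = (block_pos.unsqueeze(-1) - slots) ** 2                # (B, L, K)
        assign = torch.softmax(-dist, dim=-1)                        # (B, L, K)
        # Pool byte embeddings into block embeddings (weighted average per slot).
        weights = assign / (assign.sum(dim=1, keepdim=True) + 1e-6)
        blocks = torch.einsum("blk,bld->bkd", weights, x)            # (B, K, D)
        return blocks, p_frontier

# Usage: the pooled block sequence (much shorter than the byte sequence) would
# be fed to a standard language model trained end-to-end with the pooler.
pooler = SoftBlockPooler()
byte_ids = torch.randint(0, 256, (2, 128))
blocks, boundaries = pooler(byte_ids)
print(blocks.shape)  # torch.Size([2, 32, 64])
```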
Main file: manta.pdf (1.7 MB)

Dates and versions

hal-03844262, version 1 (08-11-2022)

Identifiers

  • HAL Id: hal-03844262, version 1

Cite

Nathan Godey, Roman Castagné, Eric Villemonte de La Clergerie, Benoît Sagot. MANTa: Efficient Gradient-Based Tokenization for Robust End-to-End Language Modeling. EMNLP 2022 - The 2022 Conference on Empirical Methods in Natural Language Processing, Dec 2022, Abu Dhabi, United Arab Emirates. ⟨hal-03844262⟩
