Continuous improvement of a document treatment chain using reinforcement learning

Esther Nicart (1, 2), Bruno Zanuttini (2), Bruno Grilhères (1), Patrick Giroux (3, 1)
(2) Equipe MAD - Laboratoire GREYC - UMR 6072
GREYC - Groupe de Recherche en Informatique, Image, Automatique et Instrumentation de Caen
Abstract: We tackle the problem of continuously improving a treatment chain which extracts events from open-source documents. We use the corrections made by human operators to let the treatment chain learn from its errors and improve itself over time. We apply reinforcement learning (specifically Q-learning) to this problem, where the actions are the services of an information-extraction treatment chain. The objective is to use the operators' feedback so that the system learns the ideal configuration of the services (order, gazetteers, and extraction rules) according to the characteristics of the documents treated (language, type, etc.). We carry out first experiments with automatically generated feedback data, and the results are encouraging.
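This is not the authors' implementation, but a minimal sketch of how such a Q-learning loop over chain services might look, assuming a hypothetical list of services (SERVICES), a state made of document characteristics plus the services already applied, and a toy simulated_feedback function standing in for the operators' corrections:

```python
import random
from collections import defaultdict

# Hypothetical services of the treatment chain; each is an RL action.
SERVICES = ["detect_language", "apply_gazetteer", "extract_entities",
            "extract_events", "STOP"]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

# Tabular Q-values: (state, service) -> estimated long-term reward.
Q = defaultdict(float)

def choose_service(state):
    """Epsilon-greedy choice of the next service to run on the document."""
    if random.random() < EPSILON:
        return random.choice(SERVICES)
    return max(SERVICES, key=lambda s: Q[(state, s)])

def simulated_feedback(doc_type, service, already_run):
    """Toy stand-in for the operators' corrections: rewards extracting
    events once entities are available, penalises re-running a service."""
    if service in already_run:
        return -0.5
    if service == "extract_events" and "extract_entities" in already_run:
        return 1.0
    return 0.0

def treat_document(doc_type, max_steps=10):
    """Run one document through the chain, updating Q after every service."""
    applied = frozenset()
    state = (doc_type, applied)            # document characteristics + history
    for _ in range(max_steps):
        service = choose_service(state)
        if service == "STOP":
            break
        reward = simulated_feedback(doc_type, service, applied)
        applied = applied | {service}
        next_state = (doc_type, applied)
        best_next = max(Q[(next_state, s)] for s in SERVICES)
        Q[(state, service)] += ALPHA * (reward + GAMMA * best_next
                                        - Q[(state, service)])
        state = next_state

# Learn from a stream of (simulated) documents of different types.
for _ in range(5000):
    treat_document(random.choice(["news_article", "forum_post"]))
```

In the setting described in the abstract, the simulated reward would be replaced by rewards derived from the human operators' corrections, and the state would carry the actual document characteristics (language, type, etc.).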


https://hal.archives-ouvertes.fr/hal-01165692
Submitted on: Tuesday, September 8, 2015 - 4:24:42 PM

File: IC2015v2-1.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-01165692, version 2

Citation

Esther Nicart, Bruno Zanuttini, Bruno Grilhères, Patrick Giroux. Continuous improvement of a document treatment chain using reinforcement learning. IC2015, Jun 2015, Rennes, France. ⟨hal-01165692v2⟩
