Journal article in Pattern Recognition, Year: 2017

Contribution of recurrent connectionist language models in improving LSTM-based Arabic text recognition in videos

Abstract

Unconstrained text recognition in videos is a very challenging task that is beginning to draw the attention of the OCR community. For Arabic video content, however, the problem has received far less attention than for Latin scripts. This work presents our latest contribution to this task: introducing recurrent connectionist language modeling to improve Long Short-Term Memory (LSTM) based Arabic text recognition in videos. For an LSTM OCR system that already yields high recognition rates, introducing language models can easily deteriorate results if not done properly. In this work, we focus on two main factors to reach better improvements. First, we propose Recurrent Neural Network (RNN) language models that are able to capture long-range linguistic dependencies; we use both simple RNN models and models learned jointly with a Maximum Entropy language model. Second, for the decoding scheme, we are not limited to an n-best rescoring of the OCR hypotheses. Instead, we propose a modified beam search algorithm that uses OCR and language model probabilities in parallel at each decoding time step. We introduce a set of hyper-parameters into the algorithm to boost recognition results and to control the decoding time. The method is applied to Arabic text recognition in TV broadcasts. We conduct an extensive evaluation of the method and study the impact of the language models and the decoding parameters. Results show an improvement of 16% in word recognition rate (WRR) over the baseline that uses only the OCR responses, while keeping a reasonable response time. Moreover, the proposed recurrent connectionist models outperform frequency-based models by more than 4% in WRR. The final recognition scheme provides outstanding results that outperform a well-known commercial OCR engine by more than 36% in WRR.
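The abstract describes a decoding scheme in which OCR (LSTM) character probabilities and recurrent language model scores are combined at every time step of a beam search rather than in a post-hoc n-best rescoring. The following is a minimal illustrative sketch of that general idea, not the authors' implementation; the function names, the `lm_weight` interpolation parameter, and the fixed beam width are assumptions introduced only for illustration (the paper defines its own hyper-parameters).

```python
# Hedged sketch: beam search that fuses OCR and RNN language model scores per time step.
# All names and the scoring formula below are illustrative assumptions, not the paper's exact method.
import math
from typing import Callable, List, Sequence, Tuple

def beam_search_with_lm(
    ocr_probs: Sequence[Sequence[float]],                 # per-time-step character distributions from the OCR
    lm_score: Callable[[Tuple[int, ...], int], float],    # hypothetical RNN LM: log P(next_char | prefix)
    beam_width: int = 8,                                  # controls decoding time vs. accuracy
    lm_weight: float = 0.5,                               # relative weight of the language model
) -> Tuple[int, ...]:
    """Return the best character sequence under a weighted OCR + LM log-score."""
    beams: List[Tuple[Tuple[int, ...], float]] = [((), 0.0)]  # (prefix, cumulative log-score)
    for dist in ocr_probs:
        candidates: List[Tuple[Tuple[int, ...], float]] = []
        for prefix, score in beams:
            for char_id, p in enumerate(dist):
                if p <= 0.0:
                    continue
                # Combine OCR evidence and LM evidence in parallel at this decoding step.
                new_score = score + math.log(p) + lm_weight * lm_score(prefix, char_id)
                candidates.append((prefix + (char_id,), new_score))
        # Prune to the beam_width best hypotheses before the next time step.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]
```

In this kind of scheme, the beam width and the language model weight play the role of the hyper-parameters mentioned in the abstract: widening the beam or raising the LM weight can improve recognition at the cost of longer decoding time.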
File not deposited

Dates and versions

hal-01413629, version 1 (10-12-2016)

Identifiers

Cite

Sonia Yousfi, Sid-Ahmed Berrani, Christophe Garcia. Contribution of recurrent connectionist language models in improving LSTM-based Arabic text recognition in videos. Pattern Recognition, 2017, 64, pp. 245-254. ⟨10.1016/j.patcog.2016.11.011⟩. ⟨hal-01413629⟩