Journal articles

Contribution of recurrent connectionist language models in improving LSTM-based Arabic text recognition in videos

Sonia Yousfi 1,2, Sid-Ahmed Berrani 1, Christophe Garcia 2
2 imagine - Extraction de Caractéristiques et Identification
LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
Abstract: Unconstrained text recognition in videos is a very challenging task that is beginning to draw the attention of the OCR community. For Arabic video content, however, the problem has received much less attention than for the Latin script. This work presents our latest contribution to this task: introducing recurrent connectionist language modeling to improve Long Short-Term Memory (LSTM) based Arabic text recognition in videos. For an LSTM OCR system that already yields high recognition rates, improperly introduced language models can easily deteriorate results. In this work, we focus on two main factors to reach better improvements. First, we propose Recurrent Neural Network (RNN) language models, which are able to capture long-range linguistic dependencies. We use both simple RNN models and models trained jointly with a Maximum Entropy language model. Second, for the decoding scheme, we do not limit ourselves to an n-best rescoring of the OCR hypotheses. Instead, we propose a modified beam search algorithm that uses both OCR and language model probabilities in parallel at each decoding time-step. We introduce a set of hyper-parameters into the algorithm in order to boost recognition results and to control the decoding time. The method is applied to Arabic text recognition in TV broadcasts. We conduct an extensive evaluation of the method and study the impact of the language models and the decoding parameters. Results show an improvement of 16% in terms of word recognition rate (WRR) over the baseline that uses only the OCR responses, while keeping a reasonable response time. Moreover, the proposed recurrent connectionist models outperform frequency-based models by more than 4% in terms of WRR. The final recognition scheme provides outstanding results, outperforming a well-known commercial OCR engine by more than 36% in terms of WRR.
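The core idea of the abstract — scoring OCR and language-model probabilities jointly at every decoding time-step, rather than rescoring a finished n-best list — can be illustrated with a minimal beam search sketch. This is not the paper's actual implementation: the per-frame OCR posteriors, the `lm_prob` scorer interface, and the hyper-parameters `beam_width` and `lm_weight` are illustrative assumptions standing in for the LSTM outputs, the RNN language model, and the decoding parameters described in the article.

```python
import math


def beam_search(ocr_probs, lm_prob, alphabet, beam_width=3, lm_weight=0.5):
    """Jointly decode with OCR and LM scores at each time-step.

    ocr_probs:  list of dicts, one per frame, mapping char -> P_ocr(char | frame)
                (stand-in for the LSTM OCR posteriors)
    lm_prob:    function(prefix, char) -> P_lm(char | prefix)
                (stand-in for the RNN language model)
    beam_width: number of hypotheses kept after each step
    lm_weight:  relative weight of the LM score (illustrative hyper-parameter)
    """
    beams = [("", 0.0)]  # (hypothesis prefix, cumulative log-score)
    for frame in ocr_probs:
        candidates = []
        for prefix, score in beams:
            for ch in alphabet:
                p_ocr = frame.get(ch, 1e-12)          # floor avoids log(0)
                p_lm = lm_prob(prefix, ch)
                # OCR and LM probabilities combined in parallel, per step
                candidates.append(
                    (prefix + ch,
                     score + math.log(p_ocr) + lm_weight * math.log(p_lm))
                )
        # prune to the beam width before moving to the next frame
        candidates.sort(key=lambda x: x[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]
```

The point of the per-step combination is visible with a toy LM: if the OCR posteriors are ambiguous at the second frame, an LM that strongly prefers "b" after "a" can flip the decoded string, something a pure n-best rescoring of OCR hypotheses might miss if the beam had already discarded the LM-preferred path.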
Contributor: Christophe Garcia
Submitted on: Saturday, December 10, 2016 - 11:48:20 AM
Last modification on: Tuesday, February 2, 2021 - 2:26:02 PM



Sonia Yousfi, Sid-Ahmed Berrani, Christophe Garcia. Contribution of recurrent connectionist language models in improving LSTM-based Arabic text recognition in videos. Pattern Recognition, Elsevier, 2017, 64, pp. 245-254. ⟨10.1016/j.patcog.2016.11.011⟩. ⟨hal-01413629⟩


