Enhancing Translation Language Models with Word Embedding for Information Retrieval

Jibril Frej 1, 2, 3, 4 Jean-Pierre Chevallet 1, 2, 4 Didier Schwab 1, 2, 3
Abstract: In this paper, we explore the use of Word Embedding semantic resources for the Information Retrieval (IR) task. These embeddings, produced by a shallow neural network, have been shown to capture semantic similarities between words (Mikolov et al., 2013). Our goal is therefore to enhance IR Language Models by addressing the term mismatch problem. To do so, we applied the model presented in the paper Integrating and Evaluating Neural Word Embedding in Information Retrieval by Zuccon et al. (2015), which proposes to estimate the translation probability of a Translation Language Model using the cosine similarity between Word Embeddings. The results we obtained so far do not show a statistically significant improvement over the classical Language Model.
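The core idea of the approach applied here can be sketched in a few lines: the translation probability p(w|u) of a Translation Language Model is estimated by normalizing the cosine similarity between the embeddings of w and u, and a query term is then scored against a document by mixing these translation probabilities with the document's maximum-likelihood term probabilities. The toy vectors and function names below are hypothetical, for illustration only; real systems would use trained embeddings (e.g. word2vec) and smoothing.

```python
import math

# Toy 3-dimensional word vectors (hypothetical values for illustration only).
EMB = {
    "car":   [0.9, 0.1, 0.0],
    "auto":  [0.8, 0.2, 0.1],
    "fruit": [0.0, 0.9, 0.4],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def translation_prob(w, u, vocab):
    """p(w|u) estimated as cos(emb(w), emb(u)), normalized over the
    vocabulary so that the probabilities sum to one (as in Zuccon et al.)."""
    num = cosine(EMB[w], EMB[u])
    den = sum(cosine(EMB[x], EMB[u]) for x in vocab)
    return num / den

def translation_lm_score(query, doc):
    """log p(q|D) = sum over query terms w of
    log( sum over document terms u of p(w|u) * p_ml(u|D) ),
    where p_ml(u|D) is the maximum-likelihood estimate count(u, D)/|D|.
    No smoothing is applied in this minimal sketch."""
    vocab = list(EMB)
    score = 0.0
    for w in query:
        p = sum(translation_prob(w, u, vocab) * doc.count(u) / len(doc)
                for u in set(doc))
        score += math.log(p)
    return score

# The query "car" never occurs in either document, yet the embedding-based
# translation probabilities let the model prefer the semantically closer one.
print(translation_lm_score(["car"], ["auto", "auto"]))
print(translation_lm_score(["car"], ["fruit", "fruit"]))
```

This illustrates how the Translation Language Model addresses term mismatch: a document containing only "auto" still receives probability mass for the query term "car" because their embeddings are similar.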
Document type: Conference papers


Submitted on: Thursday, January 11, 2018 - 3:14:23 PM




  • HAL Id : hal-01681311, version 1
  • ARXIV : 1801.03844



Jibril Frej, Jean-Pierre Chevallet, Didier Schwab. Enhancing Translation Language Models with Word Embedding for Information Retrieval. 9ème Atelier Recherche d'Information SEmantique, Jul 2017, Caen, France. 2017. 〈hal-01681311〉


