Conference Papers, Year: 2016

Quantificational features in distributional word representations

Abstract

Do distributional word representations encode the linguistic regularities that theories of meaning argue they should encode? We address this question in the case of the logical properties (monotonicity, force) of quantificational words such as everything (in the object domain) and always (in the time domain). Using the vector offset approach to solving word analogies, we find that the skip-gram model of distributional semantics behaves in a way that is remarkably consistent with encoding these features in some domains, with accuracy approaching 100%, especially with medium-sized context windows. Accuracy in other domains was less impressive. We compare the performance of the model to the behavior of human participants, and find that humans performed well even where the models struggled.
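The abstract refers to the vector offset method for word analogies. The sketch below illustrates how such an analogy query can be run over word vectors; the toy vocabulary, random vectors, and word pairs are illustrative assumptions, not the paper's actual evaluation setup.

```python
import numpy as np

# Toy vocabulary of (hypothetical) word vectors; in the paper the vectors
# come from a skip-gram model trained on a large corpus.
rng = np.random.default_rng(0)
vocab = ["some", "something", "every", "everything", "no", "nothing",
         "sometimes", "always", "never"]
vectors = {w: rng.standard_normal(300) for w in vocab}

def normalize(v):
    return v / np.linalg.norm(v)

def analogy(a, b, c):
    """Vector offset method: answer 'a is to b as c is to ?' by finding the
    word whose vector is closest (by cosine) to vec(b) - vec(a) + vec(c)."""
    target = normalize(vectors[b] - vectors[a] + vectors[c])
    candidates = [w for w in vocab if w not in {a, b, c}]
    return max(candidates, key=lambda w: normalize(vectors[w]) @ target)

# Example query in the object domain: some : something :: every : ?
# With trained skip-gram vectors the expected answer is "everything";
# with the random vectors above the output is of course meaningless.
print(analogy("some", "something", "every"))
```

With real pre-trained embeddings, the same offset computation can be issued through e.g. gensim's KeyedVectors.most_similar(positive=["something", "every"], negative=["some"]).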
Main file: Linzen_dupoux_spector_2016.pdf (233.19 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03877036, version 1 (29-11-2022)

Identifiers

DOI: 10.18653/v1/S16-2001
HAL Id: hal-03877036

Cite

Tal Linzen, Emmanuel Dupoux, Benjamin Spector. Quantificational features in distributional word representations. Fifth Joint Conference on Lexical and Computational Semantics, Aug 2016, Berlin, Germany. ⟨10.18653/v1/S16-2001⟩. ⟨hal-03877036⟩