Learning Word Embeddings: Unsupervised Methods for Fixed-size Representations of Variable-length Speech Segments

Abstract: Fixed-length embeddings of words are very useful for a variety of tasks in speech and language processing. Here we systematically explore two methods of computing fixed-length embeddings for variable-length speech segments. We evaluate their susceptibility to phonetic and speaker-specific variability on English, a high-resource language, and Xitsonga, a low-resource language, using two evaluation metrics: ABX word discrimination and ROC-AUC on same-different phoneme n-grams. We show that a simple downsampling method supplemented with length information can outperform the variable-length input feature representation on both evaluations. Recurrent autoencoders, trained without supervision, can yield even better results at the expense of increased computational complexity.
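As a rough illustration of the downsampling baseline mentioned in the abstract, the sketch below maps a variable-length sequence of acoustic frames to a fixed-size vector by keeping a few evenly spaced frames and appending the segment length as an extra feature. The choice of frame features (MFCCs), the number of sampled frames, and the way length is encoded are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np

def downsample_embedding(frames: np.ndarray, n_samples: int = 10) -> np.ndarray:
    """Turn a (T x D) sequence of acoustic frames (e.g. MFCCs) into a
    fixed-size vector: keep n_samples frames at evenly spaced positions,
    flatten them, and append the original segment length as one extra feature.
    Note: this is a hypothetical sketch, not the paper's reference code."""
    T, D = frames.shape
    # Indices of n_samples frames spread evenly over the segment.
    idx = np.linspace(0, T - 1, num=n_samples).round().astype(int)
    sampled = frames[idx].reshape(-1)                 # (n_samples * D,)
    length_feature = np.array([T], dtype=frames.dtype)
    return np.concatenate([sampled, length_feature])  # (n_samples * D + 1,)

# Example: a 73-frame segment of 13-dimensional MFCCs -> 131-dim embedding.
segment = np.random.randn(73, 13)
embedding = downsample_embedding(segment)
print(embedding.shape)  # (131,)
```

Because every segment is mapped to the same dimensionality regardless of its duration, such embeddings can be compared directly (e.g. with cosine distance) in the ABX and same-different evaluations described above.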
Citation

Nils Holzenberger, Mingxing Du, Julien Karadayi, Rachid Riad, Emmanuel Dupoux. Learning Word Embeddings: Unsupervised Methods for Fixed-size Representations of Variable-length Speech Segments. Interspeech 2018, Sep 2018, Hyderabad, India. ⟨10.21437/Interspeech.2018-2364⟩. ⟨hal-01888708⟩