MODELS OF VISUALLY GROUNDED SPEECH SIGNAL PAY ATTENTION TO NOUNS: A BILINGUAL EXPERIMENT ON ENGLISH AND JAPANESE

Abstract: We investigate the behaviour of attention in neural models of visually grounded speech trained on two languages: English and Japanese. Experimental results show that attention focuses on nouns, and that this behaviour holds for two typologically very different languages. We also draw parallels between artificial neural attention and human attention, and show that neural attention focuses on word endings, as has been theorised for human attention. Finally, we investigate how two visually grounded monolingual models can be used to perform cross-lingual speech-to-speech retrieval. For both languages, the bilingual (speech-image) corpora, enriched with part-of-speech tags and forced alignments, are distributed to the community for reproducible research.

Index Terms: grounded language learning, attention mechanism, cross-lingual speech retrieval, recurrent neural networks.
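To make the "attention focuses on nouns" claim concrete, the sketch below shows how an attention layer pools a sequence of speech-frame encodings into a single utterance embedding: frames receiving high weight (here, hypothetically, frames inside noun tokens) dominate the pooled representation. This is a minimal dot-product self-attention pooling in NumPy; the function name, parameterisation, and dimensions are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def attention_pool(frames, w):
    """Dot-product self-attention pooling over frame encodings.

    frames: (T, d) array of recurrent-encoder outputs, one row per speech frame.
    w:      (d,) learned attention vector (hypothetical parameterisation).
    Returns the attended utterance embedding (d,) and per-frame weights (T,).
    """
    scores = frames @ w                              # (T,) relevance score per frame
    scores = scores - scores.max()                   # shift for numerical stability
    alphas = np.exp(scores) / np.exp(scores).sum()   # softmax over time
    return alphas @ frames, alphas                   # weighted sum of frames, weights

# Toy example: 5 frames of 4-dimensional encodings
rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 4))
w = rng.normal(size=4)
embedding, alphas = attention_pool(frames, w)
```

Inspecting `alphas` against forced alignments is, in spirit, how one can check which part-of-speech categories the weights concentrate on.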

https://hal.archives-ouvertes.fr/hal-02013984
Contributor: William Havard
Submitted on: Monday, February 11, 2019 - 1:01:03 PM

File

1902.03052.pdf (produced by the author(s))

Identifiers

  • HAL Id: hal-02013984, version 1

Citation

William Havard, Jean-Pierre Chevrot, Laurent Besacier. MODELS OF VISUALLY GROUNDED SPEECH SIGNAL PAY ATTENTION TO NOUNS: A BILINGUAL EXPERIMENT ON ENGLISH AND JAPANESE. 2019. ⟨hal-02013984⟩
