SPEECH-COCO: 600k Visually Grounded Spoken Captions Aligned to MSCOCO Data Set

Abstract: This paper presents an augmentation of the MSCOCO dataset in which speech is added to the existing image and text modalities. Speech captions were generated using text-to-speech (TTS) synthesis, resulting in 616,767 spoken captions (more than 600 h) paired with images. Disfluencies and speed perturbation were added to the signal to make it sound more natural. Each speech signal (WAV) is paired with a JSON file containing exact timecodes for every word, syllable, and phoneme in the spoken caption. Such a corpus could be used for Language and Vision (LaVi) tasks that take speech as input or output instead of text. The corpus also makes it possible to investigate multimodal learning schemes for unsupervised speech pattern discovery, as demonstrated by a preliminary study conducted on a subset of the corpus (10 h, 10k spoken captions).
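As an illustration of the WAV/JSON pairing described in the abstract, below is a minimal Python sketch of how one might load a spoken caption and its alignment file. The file names and the JSON field names ("words", "word", "begin", "end") are assumptions made for illustration only; the actual schema is documented with the corpus itself.

import json
import wave

def load_caption(wav_path: str, json_path: str):
    """Load a spoken caption (WAV) and its alignment metadata (JSON)."""
    # Read the audio duration from the WAV header.
    with wave.open(wav_path, "rb") as wav:
        duration = wav.getnframes() / wav.getframerate()
    # Load the alignment file; its exact schema is an assumption here.
    with open(json_path, encoding="utf-8") as f:
        alignment = json.load(f)
    return duration, alignment

duration, alignment = load_caption("000042.wav", "000042.json")  # hypothetical file names
print(f"caption duration: {duration:.2f} s")
for word in alignment.get("words", []):              # "words" key assumed
    print(word["word"], word["begin"], word["end"])  # keys assumed

Word-level timecodes such as these can serve, for example, as ground truth when evaluating unsupervised spoken-term discovery on the corpus.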
Document type:
Conference papers


https://hal.archives-ouvertes.fr/hal-01580879
Contributor: Laurent Besacier
Submitted on: Sunday, September 3, 2017 - 12:01:39 PM
Last modification on: Thursday, April 4, 2019 - 10:18:05 AM
Archived on: Monday, December 11, 2017 - 6:00:16 PM

File: GLU2017.pdf (produced by the authors)

Identifiers

  • HAL Id: hal-01580879, version 1

Citation

William Havard, Laurent Besacier, Olivier Rosec. SPEECH-COCO: 600k Visually Grounded Spoken Captions Aligned to MSCOCO Data Set. Grounding Language Understanding GLU2017 Workshop (Satellite of Interspeech 2017), Aug 2017, Stockholm, Sweden. ⟨hal-01580879⟩
