
SPEECH-COCO: 600k Visually Grounded Spoken Captions Aligned to MSCOCO Data Set

Abstract: This paper presents an augmentation of the MSCOCO dataset in which speech is added to image and text. Speech captions are generated using text-to-speech (TTS) synthesis, resulting in 616,767 spoken captions (more than 600 hours) paired with images. Disfluencies and speed perturbation are added to the signal to make it sound more natural. Each speech signal (WAV) is paired with a JSON file containing exact timecodes for each word, syllable, and phoneme in the spoken caption. Such a corpus could be used for Language and Vision (LaVi) tasks that take speech as input or produce it as output instead of text. The corpus also enables the investigation of multimodal learning schemes for unsupervised speech pattern discovery, as demonstrated by a preliminary study conducted on a subset of the corpus (10 hours, 10k spoken captions).
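As a minimal sketch of how such a JSON alignment file might be consumed, the following Python snippet reads the word-level timecodes of one caption. The file name and the field names ("words", "word", "begin", "end") are assumptions made for illustration; the exact SPEECH-COCO schema is not reproduced on this page.

    import json

    # Load the JSON alignment file paired with one spoken caption (WAV).
    # NOTE: the file name and field names below are hypothetical; consult
    # the released corpus for the actual schema.
    with open("caption_000001.json", encoding="utf-8") as f:
        alignment = json.load(f)

    # Print the assumed start/end timecode (in seconds) of each word.
    for word in alignment["words"]:
        print(f'{word["word"]}: {word["begin"]:.3f}s - {word["end"]:.3f}s')

The same pattern would apply at the syllable or phoneme level, since the abstract states that timecodes are provided at all three granularities.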
Document type: Conference papers

Cited literature: 20 references
Contributor: Laurent Besacier
Submitted on: Sunday, September 3, 2017 - 12:01:39 PM
Last modification on: Sunday, June 26, 2022 - 5:03:58 AM
Long-term archiving on: Monday, December 11, 2017 - 6:00:16 PM




  • HAL Id: hal-01580879, version 1


William N. Havard, Laurent Besacier, Olivier Rosec. SPEECH-COCO: 600k Visually Grounded Spoken Captions Aligned to MSCOCO Data Set. Grounding Language Understanding (GLU2017) Workshop, satellite of Interspeech 2017, Aug 2017, Stockholm, Sweden. ⟨hal-01580879⟩


