The Sweet-Home speech and multimodal corpus for home automation interaction

Abstract: Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home through Smart Homes and Home Automation. However, many studies do not include tests in real settings, because data collection in this domain is expensive and challenging and because few data sets are available. The SWEET-HOME multimodal corpus is a dataset recorded in realistic conditions in DOMUS, a fully equipped Smart Home with microphones and home automation sensors, in which participants performed Activities of Daily Living (ADL). This corpus is made of a multimodal subset, a French home automation speech subset recorded in distant speech conditions, and two interaction subsets, the first recorded by 16 persons without disabilities and the second by 6 seniors and 5 visually impaired people. This corpus was used in studies related to ADL recognition, context-aware interaction, and distant speech recognition applied to home automation controlled through voice.

Cited literature: 21 references

https://hal.archives-ouvertes.fr/hal-00953006
Contributor: Michel Vacher
Submitted on: Wednesday, June 4, 2014 - 2:11:09 PM
Last modification on: Monday, February 11, 2019 - 4:36:02 PM
Document(s) archived on: Thursday, September 4, 2014 - 10:40:43 AM

File

2014_LREC_Vacher_final.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-00953006, version 1

Citation

Michel Vacher, Benjamin Lecouteux, Pedro Chahuara, François Portet, Brigitte Meillon, et al.. The Sweet-Home speech and multimodal corpus for home automation interaction. The 9th edition of the Language Resources and Evaluation Conference (LREC), May 2014, Reykjavik, Iceland. pp.4499-4506. ⟨hal-00953006⟩
