The Sweet-Home speech and multimodal corpus for home automation interaction

Abstract: Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home through Smart Homes and Home Automation. However, many studies do not include tests in real settings, because data collection in this domain is expensive and challenging and because few data sets are available. The SWEET-HOME multimodal corpus is a dataset recorded in realistic conditions in DOMUS, a Smart Home fully equipped with microphones and home automation sensors, in which participants performed Activities of Daily Living (ADL). The corpus consists of a multimodal subset, a French home automation speech subset recorded in distant speech conditions, and two interaction subsets, the first recorded by 16 persons without disabilities and the second by 6 seniors and 5 visually impaired people. This corpus has been used in studies on ADL recognition, context-aware interaction, and distant speech recognition applied to voice-controlled home automation.
Cited literature: 21 references
Contributor: Michel Vacher
Submitted on: Wednesday, June 4, 2014 - 2:11:09 PM
Last modification on: Friday, October 25, 2019 - 1:24:20 AM
Long-term archiving on: Thursday, September 4, 2014 - 10:40:43 AM


Files produced by the author(s)


  • HAL Id: hal-00953006, version 1



Michel Vacher, Benjamin Lecouteux, Pedro Chahuara, François Portet, Brigitte Meillon, et al.. The Sweet-Home speech and multimodal corpus for home automation interaction. The 9th edition of the Language Resources and Evaluation Conference (LREC), May 2014, Reykjavik, Iceland. pp.4499-4506. ⟨hal-00953006⟩