The Sweet-Home speech and multimodal corpus for home automation interaction - Archive ouverte HAL
Conference paper, Year: 2014


Abstract

Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home through Smart Homes and Home Automation. However, many studies do not include tests in real settings, because data collection in this domain is expensive and challenging and because few data sets are available. The SWEET-HOME multimodal corpus is a dataset recorded in realistic conditions in DOMUS, a Smart Home fully equipped with microphones and home automation sensors, in which participants performed Activities of Daily Living (ADL). The corpus comprises a multimodal subset, a French home automation speech subset recorded in distant-speech conditions, and two interaction subsets, the first recorded by 16 persons without disabilities and the second by 6 seniors and 5 visually impaired people. The corpus has been used in studies on ADL recognition, context-aware interaction, and distant speech recognition applied to voice-controlled home automation.
Main file
2014_LREC_Vacher_final.pdf (166.95 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00953006, version 1 (04-06-2014)

Identifiers

  • HAL Id: hal-00953006, version 1

Cite

Michel Vacher, Benjamin Lecouteux, Pedro Chahuara, François Portet, Brigitte Meillon, et al.. The Sweet-Home speech and multimodal corpus for home automation interaction. The 9th edition of the Language Resources and Evaluation Conference (LREC), May 2014, Reykjavik, Iceland. pp.4499-4506. ⟨hal-00953006⟩
543 Views
389 Downloads
