The Usability of Speech and/or Gestures in Multi-Modal Interface Systems

Abstract: Multi-Modal Interface Systems (MMIS) have proliferated over the last few decades, since they provide a direct interface for both Human-Computer Interaction (HCI) and face-to-face communication. Our aim is to provide users without any prior 3D modeling experience with a multi-modal interface for creating 3D objects. The system also offers help throughout the drawing process and recognizes simple words and gestures to accomplish a range of modeling tasks, from simple to complex. We have developed a multi-modal interface that allows users to design objects in 3D using AutoCAD commands as well as speech and gesture. We used a microphone to collect speech input and a Leap Motion sensor to collect gesture input in real time. Two sets of experiments were conducted to investigate the usability of the system and to compare its performance with the Leap Motion versus a keyboard and mouse. Our results indicate that performing a task using speech is perceived as exhausting when there is no shared vocabulary between human and machine, and that the usability of traditional input devices supersedes that of speech and gestures. Only a small proportion of participants (less than 7% in our experiments) were able to carry out the tasks with appropriate precision.
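
To make the interaction concrete, below is a minimal, illustrative sketch (not the authors' code) of how a recognized speech keyword and a concurrent gesture event might be fused into a single modeling command. The GestureEvent structure, the fuse helper, and the keyword table are assumptions made for illustration; only the command names (BOX, SPHERE, EXTRUDE) are standard AutoCAD commands.

    # Hedged sketch: fusing a spoken keyword with a gesture into a command.
    # All names and thresholds here are illustrative assumptions, not the
    # system described in the paper.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class GestureEvent:
        # Hypothetical gesture descriptor; a real system would derive this
        # from Leap Motion frames (hand and palm positions from the sensor).
        kind: str        # e.g. "pinch", "swipe" (assumed labels)
        position: tuple  # sensor-space coordinates (x, y, z)

    # Assumed shared vocabulary: spoken keyword -> AutoCAD command name.
    SPEECH_TO_COMMAND = {
        "box": "_BOX",
        "sphere": "_SPHERE",
        "extrude": "_EXTRUDE",
    }

    def fuse(speech_token: str, gesture: GestureEvent) -> Optional[str]:
        """Combine a spoken keyword with a gesture into one command line.

        The spoken word selects the modeling operation; the gesture supplies
        the spatial parameter (a placement point) that speech alone lacks.
        Tokens outside the shared vocabulary produce no action.
        """
        command = SPEECH_TO_COMMAND.get(speech_token.lower())
        if command is None:
            return None
        x, y, z = gesture.position
        return f"{command} {x:.1f},{y:.1f},{z:.1f}"

    # Example: the user says "box" while pinching at a point in space.
    print(fuse("box", GestureEvent("pinch", (10.0, 25.0, 0.0))))
    # -> _BOX 10.0,25.0,0.0

Keeping the speech vocabulary in an explicit lookup table mirrors the paper's central observation: utterances outside the shared vocabulary between user and system produce no action, which is what makes speech-driven interaction feel exhausting.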
Document type:
Conference paper
International Conference on Computer and Automation Engineering (ICCAE 2017), Feb 2017, Sydney, Australia. 2017 9th International Conference on Computer and Automation Engineering, pp. 1-5, 2017

Cited literature [11 references]

https://hal.archives-ouvertes.fr/hal-01502555
Contributor: Compte de Service Administrateur Ensam
Submitted on: Wednesday, April 5, 2017 - 17:01:14
Last modified on: Tuesday, October 10, 2017 - 13:50:07
Document(s) archived on: Thursday, July 6, 2017 - 13:53:24

File

LE2I_ICCAE_2017_CHARDONNET.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01502555, version 1
  • ENSAM: http://hdl.handle.net/10985/11682

Collections

Citation

Farzana Alibay, Manolya Kavakli, Jean-Rémy Chardonnet, Muhammad Zeeshan Baig. The Usability of Speech and/or Gestures in Multi-Modal Interface Systems. International Conference on Computer and Automation Engineering (ICCAE 2017), Feb 2017, Sydney, Australia. 2017 9th International Conference on Computer and Automation Engineering, pp. 1-5, 2017. ⟨hal-01502555⟩

Metrics

Record views: 80
Document downloads: 6