Using relative head and hand-target features to predict intention in 3D moving-target selection

Abstract: Selection of moving targets is a common, yet complex task in human-computer interaction (HCI) and virtual reality (VR). Predicting user intention may be beneficial to address the challenges inherent in interaction techniques for moving-target selection. This article extends previous models by integrating relative head-target and hand-target features to predict intended moving targets. The features are calculated in a time window ending at roughly two-thirds of the total target selection time and evaluated using decision trees. With two targets, this model is able to predict user choice with up to ~72% accuracy on general moving-target selection tasks and up to ~78% by also including task-related target properties.
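The abstract's approach can be illustrated with a minimal sketch. Everything below is hypothetical: the paper's actual features, data, and tree parameters are not reproduced here. The sketch mimics "relative head-target and hand-target features" for two candidate targets with synthetic values and trains a decision tree to predict the intended target, as the abstract describes.

```python
# Illustrative sketch only -- synthetic stand-in for the paper's experiment.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400

# Hypothetical per-trial features: for each of the two candidate targets,
# a head-target angle, a hand-target distance, and a hand-target closing
# speed, all drawn from a standard normal distribution.
X = rng.normal(size=(n, 6))

# Synthetic ground truth: the intended target is the one the hand is
# closing on faster (columns 2 and 5), plus some noise.
y = (X[:, 2] + 0.3 * rng.normal(size=n) > X[:, 5]).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

With real motion-capture features computed over the time window the paper describes (ending at roughly two-thirds of the selection time), the same classifier structure would predict which of the two targets the user intends.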
Document type: Conference paper
IEEE Virtual Reality, Mar 2014, Minneapolis, Minnesota, United States. IEEE, pp. 51-56, 2014. DOI: 10.1109/VR.2014.6802050

https://hal.archives-ouvertes.fr/hal-01133927
Contributor: Ensam Administrator Service Account
Submitted on: Friday, March 20, 2015 - 16:26:44
Last modified on: Saturday, March 21, 2015 - 01:02:09
Document(s) archived on: Monday, June 22, 2015 - 07:19:00

File

LE2I_VR_2014_CASALLAS.pdf
Files produced by the author(s)

Citation

Juan Sebastian Casallas, James H. Oliver, Jonathan W. Kelly, Frédéric Merienne, Samir Garbaya. Using relative head and hand-target features to predict intention in 3D moving-target selection. IEEE Virtual Reality, Mar 2014, Minneapolis, Minnesota, United States. IEEE, pp. 51-56, 2014. DOI: 10.1109/VR.2014.6802050. hal-01133927.
