Conference papers

Sensor fusion for interactive real-scale modeling and simulation systems

Abstract: This paper proposes an accurate sensor fusion scheme for navigation inside a real-scale 3D model that combines audio and video signals. The audio signal from a microphone array is merged by the Minimum Variance Distortionless Response (MVDR) algorithm and processed in real time via a Hidden Markov Model (HMM) to generate translation commands through the word-to-action module of the speech processing system. The output of an optical head tracker (four IR cameras) is then analyzed by a non-linear/non-Gaussian Bayesian algorithm to estimate the orientation of the user's head. This orientation is used to redirect the user toward a new direction by applying a quaternion rotation. The outputs of the two sensors (video and audio) are combined under the sensor fusion scheme to perform continuous traveling inside the model. The highest precision for the traveling task is achieved under the sensor fusion scheme. A practical experiment shows promising results for the implementation.
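The quaternion-based redirection mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it is a generic, dependency-free example of rotating a forward vector by a unit quaternion (v' = q v q*), here about an assumed vertical z-axis, which is how head orientation would typically steer the travel direction:

```python
import math

def quat_mul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, axis, angle):
    # Build the unit quaternion q = (cos(a/2), sin(a/2) * axis_normalized)
    half = angle / 2.0
    n = math.sqrt(sum(c * c for c in axis))
    s = math.sin(half) / n
    q = (math.cos(half), s * axis[0], s * axis[1], s * axis[2])
    q_conj = (q[0], -q[1], -q[2], -q[3])
    # Embed the vector as a pure quaternion and conjugate: q * p * q^-1
    p = (0.0,) + tuple(v)
    w, x, y, z = quat_mul(quat_mul(q, p), q_conj)
    return (x, y, z)

# Redirect a forward-facing vector 90 degrees about the vertical (z) axis
new_dir = rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
print(new_dir)  # approximately (0.0, 1.0, 0.0)
```

In a navigation loop, the angle would come from the tracked head yaw estimated by the Bayesian filter, and the rotated vector would set the next translation direction.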

Contributor: Compte de Service Administrateur Ensam
Submitted on: Monday, November 4, 2013 - 4:55:34 PM
Last modified on: Monday, March 30, 2020 - 8:54:50 AM
Document(s) archived on: Friday, April 7, 2017 - 8:46:15 PM


Files produced by the author(s)


  • HAL Id: hal-00879739, version 1

Mohammad Ali Mirzaei, Jean-Rémy Chardonnet, Christian Pere, Frédéric Merienne. Sensor fusion for interactive real-scale modeling and simulation systems. 18th International Conference on Computer Games (CGAMES USA), Jul 2013, United States. pp.149-153. ⟨hal-00879739⟩


