Conference papers

A new step to optimize sound localization adaptation through the use of vision

Abstract: Following a previous experiment, this paper presents our ongoing effort to improve a rapid multisensory adaptation training method for non-individual HRTFs. Motivations to modify our former attempt at audio-visuo-proprioceptive training are first presented. The paper focuses on two aspects: the inclusion of a non-individual HRTF pre-selection phase, and the design of relevant visual cues consistent with the auditory cues. The design of an experiment, planned to investigate the effect of vision on adaptation to non-individual HRTFs, is then detailed. We conclude with expected results, future research directions, and a new context of use: the rehabilitation of patients suffering from spatial cognition disorders.
Contributor: Tifanie Bouchara
Submitted on: Wednesday, August 26, 2020 - 2:37:43 PM
Last modification on: Friday, August 5, 2022 - 2:54:01 PM


  • HAL Id: hal-02922711, version 1


Tristan-Gaël Bara, Alma Guilbert, Tifanie Bouchara. A new step to optimize sound localization adaptation through the use of vision. AES International Conference on Audio for Virtual and Augmented Reality, Aug 2020, Seattle, United States. ⟨hal-02922711⟩
