Spatio-temporal modeling of visual attention for stereoscopic 3D video - Archive ouverte HAL
Conference Paper, Year: 2014

Spatio-temporal modeling of visual attention for stereoscopic 3D video

Abstract

Modeling visual attention is an important stage in the optimization of today's image processing systems. Several models have already been developed for 2D static and dynamic content, but only a few attempts can be found for stereoscopic 3D content. In this work we propose a saliency model for stereoscopic 3D video. The model is based on the fusion of three maps, namely spatial, temporal, and depth. It relies on interest-point features, which are known to correlate closely with human visual attention. Moreover, since 3D perception is mostly based on monocular cues, depth information is obtained using a monocular model predicting the depth position of objects. Several fusion strategies have been evaluated in order to determine the best match for our model. Finally, our approach has been validated using state-of-the-art metrics against attention maps obtained from eye-tracking experiments, and it shows good performance.
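The abstract does not specify which fusion strategies were compared. As an illustration only, the sketch below shows one common baseline for combining feature maps, a normalize-then-weighted-sum fusion. The function names, weights, and map shapes are assumptions for the example and are not the authors' method.

```python
import numpy as np

def normalize(m):
    """Rescale a map to [0, 1]; returns zeros when the map is constant."""
    m = m.astype(np.float64)
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def fuse_saliency(spatial, temporal, depth, weights=(1/3, 1/3, 1/3)):
    """Weighted linear fusion of spatial, temporal, and depth maps.

    Hypothetical example: the three maps must share the same shape, and
    the equal weights are illustrative, not values reported in the paper.
    """
    ws, wt, wd = weights
    fused = (ws * normalize(spatial)
             + wt * normalize(temporal)
             + wd * normalize(depth))
    return normalize(fused)

# Usage with random stand-in maps for a single frame.
rng = np.random.default_rng(0)
s, t, d = (rng.random((135, 240)) for _ in range(3))
saliency = fuse_saliency(s, t, d)
```

Other fusion schemes (e.g., multiplicative or max-based pooling) follow the same pattern of normalizing each map before combining them.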
No file deposited

Dates and versions

hal-01574564, version 1 (15-08-2017)

Identifiers

Cite

Iana Iatsun, Mohamed-Chaker Larabi, Christine Fernandez-Maloigne. Spatio-temporal modeling of visual attention for stereoscopic 3D video. IEEE International Conference on Image Processing (ICIP), Oct 2014, Paris, France. pp.5397-5401, ⟨10.1109/ICIP.2014.7026092⟩. ⟨hal-01574564⟩