Spatio-temporal modeling of visual attention for stereoscopic 3D video

Abstract: Modeling visual attention is an important stage in the optimization of image-processing systems. Several models have already been developed for 2D static and dynamic content, but only a few attempts exist for stereoscopic 3D content. In this work, we propose a saliency model for stereoscopic 3D video. The model is based on the fusion of three maps: spatial, temporal, and depth. It relies on interest-point features, known to be closely related to human visual attention. Moreover, since 3D perception is mostly based on monocular cues, depth information is obtained using a monocular model that predicts the depth position of objects. Several fusion strategies were evaluated to determine the best match for our model. Finally, our approach was validated using state-of-the-art metrics against attention maps obtained from eye-tracking experiments, and showed good performance.
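The abstract describes fusing spatial, temporal, and depth conspicuity maps into a single saliency map, though the specific fusion strategies tested are not detailed here. A minimal sketch of one common baseline, weighted linear fusion with per-map normalization (function names and weights below are illustrative assumptions, not the paper's method), could look like:

```python
import numpy as np

def normalize(m):
    """Rescale a map to [0, 1]; a constant map becomes all zeros."""
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def fuse_saliency(spatial, temporal, depth, weights=(1.0, 1.0, 1.0)):
    """Hypothetical weighted linear fusion of three conspicuity maps.

    Each map is normalized before combination so that no single cue
    dominates purely because of its dynamic range.
    """
    maps = [normalize(m) for m in (spatial, temporal, depth)]
    fused = sum(w * m for w, m in zip(weights, maps))
    return normalize(fused)
```

Other strategies evaluated in the literature include max-pooling across maps and multiplicative fusion; the weighted sum above is only the simplest instance.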
Document type: Conference paper
IEEE International Conference on Image Processing (ICIP), Oct 2014, Paris, France. pp.5397-5401, 〈10.1109/ICIP.2014.7026092〉

https://hal.archives-ouvertes.fr/hal-01574564
Contributor: Mohamed-Chaker Larabi
Submitted on: Tuesday, August 15, 2017 - 16:54:28
Last modified on: Monday, September 25, 2017 - 16:25:21

Citation

Iana Iatsun, Mohamed-Chaker Larabi, Christine Fernandez-Maloigne. Spatio-temporal modeling of visual attention for stereoscopic 3D video. IEEE International Conference on Image Processing (ICIP), Oct 2014, Paris, France. pp.5397-5401, 〈10.1109/ICIP.2014.7026092〉. 〈hal-01574564〉
