Modelling spatio-temporal saliency to predict gaze direction for short videos - Archive ouverte HAL
Journal article, International Journal of Computer Vision, 2009

Modelling spatio-temporal saliency to predict gaze direction for short videos

Abstract

This paper presents a spatio-temporal saliency model that predicts eye movements during free viewing of videos. The model is inspired by the biology of the first stages of the human visual system. It extracts two signals from the video stream corresponding to the two main outputs of the retina: parvocellular and magnocellular. Both signals are then split into elementary feature maps by cortical-like filters. These feature maps are used to form two saliency maps: a static one and a dynamic one, which are then fused into a spatio-temporal saliency map. The model is evaluated by comparing the salient areas of each frame predicted by the spatio-temporal saliency map to the eye positions of different subjects during a free video viewing experiment with a large database (17,000 frames). In parallel, the static and dynamic pathways are analyzed to understand which features are more or less salient and for which types of video the model is a good or a poor predictor of eye movements.
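The pipeline the abstract describes (a static map from cortical-like oriented filtering, a dynamic map from motion, and a fusion of the two into a spatio-temporal map) can be sketched roughly as follows. This is a simplified stand-in, not the paper's implementation: the small Gabor filter bank, the frame-difference motion proxy, and the additive-plus-multiplicative fusion rule are all illustrative assumptions.

```python
import numpy as np

def _normalize(m, eps=1e-9):
    """Rescale a map to [0, 1]."""
    return (m - m.min()) / (m.max() - m.min() + eps)

def _fft_convolve(img, kernel):
    """Circular 2-D convolution via FFT (sufficient for a sketch)."""
    K = np.fft.fft2(kernel, s=img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * K))

def static_saliency(frame, n_orient=4, sigma=2.0, wavelength=4.0):
    """Static map: summed energy of oriented Gabor-like filters,
    a crude stand-in for the paper's cortical-like filter bank."""
    y, x = np.mgrid[-8:9, -8:9].astype(float)
    acc = np.zeros_like(frame, dtype=float)
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        xr = x * np.cos(theta) + y * np.sin(theta)
        env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        kernel = env * np.cos(2 * np.pi * xr / wavelength)
        acc += np.abs(_fft_convolve(frame, kernel))
    return _normalize(acc)

def dynamic_saliency(prev_frame, frame):
    """Dynamic map: absolute temporal difference as a motion proxy
    (the paper uses retina-inspired motion processing instead)."""
    return _normalize(np.abs(frame - prev_frame))

def fuse(sal_static, sal_dynamic):
    """Illustrative fusion: additive terms plus a multiplicative
    reinforcement where static and dynamic saliency coincide."""
    return _normalize(sal_static + sal_dynamic + sal_static * sal_dynamic)

# Usage: a synthetic frame pair where a patch brightens between frames.
rng = np.random.default_rng(0)
f0 = rng.random((64, 64))
f1 = f0.copy()
f1[20:30, 20:30] += 0.5
saliency_map = fuse(static_saliency(f1), dynamic_saliency(f0, f1))
```

The fusion step here simply rewards locations that both pathways flag; the paper's actual fusion weights the two maps according to their content, so treat this rule as a placeholder.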
Main file
09_IJCV_maratf.pdf (1.08 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-00368496 , version 1 (16-03-2009)

Identifiers

Cite

Sophie Marat, Tien Ho Phuoc, Lionel Granjon, Nathalie Guyader, Denis Pellerin, et al.. Modelling spatio-temporal saliency to predict gaze direction for short videos. International Journal of Computer Vision, 2009, 82 (3), pp.231-243. ⟨10.1007/s11263-009-0215-3⟩. ⟨hal-00368496⟩
541 views
848 downloads
