Spatio-temporal Attention Model for Video Content Analysis - Archive ouverte HAL
Conference paper, Year: 2005

Spatio-temporal Attention Model for Video Content Analysis

Abstract

This paper presents a new model of human attention that extracts salient areas from video frames. Since automatic understanding of the semantic content of video is still far from being achieved, the attention model instead mimics the focus of the human visual system. Most existing approaches extract image saliency for use in various applications, but they are not compared against human perception. The model described here fuses a static model inspired by the human visual system with a model of moving-object detection. The static model proceeds in two steps: a "retinal" filtering followed by a "cortical" decomposition. Moving-object detection is carried out by compensating for camera motion. The attention model's output is then compared with human judgment on different videos. A psychophysical experiment is proposed to compare the model with human visual perception and to validate it. The experimental results indicate that the model achieves about 88% precision, demonstrating the usefulness of the scheme and its potential for future applications.
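The abstract describes fusing a static saliency map with a motion saliency map obtained after camera-motion compensation. The paper's actual fusion rule is not given here, so the sketch below is only an illustrative, hypothetical weighted combination: the function name `fuse_saliency`, the normalization step, and the weight `alpha` are all assumptions, not the authors' method.

```python
import numpy as np

def fuse_saliency(static_map: np.ndarray,
                  motion_map: np.ndarray,
                  alpha: float = 0.5) -> np.ndarray:
    """Hypothetical fusion of a static and a motion saliency map.

    Each map is first rescaled to [0, 1] so that neither modality
    dominates purely because of its dynamic range; the maps are then
    combined with a fixed weight `alpha` (an assumed parameter, not
    taken from the paper).
    """
    s = (static_map - static_map.min()) / (np.ptp(static_map) + 1e-8)
    m = (motion_map - motion_map.min()) / (np.ptp(motion_map) + 1e-8)
    return alpha * s + (1.0 - alpha) * m

# Toy example: a frame where static saliency peaks top-left and
# motion saliency peaks bottom-right; the fused map keeps both peaks.
static = np.zeros((4, 4)); static[0, 0] = 1.0
motion = np.zeros((4, 4)); motion[3, 3] = 1.0
fused = fuse_saliency(static, motion)
```

With `alpha = 0.5`, both the static and the motion peak survive in the fused map at half strength, which is the intuition behind combining the two pathways rather than using either alone.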
Main file
05_icip_guironnet.pdf (864.38 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00365256 , version 1 (02-03-2009)

Identifiers

  • HAL Id : hal-00365256 , version 1

Cite

Mickael Guironnet, Nathalie Guyader, Denis Pellerin, Patricia Ladret. Spatio-temporal Attention Model for Video Content Analysis. IEEE International Conference on Image Processing (ICIP'2005), Sep 2005, Genoa, Italy. pp.CD. ⟨hal-00365256⟩
374 Views
147 Downloads
