Salient object detection based on spatiotemporal attention models

Abstract: In this paper we propose a method for the automatic detection of salient objects in video streams. The video is first segmented into shots using a scale-space filtering graph partition method. Next, we introduce a combined spatial and temporal video attention model. The proposed approach combines a region-based contrast saliency measure with a novel temporal attention model. The camera/background motion is determined using a set of homographic transforms, estimated by recursively applying the RANSAC algorithm to the SIFT interest point correspondences, while other types of movement are identified using agglomerative clustering and temporal region consistency. A decision is taken based on the combined spatial and temporal attention models. Finally, we demonstrate how the extracted saliency map can be used to create segmentation masks. The experimental results validate the proposed framework and demonstrate that our approach is effective for various types of videos, including noisy and low-resolution data.
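The paper itself does not provide code; the following is a minimal sketch of one step described in the abstract, namely estimating the camera/background homography between two consecutive frames from SIFT correspondences with RANSAC, where the outliers then hint at independently moving (potentially salient) regions. It uses OpenCV in Python; the frame pair, ratio-test threshold, and reprojection error are illustrative assumptions, and the recursive estimation of multiple homographies mentioned in the abstract is not reproduced here.

```python
import cv2
import numpy as np

def estimate_background_homography(prev_frame, curr_frame, ratio=0.75):
    """Estimate a single background homography between two frames (sketch)."""
    gray_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Detect SIFT keypoints and descriptors in both frames.
    sift = cv2.SIFT_create()
    kp_prev, des_prev = sift.detectAndCompute(gray_prev, None)
    kp_curr, des_curr = sift.detectAndCompute(gray_curr, None)

    # Match descriptors and keep distinctive correspondences (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_prev, des_curr, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    if len(good) < 4:
        return None, None  # not enough correspondences for a homography

    src = np.float32([kp_prev[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_curr[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC: inliers follow the dominant camera/background motion,
    # outliers suggest independently moving regions.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, inlier_mask
```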
Document type:
Conference paper
2013 IEEE International Conference on Consumer Electronics (ICCE), 2013, pp. 39-42

https://hal.archives-ouvertes.fr/hal-00944796
Contributor: Ruxandra Tapu
Submitted on: Tuesday, February 11, 2014 - 11:13:10
Last modified on: Thursday, February 9, 2017 - 15:20:48

Identifiers

  • HAL Id : hal-00944796, version 1

Citation

Ruxandra Tapu, Titus Zaharia. Salient object detection based on spatiotemporal attention models. 2013 IEEE International Conference on Consumer Electronics (ICCE), 2013, pp. 39-42. <hal-00944796>
