Superpixel-based spatiotemporal saliency detection

Abstract: This paper proposes a superpixel-based spatiotemporal saliency model for saliency detection in videos. Based on the superpixel representation of video frames, motion histograms and color histograms are extracted at the superpixel level as local features and at the frame level as global features. Superpixel-level temporal saliency is then measured by integrating the motion distinctiveness of superpixels with a scheme of temporal saliency prediction and adjustment, while superpixel-level spatial saliency is measured by evaluating the global contrast and spatial sparsity of superpixels. Finally, a pixel-level saliency derivation method is used to generate the pixel-level temporal and spatial saliency maps, and an adaptive fusion method is exploited to integrate them into the spatiotemporal saliency map. Experimental results on two public datasets demonstrate that the proposed model outperforms six state-of-the-art spatiotemporal saliency models in terms of both saliency detection and human fixation prediction.
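For illustration, the sketch below outlines the general pipeline the abstract describes: superpixel segmentation of a frame, superpixel-level color histograms, a global-contrast spatial saliency measure, and a simple adaptive fusion with a temporal saliency map. It is not the authors' implementation; the segmentation parameters, the histogram contrast measure, and the confidence-weighted fusion rule are simplifying assumptions made only to make the idea concrete.

```python
# Minimal sketch of a superpixel-based spatial saliency + fusion pipeline.
# NOT the method of Liu et al.; parameters and measures are illustrative.
import numpy as np
from skimage.segmentation import slic


def superpixel_color_histograms(frame, labels, bins=8):
    """Quantized RGB histogram per superpixel (frame is float RGB in [0, 1])."""
    quant = np.clip((frame * bins).astype(int), 0, bins - 1)
    codes = quant[..., 0] * bins * bins + quant[..., 1] * bins + quant[..., 2]
    n_sp = labels.max() + 1
    hists = np.zeros((n_sp, bins ** 3))
    for sp in range(n_sp):
        vals, counts = np.unique(codes[labels == sp], return_counts=True)
        hists[sp, vals] = counts / counts.sum()
    return hists


def spatial_saliency(frame, n_segments=100):
    """Global-contrast spatial saliency: a superpixel is salient when its
    color histogram differs from the histograms of all other superpixels."""
    labels = slic(frame, n_segments=n_segments, start_label=0)
    hists = superpixel_color_histograms(frame, labels)
    # Mean pairwise L1 distance between superpixel histograms (assumed measure).
    diff = np.abs(hists[:, None, :] - hists[None, :, :]).sum(axis=2)
    sp_sal = diff.mean(axis=1)
    sp_sal = (sp_sal - sp_sal.min()) / (sp_sal.max() - sp_sal.min() + 1e-8)
    return sp_sal[labels]  # broadcast superpixel saliency to pixel level


def adaptive_fusion(spatial_map, temporal_map):
    """Weight each map by a simple confidence score (its mean saliency);
    a stand-in for the adaptive fusion discussed in the paper."""
    ws, wt = spatial_map.mean() + 1e-8, temporal_map.mean() + 1e-8
    return (ws * spatial_map + wt * temporal_map) / (ws + wt)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((120, 160, 3))    # stand-in for one video frame
    temporal = rng.random((120, 160))    # stand-in for a temporal saliency map
    st_map = adaptive_fusion(spatial_saliency(frame), temporal)
    print(st_map.shape, float(st_map.min()), float(st_map.max()))
```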
Document type: Journal article

https://hal.archives-ouvertes.fr/hal-00993218
Contributor: Zhi Liu
Submitted on: Monday, May 19, 2014 - 8:32:01 PM
Last modification on: Thursday, November 15, 2018 - 11:57:53 AM

Citation

Zhi Liu, Xiang Zhang, Shuhua Luo, Olivier Le Meur. Superpixel-based spatiotemporal saliency detection. IEEE Transactions on Circuits and Systems for Video Technology, Institute of Electrical and Electronics Engineers, 2014, pp.1. ⟨10.1109/TCSVT.2014.2308642⟩. ⟨hal-00993218⟩
