Visual attention modeling for stereoscopic video

Abstract: In this paper, we propose a computational model of visual attention for stereoscopic video. Low-level visual features, including color, luminance, texture, and depth, are used to calculate feature contrast for the spatial saliency of stereoscopic video frames. In addition, the proposed model uses motion features to compute temporal saliency, extracting relative planar and depth motion for the temporal saliency calculation. The final saliency map is computed by fusing the spatial and temporal saliency maps. Experimental results show the promising performance of the proposed method in saliency prediction for stereoscopic video.
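The pipeline described in the abstract (per-feature contrast for spatial saliency, planar-plus-depth motion for temporal saliency, then fusion) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the global-mean contrast measure, the equal feature weights, and the linear fusion weight `alpha` are all assumptions for the sake of a runnable example.

```python
import numpy as np

def feature_contrast(feature_map):
    # Contrast as deviation from the global mean (a simplified stand-in
    # for the paper's patch-based feature contrast; details are assumed).
    return np.abs(feature_map - feature_map.mean())

def spatial_saliency(color, luminance, texture, depth):
    # Combine per-feature contrast maps with equal weights (an assumption),
    # then normalize to [0, 1].
    maps = [feature_contrast(f) for f in (color, luminance, texture, depth)]
    s = sum(maps) / len(maps)
    return s / (s.max() + 1e-8)

def temporal_saliency(planar_motion, depth_motion):
    # Relative motion magnitude combining planar (in-plane) flow and
    # motion in depth, normalized to [0, 1].
    t = np.hypot(planar_motion, depth_motion)
    return t / (t.max() + 1e-8)

def fuse(spatial, temporal, alpha=0.5):
    # Linear fusion of spatial and temporal saliency; the paper's fusion
    # rule may differ (alpha is hypothetical).
    return alpha * spatial + (1.0 - alpha) * temporal
```

In practice, the feature maps would come from the left/right views and an estimated disparity map, and the motion maps from optical flow between consecutive frames; here any same-shaped float arrays will do.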
Document type: Conference paper

Cited literature: 25 references

https://hal.archives-ouvertes.fr/hal-01438315
Contributor: Matthieu Perreira da Silva
Submitted on: Tuesday, January 17, 2017 - 3:53:28 PM
Last modification on: Wednesday, September 11, 2019 - 11:00:02 AM
Long-term archiving on: Tuesday, April 18, 2017 - 3:00:55 PM

File: w83.pdf (produced by the author(s))

Citation

Yuming Fang, Chi Zhang, Jing Li, Matthieu Perreira da Silva, Patrick Le Callet. Visual attention modeling for stereoscopic video. 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Jul 2016, Seattle, United States. pp.1 - 6, ⟨10.1109/ICMEW.2016.7574768⟩. ⟨hal-01438315⟩
