Conference Papers, ECCV 2012, Year: 2012

Fusion of Multiple Visual Cues for Visual Saliency Extraction from Wearable Camera Settings with Strong Motion

Abstract

In this paper we are interested in the saliency of visual content from wearable cameras. The subjective saliency in wearable video is studied first by means of a psycho-visual experiment on this content. A method for objective saliency map computation is then proposed, with a specific contribution based on geometrical saliency. Spatial, temporal and geometric cues are fused into an objective saliency map by a multiplicative operator. The resulting objective saliency maps are evaluated against the subjective maps with promising results, highlighting the strong performance of the proposed geometric saliency model.
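The abstract mentions fusing spatial, temporal and geometric cues into one objective saliency map with a multiplicative operator. As a minimal sketch of how such a fusion could be implemented (the function name, per-cue normalization and array shapes below are illustrative assumptions, not details taken from the paper), elementwise multiplication of normalized saliency maps might look like:

```python
import numpy as np

def fuse_saliency_multiplicative(s_spatial, s_temporal, s_geometric, eps=1e-8):
    """Fuse per-pixel spatial, temporal and geometric saliency maps by
    elementwise multiplication, then rescale to [0, 1].

    The normalization scheme is an assumption for illustration only."""
    maps = []
    for m in (s_spatial, s_temporal, s_geometric):
        m = np.asarray(m, dtype=np.float64)
        # Normalize each cue to [0, 1] so no single cue dominates the product.
        m = (m - m.min()) / (m.max() - m.min() + eps)
        maps.append(m)

    fused = maps[0] * maps[1] * maps[2]
    # Rescale the fused map to [0, 1] for comparison with subjective maps.
    return (fused - fused.min()) / (fused.max() - fused.min() + eps)

if __name__ == "__main__":
    # Toy example: random 90x120 maps stand in for real saliency cues.
    rng = np.random.default_rng(0)
    s_sp, s_tp, s_geo = (rng.random((90, 120)) for _ in range(3))
    print(fuse_saliency_multiplicative(s_sp, s_tp, s_geo).shape)
```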
Main file: eccv2012cameraready.pdf (361.96 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-00742089, version 1 (15-10-2012)

Cite

Hugo Boujut, Jenny Benois-Pineau, Rémi Mégret. Fusion of Multiple Visual Cues for Visual Saliency Extraction from Wearable Camera Settings with Strong Motion. 12th European Conference on Computer Vision (ECCV 2012), Oct 2012, Firenze, Italy. pp.436-445, ⟨10.1007/978-3-642-33885-4_44⟩. ⟨hal-00742089⟩