Multimodal Data Fusion for Video Scene Segmentation - Archive ouverte HAL
Conference Poster, Year: 2005

Multimodal Data Fusion for Video Scene Segmentation

Abstract

Automatic segmentation of video into semantic units is important for organizing effective content-based access to long videos. The basic building blocks of professional video are shots; however, the semantic meaning they convey is at too low a level. In this paper we focus on segmenting video into more meaningful high-level narrative units called scenes: aggregates of shots that are temporally continuous, share the same physical setting, or represent continuous ongoing action. We propose a statistical video scene segmentation framework capable of combining multiple mid-level features in a symmetric and scalable manner, and suggest two kinds of such features, extracted in the visual and audio domains. We report the results of experimental evaluations carried out on ground-truth video, which show that our algorithm effectively fuses multiple modalities and achieves higher performance than an alternative conventional fusion technique.
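The abstract describes fusing per-modality evidence symmetrically at candidate scene boundaries. As a minimal illustration of that idea only, the sketch below combines per-modality boundary scores with a weighted sum and thresholds the result; the paper's actual statistical framework is not detailed on this page, so the feature names, equal weights, and threshold here are all assumptions.

```python
# Hypothetical sketch of symmetric multimodal score fusion for scene
# boundary detection. Not the paper's method: modality names, weights,
# and the threshold are illustrative assumptions.

def fuse_boundary_scores(modality_scores, weights=None, threshold=1.0):
    """Fuse per-modality scores at each candidate shot boundary.

    modality_scores: dict mapping modality name -> list of scores,
        one per candidate boundary (higher = more likely a scene change).
    Returns indices of boundaries whose fused score exceeds the threshold.
    """
    modalities = list(modality_scores)
    n = len(modality_scores[modalities[0]])
    if weights is None:
        # Symmetric treatment: every modality gets the same weight.
        weights = {m: 1.0 for m in modalities}
    boundaries = []
    for i in range(n):
        # A weighted sum applies the same rule to each modality, and a
        # new modality can be added without changing the fusion rule
        # (the "symmetric and scalable" property the abstract mentions).
        fused = sum(weights[m] * modality_scores[m][i] for m in modalities)
        if fused > threshold:
            boundaries.append(i)
    return boundaries

scores = {
    "visual_coherence": [0.2, 0.9, 0.1, 0.8],
    "audio_change":     [0.1, 0.7, 0.2, 0.6],
}
print(fuse_boundary_scores(scores))  # -> [1, 3]
```

In practice the scores would come from mid-level feature detectors (e.g. visual coherence across shots, audio change points), and a statistical model rather than a fixed threshold would decide boundaries.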

Dates and versions

hal-01589294 , version 1 (18-09-2017)

Identifiers

Cite

Vyacheslav Parshin, Aliaksandr Paradzinets, Liming Chen. Multimodal Data Fusion for Video Scene Segmentation. 8th International Conference on Advances in Visual Information Systems, Jul 2005, Amsterdam, Netherlands. Springer, pp.279-289, 2005, ⟨10.1007/11590064_25⟩. ⟨hal-01589294⟩
196 Views
0 Downloads
