
Multimodal Data Fusion for Video Scene Segmentation

Abstract: Automatic segmentation of video into semantic units is important for organizing effective content-based access to long videos. The basic building blocks of professional video are shots, but the semantic meaning they carry is too low-level. In this paper we focus on segmenting video into more meaningful high-level narrative units called scenes: aggregates of shots that are temporally continuous, share the same physical setting, or represent continuous ongoing action. We propose a statistical video scene segmentation framework capable of combining multiple mid-level features in a symmetric and scalable manner, and suggest two kinds of such features, extracted from the visual and audio domains. Experimental evaluations carried out on ground-truth video show that our algorithm effectively fuses multiple modalities, achieving higher performance than an alternative conventional fusion technique.
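The symmetric, scalable fusion the abstract describes can be illustrated with a small sketch. Assume each modality produces a confidence score for every candidate shot boundary; the function names, the weighted-sum fusion rule, and the threshold below are hypothetical simplifications for illustration, not the authors' actual statistical model.

```python
def fuse_scores(modality_scores, weights=None):
    """Symmetrically combine per-shot-boundary scores from several modalities.

    modality_scores: list of equal-length score lists, one per modality
                     (e.g. one from the visual domain, one from the audio domain)
    weights: optional per-modality weights; uniform if omitted.
    Adding a modality only means appending another score list, so the
    scheme scales to any number of modalities.
    """
    n_modalities = len(modality_scores)
    n_boundaries = len(modality_scores[0])
    if weights is None:
        weights = [1.0 / n_modalities] * n_modalities
    return [
        sum(w * scores[i] for w, scores in zip(weights, modality_scores))
        for i in range(n_boundaries)
    ]

def detect_scene_boundaries(fused_scores, threshold=0.5):
    """Declare a scene boundary wherever the fused score exceeds the threshold."""
    return [i for i, s in enumerate(fused_scores) if s > threshold]
```

For example, fusing visual scores [0.9, 0.1, 0.8] with audio scores [0.7, 0.2, 0.9] yields fused scores [0.8, 0.15, 0.85], so candidate boundaries 0 and 2 are kept as scene boundaries.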
Document type: Poster communications
Contributor: Équipe gestionnaire des publications SI LIRIS
Submitted on : Monday, September 18, 2017 - 2:14:55 PM
Last modification on : Tuesday, June 1, 2021 - 2:08:05 PM




Vyacheslav Parshin, Aliaksandr Paradzinets, Liming Chen. Multimodal Data Fusion for Video Scene Segmentation. 8th International Conference on Advances in Visual Information Systems, Jul 2005, Amsterdam, Netherlands. Springer, pp.279-289, 2005, ⟨10.1007/11590064_25⟩. ⟨hal-01589294⟩


