
Audio-based visualizing and structuring of videos

Abstract: Enabling a rapid, on-the-fly view of a movie's content requires segmenting the movie and describing the segments in a user-compatible manner. The difficulty lies in extracting relevant semantic information from the audiovisual signal, both for the segmentation and for the description. In this paper we introduce audio scenes and audio chapters in movies and present an algorithm for automatically segmenting a video based on the audio stream alone. A tree-like, audio-based structure of a video is proposed, and each scene at each abstraction level of the structure is classified into one of several scene categories. The automatic solution to audio scene and chapter segmentation and classification is evaluated against manually segmented and classified videos.
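The tree-like structure the abstract describes — audio scenes grouped into higher-level audio chapters, with a category label at each level — could be modeled along the following lines. This is an illustrative sketch only: the class names, the `build_chapter` helper, and the category labels are assumptions for exposition, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AudioSegment:
    """A node in the audio-based structure: a scene or a chapter."""
    start: float                 # segment start time, in seconds
    end: float                   # segment end time, in seconds
    label: str                   # scene category label (illustrative, e.g. "dialogue")
    children: list = field(default_factory=list)  # lower-level segments, empty for leaf scenes

def build_chapter(scenes, label="chapter"):
    """Group a run of consecutive audio scenes into one chapter node."""
    chapter = AudioSegment(scenes[0].start, scenes[-1].end, label)
    chapter.children = list(scenes)
    return chapter

# Example: two leaf-level audio scenes merged into one audio chapter.
scenes = [AudioSegment(0.0, 42.5, "dialogue"),
          AudioSegment(42.5, 90.0, "music")]
chapter = build_chapter(scenes)
```

Walking such a tree top-down would then give the coarse-to-fine view of the movie that the abstract motivates: chapters first, then the scenes inside a selected chapter.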
Document type :
Journal articles
Contributor: Équipe gestionnaire des publications SI LIRIS
Submitted on: Monday, September 18, 2017, 4:39:42 PM
Last modification on: Tuesday, June 1, 2021, 2:08:09 PM




Hadi Harb, Liming Chen. Audio-based visualizing and structuring of videos. International Journal on Digital Libraries, Springer Verlag, 2006, 6 (1), pp. 70-81. ⟨10.1007/s00799-005-0120-5⟩. ⟨hal-01589556⟩


