
Audiovisual speaker diarization of TV series

Abstract: Speaker diarization may be difficult to achieve when applied to narrative films, where speakers usually talk in adverse acoustic conditions: background music, sound effects, and wide variations in intonation may hide inter-speaker variability and make audio-based speaker diarization approaches error-prone. On the other hand, such fictional movies exhibit strong regularities at the image level, particularly within dialogue scenes. In this paper, we propose to perform speaker diarization within dialogue scenes of TV series by combining the audio and video modalities: speaker diarization is first performed using each modality separately; the two resulting partitions of the instance set are then optimally matched, before the remaining instances, corresponding to cases of disagreement between the two modalities, are finally processed. The results obtained by applying this multimodal approach to fictional films outperform those obtained by relying on a single modality.
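The abstract does not specify how the two partitions are "optimally matched"; a minimal sketch, assuming the matching is cast as a maximum-overlap cluster assignment solved with the Hungarian algorithm (the function name and toy labels below are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_partitions(audio_labels, video_labels):
    """Optimally pair clusters from two partitions of the same instance set
    by maximizing total overlap (Hungarian algorithm on the contingency matrix).
    Returns a mapping from video cluster id to audio cluster id."""
    audio_ids = sorted(set(audio_labels))
    video_ids = sorted(set(video_labels))
    # overlap[i, j] = number of instances in audio cluster i AND video cluster j
    overlap = np.zeros((len(audio_ids), len(video_ids)), dtype=int)
    for a, v in zip(audio_labels, video_labels):
        overlap[audio_ids.index(a), video_ids.index(v)] += 1
    # linear_sum_assignment minimizes cost, so negate to maximize overlap
    rows, cols = linear_sum_assignment(-overlap)
    return {video_ids[j]: audio_ids[i] for i, j in zip(rows, cols)}

# Toy example: audio- and video-based labels for six speech turns
audio = ["s1", "s1", "s2", "s2", "s3", "s3"]
video = ["f2", "f2", "f0", "f0", "f1", "f1"]
mapping = match_partitions(audio, video)
```

Instances on which the matched clusters agree can then be kept as-is, leaving only the disagreement cases for the final processing step the abstract mentions.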
Contributor: Xavier Bost
Submitted on: Sunday, December 23, 2018 - 1:53:47 PM
Last modification on: Thursday, February 6, 2020 - 3:46:05 PM
Long-term archiving on: Sunday, March 24, 2019 - 1:07:11 PM

Xavier Bost, Georges Linarès, Serigne Gueye. Audiovisual speaker diarization of TV series. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr 2015, Brisbane, Australia. pp.4799-4803, ⟨10.1109/ICASSP.2015.7178882⟩. ⟨hal-01313080v2⟩