Weakly Supervised Representation Learning for Unsynchronized Audio-Visual Events - Archive ouverte HAL
Conference paper, Year: 2018


Abstract

Audiovisual representation learning is an important task from the perspective of designing machines with the ability to understand complex events. To this end, we propose a novel multimodal framework that instantiates multiple instance learning. We show that the learnt representations are useful for classifying events and localizing their characteristic audiovisual elements. The system is trained using only video-level event labels without any timing information. An important feature of our method is its capacity to learn from unsynchronized audiovisual events. We achieve state-of-the-art results on a large-scale dataset of weakly-labeled audio event videos. Visualizations of localized visual regions and audio segments substantiate our system's efficacy, especially when dealing with noisy situations where modality-specific cues appear asynchronously.
Main file: 1804.07345.pdf (2.93 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02713307, version 1 (01-06-2020)

Identifiers

  • HAL Id: hal-02713307, version 1

Cite

Sanjeel Parekh, Slim Essid, Alexey Ozerov, Ngoc Q. K. Duong, Patrick Pérez, et al. Weakly Supervised Representation Learning for Unsynchronized Audio-Visual Events. CVPR Workshop, 2018, Salt Lake City, United States. ⟨hal-02713307⟩