Improving Multimodal fusion via Mutual Dependency Maximisation - Archive ouverte HAL
Conference paper, Year: 2021

Improving Multimodal fusion via Mutual Dependency Maximisation

Abstract

Multimodal sentiment analysis is a trending area of research, and multimodal fusion is one of its most active topics. Acknowledging that humans communicate through a variety of channels (i.e., visual, acoustic, linguistic), multimodal systems aim at integrating different unimodal representations into a synthetic one. So far, considerable effort has been devoted to developing complex architectures allowing the fusion of these modalities. However, such systems are mainly trained by minimising simple losses such as L1 or cross-entropy. In this work, we investigate unexplored penalties and propose a set of new objectives that measure the dependency between modalities. We demonstrate that our new penalties lead to a consistent improvement (up to 4.3 points in accuracy) across a large variety of state-of-the-art models on two well-known sentiment analysis datasets: CMU-MOSI and CMU-MOSEI. Our method not only achieves a new SOTA on both datasets but also produces representations that are more robust to modality drops. Finally, a by-product of our method is a statistical network that can be used to interpret the high-dimensional representations learnt by the model.
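To make the idea of a mutual dependency penalty concrete, below is a minimal PyTorch sketch, not the authors' released code: it assumes a MINE-style Donsker-Varadhan lower bound on the mutual information between two modality embeddings, estimated by a small statistics network, and turns its negative into a penalty to be added to the task loss. All names (StatisticsNetwork, mutual_dependency_penalty, lambda_md) are illustrative assumptions.

```python
# Hedged sketch of a mutual dependency penalty between two modalities,
# assuming a MINE-style Donsker-Varadhan bound (not the paper's exact code).
import torch
import torch.nn as nn


class StatisticsNetwork(nn.Module):
    """Scores joint vs. shuffled (marginal) pairs of modality embeddings."""

    def __init__(self, dim_a: int, dim_b: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([a, b], dim=-1)).squeeze(-1)


def mutual_dependency_penalty(stat_net: StatisticsNetwork,
                              z_a: torch.Tensor,
                              z_b: torch.Tensor) -> torch.Tensor:
    """Negative Donsker-Varadhan bound: minimising this penalty
    maximises the estimated dependency between the two modalities."""
    batch_size = z_b.size(0)
    # Expectation of the statistic under the joint distribution.
    joint = stat_net(z_a, z_b).mean()
    # Shuffle one modality across the batch to sample from the
    # product of marginals, then compute log E[exp(T)].
    perm = torch.randperm(batch_size)
    marginal = (torch.logsumexp(stat_net(z_a, z_b[perm]), dim=0)
                - torch.log(torch.tensor(float(batch_size))))
    return -(joint - marginal)


# Illustrative usage: weight the penalty against the main task loss,
# e.g. loss = task_loss + lambda_md * mutual_dependency_penalty(
#     stat_net, z_text, z_audio)
```

The statistics network here is the same object the abstract mentions as a by-product: once trained, its scores indicate how strongly pairs of high-dimensional modality representations depend on each other.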
Main file
2021.emnlp-main.21.pdf (580.96 KB)
Origin: Publisher files allowed on an open archive

Dates and versions

hal-03574609, version 1 (04-01-2024)

Identifiers

Cite

Pierre Colombo, Emile Chapuis, Matthieu Labeau, Chloé Clavel. Improving Multimodal fusion via Mutual Dependency Maximisation. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Nov 2021, Online and Punta Cana, Dominican Republic. pp.231-245, ⟨10.18653/v1/2021.emnlp-main.21⟩. ⟨hal-03574609⟩

