MUTAN: Multimodal Tucker Fusion for Visual Question Answering

Hedi Ben-Younes 1, Remi Cadene 1, Matthieu Cord 1, Nicolas Thome 2
1 MLIA - Machine Learning and Information Access
LIP6 - Laboratoire d'Informatique de Paris 6
2 CEDRIC - MSDMA - Statistical Methods for Data Mining and Learning
CEDRIC - Center for Studies and Research in Computer Science and Communications
Abstract: Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high-level associations between question meaning and visual concepts in the image, but they suffer from huge dimensionality issues. We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. In addition to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping interpretable fusion relations. We show how the Tucker decomposition framework generalizes some of the latest VQA architectures, providing state-of-the-art results.
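To make the fusion scheme concrete: the bilinear prediction y = (T x1 q) x2 v is parametrized by Tucker-decomposing the tensor T into two input factor matrices, an output factor matrix, and a small core tensor whose slices are further constrained to rank R, so the interaction reduces to R elementwise products of projected features that are summed. Below is a minimal PyTorch sketch of that scheme under assumed, illustrative settings; the class name, dimensions, and tanh nonlinearities are our assumptions for exposition, not the authors' released implementation.

import torch
import torch.nn as nn

class MutanFusion(nn.Module):
    """Rank-constrained Tucker fusion of a question embedding q and an
    image feature v (illustrative dimensions, not the paper's exact ones)."""

    def __init__(self, dim_q=2400, dim_v=2048, dim_hq=310, dim_hv=310,
                 dim_z=510, rank=10, num_answers=2000):
        super().__init__()
        self.rank = rank
        # Tucker factor matrices: project each modality into a small space.
        self.proj_q = nn.Linear(dim_q, dim_hq)
        self.proj_v = nn.Linear(dim_v, dim_hv)
        # Core tensor constrained so each slice has rank at most R: it is
        # expressed as R pairs of projections whose elementwise (Hadamard)
        # products are summed.
        self.heads_q = nn.ModuleList(nn.Linear(dim_hq, dim_z) for _ in range(rank))
        self.heads_v = nn.ModuleList(nn.Linear(dim_hv, dim_z) for _ in range(rank))
        # Output factor: project the fused vector onto answer scores.
        self.classifier = nn.Linear(dim_z, num_answers)

    def forward(self, q, v):
        q = torch.tanh(self.proj_q(q))
        v = torch.tanh(self.proj_v(v))
        # Sum of R rank-1 bilinear interactions in the projected spaces.
        z = sum(hq(q) * hv(v) for hq, hv in zip(self.heads_q, self.heads_v))
        return self.classifier(z)

# Usage on random features standing in for question and image embeddings:
fusion = MutanFusion()
q = torch.randn(8, 2400)   # batch of question embeddings
v = torch.randn(8, 2048)   # batch of image features
scores = fusion(q, v)      # (8, 2000) answer scores

The point of the decomposition is that the full bilinear tensor (dim_q x dim_v x num_answers entries) is never materialized: complexity is governed separately by the projected dimensions and by the rank R.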

https://hal.sorbonne-universite.fr/hal-02073637
Contributor: Remi Cadene
Submitted on: Tuesday, March 26, 2019 - 6:05:46 PM
Last modification on: Tuesday, May 14, 2019 - 11:03:53 AM
Long-term archiving on: Thursday, June 27, 2019 - 6:21:04 PM

File

1705.06676.pdf
Files produced by the author(s)

Identifiers

HAL Id: hal-02073637
DOI: 10.1109/ICCV.2017.285

Citation

Hedi Ben-Younes, Remi Cadene, Matthieu Cord, Nicolas Thome. MUTAN: Multimodal Tucker Fusion for Visual Question Answering. 2017 IEEE International Conference on Computer Vision (ICCV), Oct 2017, Venice, Italy. pp.2631-2639, ⟨10.1109/ICCV.2017.285⟩. ⟨hal-02073637⟩
