
Estimating semantic structure for the VQA answer space

Corentin Kervadec 1,2, Grigory Antipov 2, Moez Baccouche 2, Christian Wolf 1
1 imagine - Extraction de Caractéristiques et Identification, LIRIS - Laboratoire d'InfoRmatique en Image et Systèmes d'information
Abstract: Since its appearance, Visual Question Answering (VQA, i.e. answering a question posed over an image) has always been treated as a classification problem over a set of predefined answers. Despite its convenience, this classification approach poorly reflects the semantics of the problem: it limits answers to a choice among independent proposals, without taking into account the similarity between them (e.g. penalizing the answers cat or German shepherd as heavily as car when the correct answer is dog). We address this issue by proposing (1) two measures of proximity between VQA classes, and (2) a corresponding loss which takes the estimated proximity into account. This significantly improves the generalization of VQA models by reducing their language bias. In particular, our approach is model-agnostic: it yields consistent improvements with three different VQA models. Finally, by combining our method with a language bias reduction approach, we report SOTA-level performance on the challenging VQAv2-CP dataset.
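The abstract only sketches the idea of a proximity-aware loss; the concrete proximity measures and loss are defined in the full paper, not here. As a purely illustrative sketch, one common way to implement such a loss is to replace the one-hot classification target with a soft target that spreads mass over semantically close answers. The code below is a hypothetical minimal version, assuming cosine similarity between answer-class embeddings as the proximity measure and a temperature-controlled softmax to build the soft target; none of these choices are claimed to match the authors' method.

```python
import numpy as np

def proximity_matrix(answer_emb):
    """Cosine similarity between answer-class embeddings,
    clipped to [0, 1] so it can serve as a soft-target weight."""
    normed = answer_emb / np.linalg.norm(answer_emb, axis=1, keepdims=True)
    return np.clip(normed @ normed.T, 0.0, 1.0)

def soft_target(proximity, gt_index, temperature=0.1):
    """Distribute the ground-truth probability mass over answers
    that are close to the ground truth; low temperature keeps the
    distribution peaked on the true class."""
    logits = proximity[gt_index] / temperature
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def semantic_cross_entropy(pred_logits, proximity, gt_index, temperature=0.1):
    """Cross-entropy against the proximity-based soft target
    instead of the usual one-hot target."""
    target = soft_target(proximity, gt_index, temperature)
    m = pred_logits.max()
    log_probs = pred_logits - m - np.log(np.exp(pred_logits - m).sum())
    return -(target * log_probs).sum()
```

With embeddings where "cat" lies near "dog" but "car" does not, confidently answering "cat" when the ground truth is "dog" incurs a smaller loss than confidently answering "car", which is exactly the behavior the abstract motivates.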
Document type: Preprints, Working Papers, ...
Contributor: Corentin Kervadec
Submitted on: Tuesday, June 9, 2020 - 4:28:28 PM
Last modification on: Tuesday, June 1, 2021 - 2:08:09 PM




  • HAL Id: hal-02862763, version 1
  • arXiv: 2006.05726


Corentin Kervadec, Grigory Antipov, Moez Baccouche, Christian Wolf. Estimating semantic structure for the VQA answer space. 2020. ⟨hal-02862763⟩


