Estimating semantic structure for the VQA answer space

Abstract : Since its appearance, Visual Question Answering (VQA, i.e. answering a question posed over an image) has always been treated as a classification problem over a set of predefined answers. Despite its convenience, this classification approach poorly reflects the semantics of the problem: it limits answering to a choice between independent proposals and ignores the similarity between them (e.g. answering cat or German shepherd instead of dog is penalized equally). We address this issue by proposing (1) two measures of proximity between VQA classes, and (2) a corresponding loss which takes the estimated proximity into account. This significantly improves the generalization of VQA models by reducing their language bias. In particular, we show that our approach is completely model-agnostic, as it yields consistent improvements with three different VQA models. Finally, by combining our method with a language bias reduction approach, we report SOTA-level performance on the challenging VQAv2-CP dataset.
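The idea of replacing independent one-hot answer targets with proximity-aware ones can be sketched as follows. This is a minimal illustration, not the paper's actual method: the proximity measure here is a hypothetical cosine similarity between answer-class embeddings, and the loss is a soft cross-entropy against targets smoothed by that proximity, so that a semantically close wrong answer (German shepherd for dog) is penalized less than an unrelated one (cat).

```python
import numpy as np

def proximity_matrix(embeddings):
    """Cosine similarity between answer-class embeddings.

    Hypothetical stand-in for the paper's two proximity measures;
    any symmetric class-similarity matrix could be plugged in here.
    """
    norm = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return np.clip(norm @ norm.T, 0.0, 1.0)

def semantic_soft_targets(labels, proximity, temperature=0.1):
    """Replace one-hot targets with a softmax over class proximities.

    Each row is a distribution over the answer vocabulary that puts
    mass on classes close to the ground-truth label.
    """
    logits = proximity[labels] / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)

def soft_cross_entropy(pred_logits, soft_targets):
    """Cross-entropy between model predictions and smoothed targets."""
    log_z = np.log(np.exp(pred_logits).sum(axis=1, keepdims=True))
    log_probs = pred_logits - log_z
    return -(soft_targets * log_probs).sum(axis=1).mean()

# Toy vocabulary: {dog, cat, German shepherd} with 2-d embeddings in
# which "German shepherd" is close to "dog" and "cat" is not.
emb = np.array([[1.0, 0.0],    # dog
                [0.0, 1.0],    # cat
                [0.9, 0.1]])   # German shepherd
P = proximity_matrix(emb)
targets = semantic_soft_targets(np.array([0]), P)  # ground truth: dog

# A model confidently answering "German shepherd" is penalized less
# than one confidently answering "cat".
loss_similar = soft_cross_entropy(np.array([[0.0, 0.0, 5.0]]), targets)
loss_distant = soft_cross_entropy(np.array([[0.0, 5.0, 0.0]]), targets)
```

With a standard one-hot cross-entropy both wrong answers would receive the same loss; here `loss_similar < loss_distant`, which is the behavior the abstract motivates.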
Document type :
Preprints, Working Papers, ...
https://hal.archives-ouvertes.fr/hal-02862763
Contributor : Corentin Kervadec
Submitted on : Tuesday, June 9, 2020 - 4:28:28 PM
Last modification on : Wednesday, July 8, 2020 - 12:43:49 PM

Files

template.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-02862763, version 1
  • ARXIV : 2006.05726

Citation

Corentin Kervadec, Grigory Antipov, Moez Baccouche, Christian Wolf. Estimating semantic structure for the VQA answer space. 2020. ⟨hal-02862763⟩
