SCNet: Learning Semantic Correspondence

Abstract: This paper addresses the problem of establishing semantic correspondences between images depicting different instances of the same object or scene category. Previous approaches focus on either combining a spatial regularizer with hand-crafted features, or learning a correspondence model for appearance only. We propose instead a convolutional neural network architecture, called SCNet, for learning a geometrically plausible model for semantic correspondence. SCNet uses region proposals as matching primitives, and explicitly incorporates geometric consistency in its loss function. It is trained on image pairs obtained from the PASCAL VOC 2007 keypoint dataset, and a comparative evaluation on several standard benchmarks demonstrates that the proposed approach substantially outperforms both recent deep learning architectures and previous methods based on hand-crafted features.
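To make the abstract's high-level description more concrete, here is a minimal, self-contained sketch of geometry-aware matching between region proposals. It is not the SCNet architecture or its actual loss; it only illustrates the general idea of combining an appearance similarity between proposals with a geometric-consistency weight. All names and parameters (`match_score`, `geometric_consistency`, the Gaussian kernel and its `sigma`) are illustrative assumptions, not the paper's notation.

```python
# Minimal NumPy sketch (NOT the SCNet formulation): weight the appearance
# similarity of region-proposal pairs by an offset-consensus geometric term.
import numpy as np

def appearance_similarity(feats_a, feats_b):
    """Cosine similarity between L2-normalized proposal features."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    return a @ b.T  # shape (num_a, num_b)

def geometric_consistency(boxes_a, boxes_b, sigma=0.2):
    """Gaussian weight favoring proposal pairs whose center displacement
    agrees with the average displacement over all candidate pairs."""
    centers_a = (boxes_a[:, :2] + boxes_a[:, 2:]) / 2.0
    centers_b = (boxes_b[:, :2] + boxes_b[:, 2:]) / 2.0
    offsets = centers_b[None, :, :] - centers_a[:, None, :]  # (num_a, num_b, 2)
    consensus = offsets.reshape(-1, 2).mean(axis=0)          # crude consensus offset
    dist2 = ((offsets - consensus) ** 2).sum(axis=-1)
    return np.exp(-dist2 / (2 * sigma ** 2))

def match_score(feats_a, boxes_a, feats_b, boxes_b):
    """Combine appearance and geometry into a single matching score."""
    return appearance_similarity(feats_a, feats_b) * geometric_consistency(boxes_a, boxes_b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats_a, feats_b = rng.normal(size=(5, 8)), rng.normal(size=(6, 8))
    boxes_a = rng.uniform(0.0, 1.0, size=(5, 4))  # (x1, y1, x2, y2) in normalized coords
    boxes_b = rng.uniform(0.0, 1.0, size=(6, 4))
    scores = match_score(feats_a, boxes_a, feats_b, boxes_b)
    print("Best match in image B for each proposal in image A:", scores.argmax(axis=1))
```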
Document type:
Conference paper
International Conference on Computer Vision, Oct 2017, Venice, Italy. 2017


https://hal.archives-ouvertes.fr/hal-01576117
Contributor: Rafael Sampaio de Rezende
Submitted on: Tuesday, August 22, 2017 - 13:18:45
Last modified on: Tuesday, September 5, 2017 - 08:31:26

File

SCNet_ICCV.pdf
Files produced by the author(s)

Identifiers

  • HAL Id: hal-01576117, version 1
  • arXiv: 1705.04043

Citation

Kai Han, Rafael Rezende, Bumsub Ham, Kwan-Yee Wong, Minsu Cho, et al.. SCNet: Learning Semantic Correspondence. International Conference on Computer Vision, Oct 2017, Venice, Italy. 2017. <hal-01576117>
