Local features for RGBD image matching under viewpoint changes

Abstract: In the last five to ten years, 3D acquisition has spread to many practical areas thanks to new technologies that enable massive generation of texture+depth (RGBD) visual content, including infrared sensors (Microsoft Kinect, Asus Xtion, Intel RealSense, Google Tango) and 3D laser scanners (LIDARs). The increasing availability of this enriched visual modality, which combines photometric and geometric information about the observed scene, opens up new horizons for classic problems in vision, robotics and multimedia. In this thesis, we address the task of establishing local visual correspondences between images, a basic task on which numerous higher-level problems are built. Such correspondences are commonly found through local visual features. While these have been exhaustively studied for traditional images, little work has been done so far for RGBD content.

This thesis begins with a study of the invariance of existing local feature extraction techniques to different visual deformations. It is known that traditional photometric local features, which do not rely on any geometric information, may be robust to various in-plane transformations, but are highly sensitive to the perspective distortions caused by viewpoint changes and local 3D transformations of the surface. Yet such visual deformations are widespread in real-world applications. Based on this insight, we attempt to eliminate this vulnerability in the case of texture+depth input by properly embedding the complementary geometric information into the first two stages of the feature extraction process: repeatable interest point detection and distinctive local descriptor computation. To this end, we contribute several new keypoint detection and descriptor extraction approaches that preserve the conventional degree of keypoint covariance and descriptor invariance to in-plane deformations, while aiming at improved stability under out-of-plane (3D) transformations compared to existing texture-only and texture+depth local features. To assess the performance of the proposed approaches, we revisit a classic feature repeatability and discriminability evaluation procedure, taking into account the extended modality of the input. We also conduct application-level experiments on RGBD datasets acquired with Kinect sensors. The results show the advantages of the proposed RGBD local features in terms of stability under viewpoint changes.
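As a side note for readers, the repeatability protocol mentioned in the abstract can be summarized in a few lines of code. The sketch below (Python with OpenCV and NumPy, not the thesis code) uses ORB as a stand-in detector and assumes the ground-truth mapping between the two views is a planar homography H; for non-planar RGBD scenes a depth-based reprojection would take its place, but the counting logic stays the same. The tolerance eps and all names are illustrative assumptions.

    # Minimal sketch of the classic keypoint repeatability measure
    # (Mikolajczyk-style): the fraction of detections from view A that
    # reappear within eps pixels in view B under a known mapping H.
    import cv2
    import numpy as np

    def repeatability(img_a, img_b, H, eps=2.5):
        det = cv2.ORB_create()
        kp_a = det.detect(img_a, None)
        kp_b = det.detect(img_b, None)
        if not kp_a or not kp_b:
            return 0.0
        pts_a = np.float32([k.pt for k in kp_a]).reshape(-1, 1, 2)
        pts_b = np.float32([k.pt for k in kp_b])
        # reproject view-A detections into view B using the ground truth
        proj = cv2.perspectiveTransform(pts_a, H).reshape(-1, 2)
        # discard projections that fall outside view B
        h, w = img_b.shape[:2]
        keep = (proj[:, 0] >= 0) & (proj[:, 0] < w) & \
               (proj[:, 1] >= 0) & (proj[:, 1] < h)
        proj = proj[keep]
        if len(proj) == 0:
            return 0.0
        # a detection "repeats" if some view-B detection lies within eps px
        d = np.linalg.norm(proj[:, None, :] - pts_b[None, :, :], axis=2)
        return float((d.min(axis=1) <= eps).sum()) / len(proj)

Descriptor discriminability, the second criterion revisited in the thesis, is evaluated on top of the same geometric ground truth, by matching descriptors between views and checking the matches against the reprojected locations.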

Cited literature: 143 references

https://tel.archives-ouvertes.fr/tel-01483314
Contributor: Maxim Karpushin
Submitted on: Sunday, March 5, 2017 - 10:24:17 AM
Last modification on: Wednesday, February 20, 2019 - 2:41:49 PM
Archived on: Tuesday, June 6, 2017 - 12:16:50 PM


Identifiers

  • HAL Id: tel-01483314, version 1

Citation

Maxim Karpushin. Local features for RGBD image matching under viewpoint changes. Computer Vision and Pattern Recognition [cs.CV]. Télécom ParisTech, 2016. English. ⟨tel-01483314⟩
