Multi range Real-time depth inference from a monocular stabilized footage using a Fully Convolutional Neural Network - HAL open archive
Conference paper, 2017


Abstract

We propose a neural network architecture for depth map inference from monocular stabilized videos, with application to UAV videos of rigid scenes. Training is based on a novel synthetic navigation dataset that mimics aerial footage from a gimbal-stabilized monocular camera in rigid scenes. Building on this network, we propose a multi-range architecture for unconstrained UAV flight, leveraging flight data from sensors to produce accurate depth maps in uncluttered outdoor environments. We evaluate our algorithm on both synthetic scenes and real UAV flight data. Quantitative results are given for synthetic scenes with slightly noisy orientation, and show that our multi-range architecture improves depth inference. A video accompanying this article presents our results in more detail.
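As a rough illustration of the approach the abstract describes, the sketch below builds a small fully convolutional network that takes two stabilized frames stacked on the channel axis and regresses a dense depth map. The layer sizes, class name `DepthFCN`, and the displacement-based rescaling comment are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch of a fully convolutional depth network (PyTorch).
# Assumption for illustration: two RGB frames are stacked into a
# 6-channel input; the network outputs a 1-channel depth map at the
# same resolution. Layer widths are arbitrary, not the paper's.
import torch
import torch.nn as nn


class DepthFCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the stacked frame pair twice (stride 2)
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to input resolution, 1 depth channel
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, frame_pair):
        return self.decoder(self.encoder(frame_pair))


net = DepthFCN()
frames = torch.randn(1, 6, 64, 64)  # two RGB frames stacked on channels
depth = net(frames)
print(depth.shape)  # torch.Size([1, 1, 64, 64])

# Multi-range idea from the abstract, as a comment-level sketch:
# with a gimbal-stabilized camera, apparent motion between frames is
# purely translational, so a depth map predicted for a frame pair can
# be rescaled by the displacement measured from flight sensors, e.g.
#   metric_depth = depth * (measured_displacement / nominal_displacement)
```

With a fully convolutional design the same weights apply at any input resolution, which is what makes a single network usable across multiple ranges.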
Main file: Article ECMR.pdf (1.01 MB). Origin: files produced by the author(s).

Dates and versions

hal-01587658, version 1 (14-09-2017)

Identifiers

  • HAL Id: hal-01587658, version 1

Cite

Clément Pinard, Laure Chevalley, Antoine Manzanera, David Filliat. Multi range Real-time depth inference from a monocular stabilized footage using a Fully Convolutional Neural Network. European Conference on Mobile Robots (ECMR), ENSTA ParisTech, Sep 2017, Paris, France. ⟨hal-01587658⟩
461 views
506 downloads
