RIU-Net: Embarrassingly simple semantic segmentation of 3D LiDAR point cloud - Archive ouverte HAL
Preprint, Working Paper. Year: 2019

RIU-Net: Embarrassingly simple semantic segmentation of 3D LiDAR point cloud

Abstract

This paper proposes RIU-Net (for Range-Image U-Net), an adaptation of a popular semantic segmentation network to the semantic segmentation of a 3D LiDAR point cloud. The point cloud is turned into a 2D range image by exploiting the topology of the sensor, and this image is then used as input to a U-Net. This architecture has already proved its efficiency for the task of semantic segmentation of medical images, and we demonstrate how it can also be used for accurate semantic segmentation of a 3D LiDAR point cloud. Our model is trained on range images built from the KITTI 3D object detection dataset. Experiments show that RIU-Net, despite being very simple, outperforms the state of the art among range-image-based methods. Finally, we demonstrate that this architecture is able to operate at 90 fps on a single GPU, which enables deployment on systems with low computational power, such as robots.
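The range-image construction described above can be sketched as a spherical projection: each 3D point is mapped to a pixel by its azimuth and elevation angles, and the pixel stores the point's range. The sketch below is illustrative only, not the authors' implementation; the image resolution and the vertical field of view are assumptions (loosely HDL-64-like), not values taken from the paper.

```python
import numpy as np

def pointcloud_to_range_image(points, h=64, w=512):
    """Project an (N, 3) LiDAR point cloud onto a 2D range image
    via spherical coordinates (azimuth, elevation).

    Illustrative sketch: resolution (h, w) and the field of view
    below are assumptions, not values from the paper.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)             # range (depth) per point

    yaw = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))  # elevation angle

    # Assumed vertical field of view in radians (roughly HDL-64-like).
    fov_up, fov_down = np.deg2rad(2.0), np.deg2rad(-24.8)

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w                         # column
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * h  # row

    u = np.clip(np.floor(u), 0, w - 1).astype(np.int64)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int64)

    image = np.zeros((h, w), dtype=np.float32)
    # Write far points first so that closer points overwrite them.
    order = np.argsort(r)[::-1]
    image[v[order], u[order]] = r[order]
    return image
```

The resulting h-by-w image (possibly stacked with other per-pixel channels such as intensity) is then a valid 2D input for a standard U-Net encoder-decoder.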

Dates and versions

hal-02136459 , version 1 (22-05-2019)

Identifiers

Cite

Pierre Biasutti, Aurélie Bugeau, Jean-François Aujol, Mathieu Brédif. RIU-Net: Embarrassingly simple semantic segmentation of 3D LiDAR point cloud. 2019. ⟨hal-02136459⟩

Collections

CNRS IMB IGN-ENSG
