RIU-Net: Embarrassingly simple semantic segmentation of 3D LiDAR point cloud

Abstract: This paper proposes RIU-Net (for Range-Image U-Net), the adaptation of a popular semantic segmentation network to the semantic segmentation of 3D LiDAR point clouds. The point cloud is turned into a 2D range-image by exploiting the topology of the sensor. This image is then used as input to a U-Net, an architecture that has already proved its efficiency for the semantic segmentation of medical images. We demonstrate how it can also be used for accurate semantic segmentation of a 3D LiDAR point cloud. Our model is trained on range-images built from the KITTI 3D object detection dataset. Experiments show that RIU-Net, despite being very simple, outperforms state-of-the-art range-image-based methods. Finally, we demonstrate that this architecture runs at 90 fps on a single GPU, which enables deployment on low computational power systems such as robots.
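The abstract's key step is turning the 3D point cloud into a 2D range-image using the sensor's topology. A common way to do this (a spherical projection, shown here as a minimal NumPy sketch; the exact projection, image resolution, and field-of-view values below are assumptions typical of a Velodyne HDL-64, not details taken from the paper) is:

```python
import numpy as np

def points_to_range_image(points, h=64, w=512, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud to an (h, w) range-image.

    Each point is mapped to a pixel via its azimuth (column) and
    elevation (row); the pixel value is the point's range in meters.
    fov_up/fov_down (degrees) are illustrative sensor parameters.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)            # range of each point
    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))    # elevation angle
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up_r - fov_down_r
    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * w             # column from azimuth
    v = (fov_up_r - pitch) / fov * h              # row from elevation
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)
    # Keep the closest point per pixel: write far points first,
    # so nearer points overwrite them.
    order = np.argsort(-r)
    img = np.zeros((h, w), dtype=np.float32)
    img[v[order], u[order]] = r[order]
    return img
```

The resulting (h, w) image can then be fed to a standard 2D U-Net, and per-pixel labels can be mapped back to the contributing 3D points through the same projection.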
Document type: Preprint / Working Paper

https://hal.archives-ouvertes.fr/hal-02136459
Contributor: Pierre Biasutti
Submitted on: Wednesday, May 22, 2019 - 10:18:08 AM
Last modification on: Thursday, May 23, 2019 - 1:35:39 AM

Identifiers

  • HAL Id: hal-02136459, version 1
  • arXiv: 1905.08748

Citation

Pierre Biasutti, Aurélie Bugeau, Jean-François Aujol, Mathieu Brédif. RIU-Net: Embarrassingly simple semantic segmentation of 3D LiDAR point cloud. 2019. ⟨hal-02136459⟩
