Multiscale Fully Convolutional DenseNet for Semantic Segmentation

Abstract: Semantic segmentation is a central task in computer vision. Convolutional Neural Network methods have shown strong performance compared with other semantic segmentation approaches. In this paper, we propose a multiscale fully convolutional DenseNet approach for semantic segmentation. Our approach builds on the successful fully convolutional DenseNet method and reinforces it by integrating a multiscale kernel prediction after the last dense block, which performs model averaging over different spatial scales and gives the network more flexibility to capture more information. Experiments on two semantic segmentation benchmarks, CamVid and Cityscapes, show the effectiveness of our approach, which outperforms many recent works.
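The multiscale prediction idea described in the abstract can be illustrated with a minimal sketch: the final feature map is convolved with kernels of several spatial sizes, and the resulting per-pixel score maps are averaged. This is not the authors' code; the kernel sizes, the random stand-in weights, and the function names are assumptions for illustration only.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2-D cross-correlation of a single-channel
    map x with kernel k (a stand-in for a learned convolution)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)), mode="constant")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def multiscale_prediction(feature_map, kernel_sizes=(1, 3, 5)):
    """Average score maps produced by kernels of different spatial
    scales -- the 'model averaging over different spatial scales'
    mentioned in the abstract. Weights here are random stand-ins;
    in a trained network they would be learned."""
    rng = np.random.default_rng(0)
    scores = []
    for ks in kernel_sizes:
        k = rng.standard_normal((ks, ks)) / ks
        scores.append(conv2d_same(feature_map, k))
    return np.mean(scores, axis=0)
```

In the actual network each scale would produce per-class logits via learned convolutions; averaging them lets predictions draw on several receptive-field sizes at once.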
Contributor: Alexandre Benoit
Submitted on: Sunday, May 6, 2018
HAL Id: hal-01786688, version 1



Sourour Brahimi, Najib Ben Aoun, Chokri Ben Amar, A Benoit, Patrick Lambert. Multiscale Fully Convolutional DenseNet for Semantic Segmentation. WSCG 2018, International Conference on Computer Graphics, Visualization and Computer Vision, May 2018, Pilsen, Czech Republic. ⟨hal-01786688⟩


