
Multi-Level Sensor Fusion with Deep Learning

Valentin Vielzeuf (1, 2), Alexis Lechervy (2), Stéphane Pateux (1), Frédéric Jurie (2)
2 Equipe Image - Laboratoire GREYC - UMR6072
GREYC - Groupe de Recherche en Informatique, Image et Instrumentation de Caen
Abstract : In the context of deep learning, this article presents an original deep network, namely CentralNet, for the fusion of information coming from different sensors. This approach is designed to efficiently and automatically balance the trade-off between early and late fusion (i.e. between the fusion of low-level vs. high-level information). More specifically, at each level of abstraction (the different levels of the deep networks), unimodal representations of the data are fed to a central neural network, which combines them into a common embedding. In addition, a multi-objective regularization is introduced, helping to optimize both the central network and the unimodal networks. Experiments on four multimodal datasets not only show state-of-the-art performance, but also demonstrate that CentralNet can actually choose the best possible fusion strategy for a given problem.
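The fusion step described in the abstract can be sketched as follows: at each layer, the central representation is a weighted sum of the previous central representation and the unimodal hidden states, followed by a transform. This is a minimal numpy illustration, not the paper's implementation; the fusion weights `alphas` and the transform `W` are hypothetical stand-ins for parameters that the paper learns jointly with the unimodal networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def centralnet_layer(central_prev, modal_hiddens, alphas, W):
    """One CentralNet-style fusion step (sketch): weighted sum of the
    previous central representation and the unimodal hidden states,
    then a dense transform with ReLU."""
    mix = alphas[0] * central_prev
    for a, h in zip(alphas[1:], modal_hiddens):
        mix = mix + a * h
    return relu(mix @ W)

# Toy example: two modalities (e.g. audio and video), hidden size 8.
d = 8
h_audio = rng.standard_normal(d)
h_video = rng.standard_normal(d)
central = np.zeros(d)                 # initial central representation
alphas = np.array([0.5, 1.0, 1.0])    # fusion weights: [central, audio, video]
W = rng.standard_normal((d, d)) * 0.1

central_next = centralnet_layer(central, [h_audio, h_video], alphas, W)
print(central_next.shape)  # (8,)
```

In the paper the scalar fusion weights are trainable, which is what lets the network drift toward early fusion (large unimodal weights at low layers) or late fusion (weight concentrated near the output) depending on the task.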
Document type :
Journal articles

Contributor : Frédéric Jurie
Submitted on : Friday, November 2, 2018 - 6:48:46 AM
Last modification on : Wednesday, November 3, 2021 - 5:11:29 AM
Long-term archiving on : Sunday, February 3, 2019 - 12:31:25 PM


Files produced by the author(s)


  • HAL Id : hal-01910858, version 1
  • ARXIV : 1811.02447


Valentin Vielzeuf, Alexis Lechervy, Stéphane Pateux, Frédéric Jurie. Multi-Level Sensor Fusion with Deep Learning. IEEE Sensors Letters, IEEE, 2018, 3 (1). ⟨hal-01910858⟩


