Rotation invariant CNN using scattering transform for image classification

Abstract: The accuracy of deep convolutional neural networks is heavily affected by rotations of the input data. In this paper, we propose a convolutional predictor that is invariant to rotations of the input. The architecture is capable of predicting the angular orientation without angle-annotated data. Furthermore, the predictor continuously maps a random rotation of the input to a circular prediction space. To this end, we exploit the roto-translation properties of scattering transform networks combined with a series of 3D convolutions. We validate the results by training on both upright and randomly rotated samples. This enables further applications of this work in fields such as the automatic re-orientation of randomly oriented datasets.
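The core idea above can be illustrated with a toy sketch (not the paper's actual architecture): in a scattering-type representation sampled at K orientations, rotating the input by one orientation step corresponds to a circular shift along the orientation axis. A convolution with circular padding along that axis (standing in for the paper's 3D convolutions) is then equivariant to the rotation, and pooling over the orientation axis yields a rotation-invariant feature. All names and values below are hypothetical.

```python
import numpy as np

K = 8  # number of sampled orientations (assumption for this toy example)

def circular_conv(x, w):
    # Convolution along the orientation axis with circular padding,
    # a 1D stand-in for the paper's 3D convolutions over orientations.
    n = len(x)
    return np.array([sum(x[(i + j) % n] * w[j] for j in range(len(w)))
                     for i in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(K)   # toy first-order responses, one per orientation
w = rng.standard_normal(3)   # toy learned kernel

feat = circular_conv(x, w)
inv = feat.max()             # pooling over orientations -> invariant scalar

# Rotating the input by one orientation step = circular shift of responses.
x_rot = np.roll(x, 1)
feat_rot = circular_conv(x_rot, w)
inv_rot = feat_rot.max()

assert np.allclose(np.roll(feat, 1), feat_rot)  # equivariance to rotation
assert np.isclose(inv, inv_rot)                 # invariance after pooling
```

The shift of the equivariant features (`np.roll(feat, 1)`) is also what makes it possible, in principle, to read off the rotation angle itself, which is the angular-orientation prediction the abstract refers to.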
Contributor: Rosemberg Rodriguez Salas
Submitted on: Tuesday, February 5, 2019 - 4:20:17 PM
Last modification on: Tuesday, March 19, 2019 - 11:43:25 PM


  • HAL Id: hal-02008378, version 1


Rosemberg Rodriguez Salas, Eva Dokladalova, Petr Dokládal. Rotation invariant CNN using scattering transform for image classification. 2019. ⟨hal-02008378⟩
