Rotation invariant CNN using scattering transform for image classification

Abstract: The accuracy of deep convolutional neural networks is heavily impacted by rotations of the input data. In this paper, we propose a convolutional predictor that is invariant to rotations of the input. This architecture can predict the angular orientation without angle-annotated data. Furthermore, the predictor continuously maps the random rotation of the input to a circular prediction space. For this purpose, we exploit the roto-translation properties of Scattering Transform Networks together with a series of 3D convolutions. We validate the results by training with both upright and randomly rotated samples. This enables further applications of this work in fields such as the automatic re-orientation of randomly oriented datasets.
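The "circular space of the prediction" mentioned in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): representing an angle by the point (cos θ, sin θ) on the unit circle makes the representation continuous, so that angles near 0° and near 360° map to nearby points rather than to opposite ends of an interval.

```python
import math

def encode_angle(theta_deg):
    """Map an angle in degrees to a point on the unit circle.

    This makes the representation continuous: 359 deg and 1 deg
    are close in this space, unlike in raw degrees.
    """
    theta = math.radians(theta_deg)
    return (math.cos(theta), math.sin(theta))

def decode_angle(point):
    """Recover the angle in degrees, in [0, 360), from a unit-circle point."""
    x, y = point
    return math.degrees(math.atan2(y, x)) % 360.0

# Raw degrees treat 359 and 1 as far apart (|359 - 1| = 358),
# but their circular encodings are nearly identical:
a, b = encode_angle(359.0), encode_angle(1.0)
dist = math.dist(a, b)  # small chord length on the unit circle
```

A predictor trained against such a two-component target never sees the wrap-around discontinuity at 0°/360°, which is one common way to realize a circular prediction space.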
Contributor: Rosemberg Rodriguez Salas <>
Submitted on: Tuesday, February 5, 2019 - 16:20:17
Last modified on: Saturday, February 9, 2019 - 12:21:04


  • HAL Id: hal-02008378, version 1


Rosemberg Rodriguez Salas, Eva Dokladalova, Petr Dokládal. Rotation invariant CNN using scattering transform for image classification. 2019. 〈hal-02008378〉


