Semantic and Visual Similarities for Efficient Knowledge Transfer in CNN Training

Abstract: In recent years, representation learning approaches have disrupted many multimedia computing tasks. Among these approaches, deep convolutional neural networks (CNNs) have notably reached human-level expertise on some constrained image classification tasks. Nonetheless, training CNNs from scratch for a new task, or even simply for new data, remains complex and time-consuming. Recently, transfer learning has emerged as an effective methodology for adapting pre-trained CNNs to new data and classes by retraining only the last classification layer. This paper focuses on improving this process in order to better transfer knowledge between CNN architectures and obtain faster training when fine-tuning for image classification. This is achieved by combining and transferring supplementary weights, based on similarity considerations between source and target classes. The study includes a comparison between semantic and content-based similarities, and highlights increased initial performance and training speed, along with superior long-term performance when limited training samples are available.
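The core idea described in the abstract, initializing the weights of the new classification layer from a similarity-weighted combination of pre-trained source-class weights, can be illustrated with a minimal sketch. This is not the authors' exact method; the function name, the top-k selection, and the normalization scheme are illustrative assumptions. The similarity matrix could come from either a semantic measure (e.g. WordNet distance) or a content-based one (e.g. feature-space proximity), the two options compared in the paper.

```python
import numpy as np

def init_target_classifier(source_weights, similarity, k=3):
    """Illustrative sketch: initialize each target class's weight
    vector as a similarity-weighted combination of the weight
    vectors of its k most similar source classes.

    source_weights: (n_source, d) array, last-layer weights of the
        pre-trained CNN classifier.
    similarity: (n_target, n_source) array, semantic or visual
        similarity between target and source classes (higher = closer).
    """
    n_target = similarity.shape[0]
    d = source_weights.shape[1]
    target_weights = np.zeros((n_target, d))
    for t in range(n_target):
        top = np.argsort(similarity[t])[-k:]         # k most similar source classes
        w = similarity[t, top]
        w = w / w.sum()                              # normalize into convex weights
        target_weights[t] = w @ source_weights[top]  # weighted combination
    return target_weights
```

Starting the last layer from such a combination, rather than from random values, is what the abstract credits for the higher initial performance and faster convergence during fine-tuning.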
Contributor: Lucas Pascal
Submitted on : Monday, December 7, 2020 - 5:20:18 PM
Last modification on : Monday, January 11, 2021 - 5:02:40 PM
Lucas Pascal, Xavier Bost, Benoît Huet. Semantic and Visual Similarities for Efficient Knowledge Transfer in CNN Training. International Conference on Content-Based Multimedia Indexing (CBMI), Sep 2019, Dublin, Ireland. ⟨10.1109/CBMI.2019.8877391⟩. ⟨hal-02285234v2⟩


