Conference papers

The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods

Abstract: A recent line of work showed that various forms of convolutional kernel methods can be competitive with standard supervised deep convolutional networks on datasets like CIFAR-10, obtaining accuracies in the range of 87-90% while being more amenable to theoretical analysis. In this work, we highlight the importance of a data-dependent feature-extraction step that is key to obtaining good performance in convolutional kernel methods. This step typically corresponds to a whitened dictionary of patches, and gives rise to a data-driven convolutional kernel method. We study its effect extensively, demonstrating that it is the key ingredient behind the high performance of these methods. Specifically, we show that one of the simplest instances of such kernel methods, based on a single layer of image patches followed by a linear classifier, already obtains classification accuracies on CIFAR-10 in the same range as previous, more sophisticated convolutional kernel methods. We scale this method to the challenging ImageNet dataset, showing that such a simple approach can exceed all existing non-learned representation methods. This establishes a new baseline for object recognition without representation learning, and initiates the investigation of convolutional kernel models on ImageNet. We conduct experiments to analyze the dictionaries we use; our ablations show that they exhibit low-dimensional properties.
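The abstract describes a pipeline of extracting image patches, whitening them to form a dictionary, and feeding the resulting similarities to a linear classifier. The following is a minimal toy sketch of that idea, not the paper's actual implementation: the patch size (6x6), dictionary size (64 atoms), whitening regularization, pooling scheme, and the random stand-in data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": random 32x32 grayscale images standing in for CIFAR-10.
images = rng.standard_normal((50, 32, 32))


def extract_patches(imgs, n_patches=500, size=6):
    """Sample random size x size patches, flattened to vectors."""
    patches = []
    for _ in range(n_patches):
        img = imgs[rng.integers(len(imgs))]
        i = rng.integers(32 - size + 1)
        j = rng.integers(32 - size + 1)
        patches.append(img[i:i + size, j:j + size].ravel())
    return np.stack(patches)


patches = extract_patches(images)

# Whiten the patches (ZCA-style): decorrelate and rescale via the
# eigendecomposition of the patch covariance matrix.
patches -= patches.mean(axis=0)
cov = patches.T @ patches / len(patches)
eigvals, eigvecs = np.linalg.eigh(cov)
whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-3)) @ eigvecs.T

# Keep a subset of whitened patches as the dictionary (64 atoms here).
dictionary = patches[:64] @ whitener


def encode(img, size=6):
    """Feature vector: similarity of each image patch to every dictionary
    atom, rectified and average-pooled over patch positions."""
    feats = []
    for i in range(0, 32 - size + 1, size):
        for j in range(0, 32 - size + 1, size):
            p = img[i:i + size, j:j + size].ravel() @ whitener
            feats.append(dictionary @ p)  # one similarity per atom
    return np.maximum(np.stack(feats), 0).mean(axis=0)


feature = encode(images[0])  # one 64-dimensional feature per image
```

A linear classifier (e.g. logistic regression) trained on such `encode` outputs completes the single-layer method the abstract refers to; the paper's version additionally scales this construction to ImageNet.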
Complete list of metadata
Contributor: Edouard Oyallon
Submitted on: Tuesday, January 19, 2021 - 10:27:53 AM
Last modification on: Friday, March 18, 2022 - 3:37:46 AM
Long-term archiving on: Tuesday, April 20, 2021 - 6:13:27 PM


  • HAL Id: hal-03114389, version 1
  • arXiv: 2101.07528


Louis Thiry, Michael Arbel, Eugene Belilovsky, Edouard Oyallon. The Unreasonable Effectiveness of Patches in Deep Convolutional Kernels Methods. International Conference on Learning Representations (ICLR 2021), 2021, Vienna (online), Austria. ⟨hal-03114389⟩


