Conference paper - Year: 2021

Weight Reparametrization for Budget-Aware Network Pruning

Abstract

Pruning seeks to design lightweight architectures by removing redundant weights from overparameterized networks. Most existing techniques first remove structured subnetworks (filters, channels, ...) and then fine-tune the resulting networks to maintain high accuracy. However, removing a whole structure imposes a strong topological prior, and recovering accuracy through fine-tuning is highly cumbersome. In this paper, we introduce an "end-to-end" lightweight network design that performs training and pruning simultaneously, without fine-tuning. Our method relies on a reparametrization that learns not only the weights but also the topological structure of the lightweight subnetwork. This reparametrization acts as a prior (or regularizer) that defines pruning masks implicitly from the weights of the underlying network, without increasing the number of training parameters. Sparsity is induced with a budget loss that enables accurate pruning. Extensive experiments conducted on the CIFAR10 and TinyImageNet datasets, using standard architectures (namely Conv4, VGG19, and ResNet18), show compelling results without fine-tuning.
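To make the idea concrete, here is a minimal PyTorch sketch of this kind of weight reparametrization. It is an illustration under our own assumptions, not the paper's exact formulation: the module ReparamConv2d, the sigmoid-of-magnitude soft_mask, the sharpness constant, and budget_loss are all hypothetical names and design choices. The essential points it reproduces are that the pruning mask is derived from the weights themselves (so no extra trainable parameters are introduced) and that a budget term penalizes deviation from a target fraction of kept weights.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ReparamConv2d(nn.Module):
    """Convolution whose pruning mask is computed from its own weights.

    Illustrative sketch: the mask is a smooth (sigmoid) function of the
    weight magnitudes, so it adds no trainable parameters.
    """

    def __init__(self, in_channels, out_channels, kernel_size, sharpness=10.0):
        super().__init__()
        self.weight = nn.Parameter(
            0.1 * torch.randn(out_channels, in_channels, kernel_size, kernel_size)
        )
        self.padding = kernel_size // 2
        self.sharpness = sharpness  # assumed steepness of the soft mask

    def soft_mask(self):
        # Weights whose magnitude exceeds the layer mean are pushed toward 1,
        # the rest toward 0; the sigmoid keeps the mask differentiable.
        magnitude = self.weight.abs()
        return torch.sigmoid(self.sharpness * (magnitude - magnitude.mean()))

    def forward(self, x):
        # The effective (pruned) weight is the elementwise product of the
        # raw weight and its implicitly defined mask.
        return F.conv2d(x, self.weight * self.soft_mask(), padding=self.padding)

def budget_loss(model, budget=0.1):
    """Squared deviation between the fraction of (softly) kept weights and a
    target budget, e.g. budget=0.1 targets keeping ~10% of the weights."""
    kept, total = 0.0, 0.0
    for module in model.modules():
        if isinstance(module, ReparamConv2d):
            mask = module.soft_mask()
            kept = kept + mask.sum()
            total = total + mask.numel()
    return (kept / total - budget) ** 2

In such a scheme, training would minimize the task loss plus a weighted budget term, e.g. loss = F.cross_entropy(logits, labels) + lam * budget_loss(model, budget=0.1); once the soft masks saturate, weights with near-zero masks can be removed outright, which is what allows pruning without a fine-tuning stage.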
Main file: paperA.pdf (246.86 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03431309, version 1 (16-11-2021)

Identifiers

HAL Id: hal-03431309
DOI: 10.1109/ICIP42928.2021.9506265

Cite

Robin Dupont, Hichem Sahbi, Guillaume Michel. Weight Reparametrization for Budget-Aware Network Pruning. IEEE International Conference on Image Processing (ICIP), Sep 2021, Anchorage, AK (virtual), United States. pp.789-793, ⟨10.1109/ICIP42928.2021.9506265⟩. ⟨hal-03431309⟩
