Trunk Pruning: Highly Compatible Channel Pruning for Convolutional Neural Networks Without Fine-Tuning

Nam Joon Kim, Hyun Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Channel pruning can efficiently reduce computation and memory footprint with only a modest accuracy drop by removing unnecessary channels from convolutional neural networks (CNNs). Among the various channel pruning approaches, sparsity training is the most popular because of its convenient implementation and end-to-end training: it automatically identifies optimal network structures by applying regularization to the parameters. Although sparsity training has achieved remarkable performance in terms of the trade-off between accuracy and network size reduction, it must be followed by a time-consuming fine-tuning process. Moreover, although high-performance activation functions continue to be developed, existing sparsity training does not scale well to these new activation functions. To address these problems, this study proposes a novel pruning method, trunk pruning, which produces a compact network while minimizing the accuracy drop during inference even without a fine-tuning process. In the proposed method, one kernel of the next convolutional layer absorbs all the information of the kernels to be pruned, accounting for the effects of the batch normalization (BN) shift parameters that remain after sparsity training. The fine-tuning process can therefore be eliminated, because trunk pruning effectively reproduces the output of the unpruned network after sparsity training by removing the pruning loss. Furthermore, because trunk pruning operates only on the shift parameters of the BN layers attached to the convolutional (CONV) layers, it has the significant advantage of being compatible with all BN-based sparsity training schemes and can handle various activation functions.
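To illustrate the idea behind the abstract, the sketch below shows a common way such absorption can be realized: after sparsity training, a pruned channel's BN scale is near zero, so its post-BN output is approximately the constant shift parameter; passed through the activation, it contributes a fixed amount to the next convolution, which can be folded into that layer before the channel is physically removed. The paper describes absorbing this information into a kernel of the next layer; the sketch uses the closely related bias-folding view instead, ignores zero-padding border effects, and all function names and shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def absorb_pruned_channels(conv1, bn1, act, conv2, prune_idx):
    """Fold the constant output of pruned conv1 channels into conv2.

    Assumes sparsity training has driven the BN scale (gamma) of each
    pruned channel to ~0, so its post-BN output is its shift parameter
    beta; after the elementwise activation `act`, this is a constant
    feature map whose contribution to conv2 can be moved into conv2's
    bias (zero-padding border effects are neglected for simplicity).
    """
    prune = set(prune_idx)
    keep = [c for c in range(bn1.num_features) if c not in prune]

    with torch.no_grad():
        if conv2.bias is None:
            conv2.bias = nn.Parameter(torch.zeros(conv2.out_channels))

        for c in prune_idx:
            const = act(bn1.bias[c])                    # constant produced by pruned channel c
            w_sum = conv2.weight[:, c].sum(dim=(1, 2))  # per-output-kernel weight sum over the window
            conv2.bias += const * w_sum                 # absorb the constant contribution

        # Physically drop the pruned channels from conv1, bn1, and conv2's input slice.
        conv1.weight = nn.Parameter(conv1.weight[keep].clone())
        if conv1.bias is not None:
            conv1.bias = nn.Parameter(conv1.bias[keep].clone())
        bn1.weight = nn.Parameter(bn1.weight[keep].clone())
        bn1.bias = nn.Parameter(bn1.bias[keep].clone())
        bn1.running_mean = bn1.running_mean[keep].clone()
        bn1.running_var = bn1.running_var[keep].clone()
        conv2.weight = nn.Parameter(conv2.weight[:, keep].clone())
        conv1.out_channels = bn1.num_features = conv2.in_channels = len(keep)

# Example usage on a toy CONV-BN-ReLU-CONV block (hypothetical shapes and indices):
conv1 = nn.Conv2d(16, 32, 3, padding=1)
bn1 = nn.BatchNorm2d(32)
conv2 = nn.Conv2d(32, 64, 3, padding=1)
absorb_pruned_channels(conv1, bn1, torch.relu, conv2, prune_idx=[3, 7, 21])
```

Because only the BN shift parameters of the pruned channels enter the fold, the same procedure works regardless of which BN-based sparsity training scheme produced the near-zero scales and for any elementwise activation, which is the compatibility claim made in the abstract.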

Original language: English
Pages (from-to): 5588-5599
Number of pages: 12
Journal: IEEE Transactions on Multimedia
Volume: 26
State: Published - 2024

Keywords

  • Convolutional Neural Network (CNN)
  • Fine-Tuning
  • Pruning
  • Regularization
