
Compressing the Activation Maps in Deep Convolutional Neural Networks and the Regularization Effect of Compression

posted on 2023-01-24, 19:15 authored by Minh Vu, Anders Garpebring, Tufve Nyholm, Tommy Löfstedt

Deep learning has dramatically improved performance in various image analysis applications in the last few years. However, recent deep learning architectures can be very large, with up to hundreds of layers and millions or even billions of model parameters, making them impossible to fit into commodity graphics processing units. We propose a novel approach for compressing high-dimensional activation maps, which are the most memory-consuming part of training modern deep learning architectures. To this end, we evaluated three different methods to compress the activation maps: the Wavelet Transform, the Discrete Cosine Transform, and Simple Thresholding. We performed experiments on two classification tasks for natural images and two semantic segmentation tasks for medical images. Using the proposed method, we could reduce the memory usage for activation maps by up to 95%. Additionally, we show that the proposed method induces a regularization effect that acts on the layer weight gradients.
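To illustrate the general idea, the following is a minimal sketch of one of the named methods, Simple Thresholding, applied to an activation tensor: keep only the largest-magnitude entries in a sparse form and reconstruct a dense (lossy) tensor on demand. The function names, the `keep_ratio` parameter, and the storage layout are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def compress_activations(act, keep_ratio=0.05):
    """Simple Thresholding sketch: keep only the top-k largest-magnitude
    activations as (values, flat indices); all other entries are dropped.

    `keep_ratio` is a hypothetical knob, not a parameter from the paper.
    """
    flat = act.ravel()
    k = max(1, int(keep_ratio * flat.size))
    # argpartition finds the indices of the k largest magnitudes in O(n)
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return flat[idx].copy(), idx.astype(np.int64), act.shape

def decompress_activations(values, idx, shape):
    """Rebuild a dense tensor, with zeros in the discarded positions."""
    flat = np.zeros(int(np.prod(shape)), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)

# Example: compress a batch of feature maps and estimate the memory saving
act = np.random.randn(8, 64, 32, 32).astype(np.float32)
vals, idx, shape = compress_activations(act, keep_ratio=0.05)
rec = decompress_activations(vals, idx, shape)
saving = 1 - (vals.nbytes + idx.nbytes) / act.nbytes
print(f"approximate memory saving: {saving:.0%}")
```

With a 5% keep ratio the sparse representation stores values plus indices, so the saving here is well below the dense size but not a full 95%; the reported reductions depend on the compression method and layer, per the abstract.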


Email Address of Submitting Author

Submitting Author's Institution

Umeå University

Submitting Author's Country

  • Sweden