Compressing the Activation Maps in Deep Convolutional Neural Networks and the Regularization Effect of Compression
  • Minh Vu,
  • Anders Garpebring,
  • Tufve Nyholm,
  • Tommy Löfstedt
Umeå University

Corresponding Author: [email protected]


Deep learning has dramatically improved performance in various image analysis applications over the last few years. However, recent deep learning architectures can be very large, with up to hundreds of layers and millions or even billions of model parameters, making them impossible to fit into commodity graphics processing units (GPUs). We propose a novel approach for compressing high-dimensional activation maps, the most memory-consuming part of training modern deep learning architectures. To this end, we evaluated three different methods for compressing the activation maps: the Wavelet Transform, the Discrete Cosine Transform, and Simple Thresholding. We performed experiments on two classification tasks for natural images and two semantic segmentation tasks for medical images. Using the proposed method, we could reduce the memory usage for activation maps by up to 95%. Additionally, we show that the proposed method induces a regularization effect that acts on the layer weight gradients.
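To make the idea concrete, the Simple Thresholding variant mentioned above can be sketched as follows: keep only the largest-magnitude activations and store them in a sparse (indices, values) form, reconstructing a dense tensor on demand. This is a minimal illustrative sketch, not the authors' implementation; the function names, the `keep_fraction` parameter, and the choice of a magnitude-based threshold are assumptions made here for illustration.

```python
import numpy as np

def threshold_compress(act, keep_fraction=0.05):
    """Sparsify an activation tensor by keeping only the largest-magnitude
    entries (a sketch of Simple Thresholding; keep_fraction is hypothetical)."""
    flat_mag = np.abs(act).ravel()
    k = max(1, int(keep_fraction * flat_mag.size))
    # Threshold at the k-th largest magnitude.
    thresh = np.partition(flat_mag, -k)[-k]
    mask = np.abs(act).ravel() >= thresh
    idx = np.flatnonzero(mask)            # positions of retained activations
    vals = act.ravel()[idx]               # retained activation values
    return idx, vals, act.shape

def threshold_decompress(idx, vals, shape):
    """Rebuild a dense tensor from the sparse (indices, values) representation."""
    out = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
    out[idx] = vals
    return out.reshape(shape)

# Example: compress one 8-channel, 16x16 activation map.
rng = np.random.default_rng(0)
act = rng.standard_normal((8, 16, 16)).astype(np.float32)
idx, vals, shape = threshold_compress(act, keep_fraction=0.05)
rec = threshold_decompress(idx, vals, shape)
```

Storing only `idx` and `vals` is what yields the memory saving: at a 5% keep fraction, roughly 95% of the dense entries are dropped, consistent with the savings reported in the abstract, at the cost of an approximate reconstruction in the backward pass.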