A Smoothed LASSO Based DNN Sparsification Technique
  • Nitin Chandrachoodan,
  • Basava Naga Girish Koneru,
  • Vinita Vasudevan
Indian Institute of Technology Madras

Corresponding Author: [email protected]

Deep Neural Networks (DNNs) are increasingly being used in a variety of applications. However, DNNs have huge computational and memory requirements. One way to reduce these requirements is to sparsify DNNs by using smoothed LASSO (Least Absolute Shrinkage and Selection Operator) functions. In this paper, we show that for the same maximum error with respect to the LASSO function, the sparsity values obtained using various smoothed LASSO functions are similar. We also propose a layer-wise DNN pruning algorithm, where the layers are pruned based on their individually allocated accuracy loss budgets, determined from estimates of the reduction in the number of multiply-accumulate operations (in convolutional layers) and weights (in fully connected layers). Further, the structured LASSO variants in both convolutional and fully connected layers are explored within the smoothed LASSO framework, and the tradeoffs involved are discussed. The efficacy of the proposed algorithm in enhancing sparsity within the allowed degradation in DNN accuracy, along with results obtained on structured LASSO variants, is demonstrated on the MNIST, SVHN, CIFAR-10, and Imagenette datasets.
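To illustrate the smoothed LASSO idea, here is a minimal sketch using one common differentiable surrogate for the absolute value, f_eps(w) = sqrt(w^2 + eps^2). This particular surrogate is an assumption for illustration; the paper compares several smoothed LASSO functions, and its specific choices may differ. The surrogate's maximum deviation from |w| is eps, attained at w = 0, which matches the abstract's notion of comparing smoothed functions at the same maximum error with respect to the LASSO function.

```python
import numpy as np

def smoothed_abs(w, eps=1e-2):
    """One common smooth surrogate for |w| (illustrative, not the paper's
    specific choice): sqrt(w^2 + eps^2). Differentiable everywhere, with
    maximum error eps relative to |w|, attained at w = 0."""
    return np.sqrt(w ** 2 + eps ** 2)

def smoothed_lasso(weights, eps=1e-2):
    """Smoothed LASSO penalty over a weight tensor: a differentiable
    stand-in for sum(|w|) that gradient-based training can use directly."""
    return np.sum(smoothed_abs(weights, eps))

# Check the maximum-error property on a grid of weight values.
w = np.linspace(-1.0, 1.0, 2001)          # includes w = 0
max_err = np.max(smoothed_abs(w, 0.01) - np.abs(w))
print(f"max error vs |w|: {max_err:.4f}")  # close to eps = 0.01
```

The surrogate always upper-bounds |w| and converges to it as eps shrinks, so eps controls the trade-off between smoothness (useful for gradient-based pruning) and fidelity to the exact LASSO penalty.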
Published in IEEE Transactions on Circuits and Systems I: Regular Papers, volume 68, issue 10, pages 4287-4298, October 2021. DOI: 10.1109/TCSI.2021.3097765