A Multiply-And-Max/min Neuron Paradigm for Aggressively Prunable Deep Neural Networks
The growing interest in Internet of Things and mobile Artificial Intelligence applications is pushing research on Deep Neural Networks (DNNs) that can operate at the edge on low-resource, low-energy devices.
To achieve this goal, several pruning techniques have been proposed in the literature. They aim to reduce the number of interconnections of a DNN built from classic Multiply-and-ACcumulate (MAC) neurons -- and consequently its size and the corresponding computing and storage requirements.
In this work, we propose a novel neuron structure based on a Multiply-And-Max/min (MAM) map-reduce paradigm, and we show that this new paradigm makes it possible to build naturally and aggressively prunable DNN layers with a negligible loss in performance. This structure allows a far greater interconnection sparsity than classic MAC-based DNN layers. Moreover, most existing state-of-the-art pruning techniques can be applied to MAM layers with little or no modification. As an example, when one-shot pruning is applied to a VGG-16 model trained on the ImageNet task, fully connected MAM-based layers need only 0.04% of the total number of interconnections, while MAC-based layers need at least 4.33%, for a Top-1 accuracy loss of 3% with respect to the maximum achieved accuracy. Additionally, we test Lottery Ticket iterative pruning on AlexNet on the CIFAR-100 task. With 0.02% of interconnections remaining, the MAC-based model requires 10 training iterations to reach 85% Top-5 accuracy, whereas the MAM-based model needs only 6.
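The abstract does not spell out the MAM reduction itself; as a rough illustration only, below is a minimal sketch assuming the MAM neuron aggregates its input-weight products with a max-plus-min reduction in place of the MAC summation (the function names and the bias handling are hypothetical, not taken from the source):

    import numpy as np

    def mac_neuron(x: np.ndarray, w: np.ndarray, b: float = 0.0) -> float:
        # Classic Multiply-and-ACcumulate: every product enters the sum.
        return float(np.sum(w * x) + b)

    def mam_neuron(x: np.ndarray, w: np.ndarray, b: float = 0.0) -> float:
        # Assumed Multiply-And-Max/min: only the largest and smallest
        # products survive the reduction.
        p = w * x
        return float(np.max(p) + np.min(p) + b)

    rng = np.random.default_rng(0)
    x = rng.standard_normal(8)
    w = rng.standard_normal(8)
    print("MAC:", mac_neuron(x, w))
    print("MAM:", mam_neuron(x, w))

Under this reading, only the two extreme products influence each output, so most interconnections contribute nothing at inference time -- an intuition consistent with the greater pruning tolerance reported above.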
Email Address of Submitting Author: luciano.prono@polito.it
ORCID of Submitting Author: 0000-0003-1507-9092
Submitting Author's Institution: Politecnico di Torino
Submitting Author's Country: Italy