Energy Efficient Hardware Acceleration of Neural Networks with
Power-of-Two Quantisation
Abstract
Deep neural networks dominate most modern vision systems, providing
high performance at the cost of increased computational complexity.
Since such systems often have to operate in real-time and with minimal
energy consumption (e.g., in wearable devices, autonomous vehicles,
edge Internet of Things (IoT) devices, or sensor networks), various
network optimisation techniques are applied, e.g., quantisation,
pruning, or dedicated lightweight architectures. Because the weights in
neural network layers follow an approximately logarithmic distribution,
Power-of-Two (PoT) quantisation, whose levels are likewise
logarithmically distributed, maintains high performance even at
significantly reduced computational precision (4-bit weights and
below).
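As a rough illustration of the idea (a minimal sketch, not the exact
quantiser used in the paper), the code below rounds each weight to the
nearest signed power of two in the log2 domain; the function name, the
exponent range min_exp, and the zero threshold are assumptions chosen so
that each weight fits a 4-bit code (sign, exponent, and a zero level).

```python
import numpy as np

def pot_quantise(w, min_exp=-6):
    """Round each weight to the nearest signed power of two.

    A minimal sketch, not the paper's exact quantiser. Exponents are
    clipped to {0, -1, ..., min_exp}; with a sign bit and an explicit
    zero level this fits a 4-bit weight code. Weights are assumed to
    be pre-scaled so that |w| <= 1.
    """
    mag = np.abs(w)
    # Nearest integer exponent in the log2 domain (guard against log2(0)).
    e = np.clip(np.round(np.log2(np.maximum(mag, 1e-12))), min_exp, 0)
    w_q = np.sign(w) * 2.0 ** e
    # Magnitudes well below the smallest level quantise to zero.
    w_q[mag < 2.0 ** (min_exp - 1)] = 0.0
    return w_q

# Trained weights cluster around small values, which the logarithmically
# spaced PoT grid covers densely:
print(pot_quantise(np.array([0.8, -0.3, 0.06, 0.0004])))
# -> [ 1.     -0.25    0.0625  0.    ]
```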
This method additionally makes it possible to replace the Multiply and
ACcumulate (MAC) units typical for neural networks (performing, e.g.,
convolution operations) with more energy-efficient Bitshift and
ACcumulate (BAC) units.
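The sketch below shows, in software and under assumed integer
activations with an assumed (shift, sign) weight encoding (not the
paper's hardware design), why a PoT weight turns each multiplication
into an arithmetic shift:

```python
def mac(acc, x, w):
    # Multiply and ACcumulate: requires a hardware multiplier.
    return acc + x * w

def bac(acc, x, shift, sign):
    # Bitshift and ACcumulate: for a PoT weight w = sign * 2**(-shift)
    # and an integer activation x, the product x * w reduces to an
    # arithmetic right shift. Encoding and rounding are assumptions.
    term = x >> shift
    return acc + term if sign > 0 else acc - term

# Dot product with PoT weights [+2**0, -2**-2] encoded as (shift, sign):
x = [96, 64]
w_enc = [(0, +1), (2, -1)]
acc = 0
for xi, (k, s) in zip(x, w_enc):
    acc = bac(acc, xi, k, s)
print(acc)  # 96 * 1 - 64 * 0.25 = 80
```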
In this paper, we show that a hardware neural network accelerator with
PoT weights implemented on the Zynq UltraScale+ MPSoC ZCU104 SoC FPGA
can be at least 1.4x more energy efficient than its uniform
quantisation counterpart. To further reduce the actual power
consumption by omitting the computations for zero-valued weights, we
also propose a new pruning method adapted to logarithmic quantisation.
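For illustration only (explicitly not the pruning method proposed in
the paper), the following sketch shows why zero weights save work in a
BAC pipeline: every weight pruned to zero removes one shift-and-add.
The sign == 0 encoding of pruned weights is an assumption.

```python
def bac_dot_skip_zeros(x, w_enc):
    # w_enc holds (shift, sign) pairs; sign == 0 marks a pruned (zero)
    # weight. Each skipped weight saves one shift-and-add, which is
    # where the power reduction comes from.
    acc = 0
    for xi, (k, s) in zip(x, w_enc):
        if s == 0:
            continue  # omit the computation entirely for zero weights
        acc = acc + (xi >> k) if s > 0 else acc - (xi >> k)
    return acc
```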