Energy-efficient neural network learning with an accuracy-adjustable
floating-point multiplier
Abstract
This paper proposes a novel approximate bfloat16 multiplier with
on-the-fly adjustable accuracy for energy-efficient learning in deep
neural networks. The proposed multiplier is only 62% the size of the
exact bfloat16 multiplier, and its energy footprint is up to five times
smaller. We demonstrate the advantages of the proposed multiplier in
deep neural network learning, where we successfully train the ResNet-20
network on the CIFAR-10 dataset from scratch.
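To make the idea of an accuracy-adjustable bfloat16 multiply concrete, the following is a minimal Python sketch. It models adjustability by truncating low-order mantissa bits of each operand before multiplying; the truncation scheme, the function names, and the `kept` parameter are illustrative assumptions for this sketch, not the hardware design proposed in the paper.

import struct

def _f32_bits(x: float) -> int:
    """Reinterpret a Python float as IEEE-754 float32 bits."""
    (b,) = struct.unpack("<I", struct.pack("<f", x))
    return b

def _bits_f32(b: int) -> float:
    """Reinterpret 32 bits as an IEEE-754 float32 value."""
    (x,) = struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))
    return x

def to_bf16(x: float, kept: int = 7) -> float:
    """Truncate float32 to bfloat16 (its upper 16 bits), then clear the
    low (7 - kept) mantissa bits to model a reduced-accuracy operand."""
    assert 0 <= kept <= 7
    bits = _f32_bits(x) >> 16           # bfloat16 = upper half of float32
    bits &= ~((1 << (7 - kept)) - 1)    # drop low-order mantissa bits
    return _bits_f32(bits << 16)

def approx_bf16_mul(x: float, y: float, kept: int = 7) -> float:
    """Accuracy-adjustable bfloat16 product: fewer kept mantissa bits
    mimic a cheaper, less accurate multiplier; kept=7 is plain bfloat16."""
    return to_bf16(to_bf16(x, kept) * to_bf16(y, kept), kept)

Lowering `kept` between calls mimics the on-the-fly accuracy adjustment described in the abstract: a narrower significand product corresponds to a smaller, lower-energy multiplier circuit.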