TechRxiv

Low-Voltage Energy Efficient Neural Inference by Leveraging Fault Detection Techniques

Preprint posted on 18.11.2021, 06:56 by Mehdi Safarpour, Tommy Z. Deng, John Massingham, Lei Xun, Mohammad Sabokrou, Olli Silven
This paper presents simple techniques to significantly reduce the energy consumption of DNNs. Operating at reduced voltages offers substantial energy-efficiency improvements, but at the expense of an increased probability of computational errors due to hardware faults. In this context, we targeted Deep Neural Networks (DNNs) as emerging energy-hungry building blocks in embedded applications. Without an error feedback mechanism, blind voltage down-scaling results in degraded accuracy or total system failure. To enable safe voltage down-scaling, this paper develops two solutions, one based on Self-Supervised Learning (SSL) and one on Algorithm Based Fault Tolerance (ABFT). A DNN model trained on the MNIST dataset was deployed on a Field Programmable Gate Array (FPGA) operating at reduced voltages and employing the proposed schemes. The SSL approach provides extremely low-overhead (≈0.2%) fault detection at the cost of lower error coverage and extra training, while ABFT incurs less than 8% overhead at run-time with close to 100% error detection rate. Using these solutions, substantial energy savings of up to 40.3% were achieved without compromising the accuracy of the model.
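The abstract describes ABFT only at a high level. As an illustration of how checksum-based detection can protect the linear layers of a DNN, the sketch below shows the standard ABFT construction for matrix multiplication: a column-sum row is appended to one operand and a row-sum column to the other, so the product carries its own checksums that can be verified after the multiply. The function name `abft_matmul`, the NumPy implementation, and the tolerance parameter are illustrative assumptions, not the authors' FPGA design.

```python
import numpy as np

def abft_matmul(A, B, tol=1e-3):
    """Checksum-protected matrix multiply (ABFT sketch, not the paper's implementation).

    A column-sum row is appended to A and a row-sum column to B, so the
    product carries row and column checksums. A mismatch between the carried
    checksums and checksums recomputed from the result flags a fault.
    """
    A_c = np.vstack([A, A.sum(axis=0, keepdims=True)])   # shape (m+1, k)
    B_r = np.hstack([B, B.sum(axis=1, keepdims=True)])   # shape (k, n+1)

    C_full = A_c @ B_r                                    # shape (m+1, n+1)
    C = C_full[:-1, :-1]                                  # the actual result

    # Recompute checksums from C and compare with the carried ones.
    row_ok = np.allclose(C.sum(axis=1), C_full[:-1, -1], atol=tol)
    col_ok = np.allclose(C.sum(axis=0), C_full[-1, :-1], atol=tol)
    fault_detected = not (row_ok and col_ok)
    return C, fault_detected

# Hypothetical usage: on detection, a controller could recompute the layer
# or raise the supply voltage before accepting the result.
A = np.random.randn(64, 128).astype(np.float32)
B = np.random.randn(128, 32).astype(np.float32)
C, fault = abft_matmul(A, B)
print("fault detected:", fault)
```

With floating-point arithmetic the comparison must allow a small tolerance; the paper's reported overhead (below 8% at run-time) comes from the extra checksum row and column being small relative to the layer dimensions.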

History

Email Address of Submitting Author

mimsaad.1990@gmail.com

Submitting Author's Institution

Oulu

Submitting Author's Country

Finland