TechRxiv

A Systematic View of Leakage Risks in Deep Neural Network Systems

preprint
posted on 2021-03-10, 18:35, authored by Xing Hu, Ling Liang, Xiaobing Chen, Lei Deng, Yu Ji, Yufei Ding, Zidong Du, Qi Guo, Timothy Sherwood, Yuan Xie
As deep neural networks (DNNs) reach into an ever wider range of application domains, the architecture of a DNN model becomes an increasingly sensitive asset, both for intellectual property protection and because of its exposure to adversarial attacks. Observing the large gap between studies of architectural attack surfaces and studies of model integrity, this paper first formulates a schema of model leakage risks. We then propose DeepSniffer, a learning-based model extraction framework that recovers complete model architecture information without any prior knowledge of the victim model. It is robust to the architectural and system noise introduced by the complex memory hierarchy and diverse run-time optimizations. Taking GPU platforms as a showcase, DeepSniffer performs model extraction by learning both the architecture-level execution features of kernels and the inter-layer temporal associations that arise from common DNN design practice. We demonstrate DeepSniffer experimentally on an off-the-shelf Nvidia GPU platform running a variety of DNN models. The extracted models directly aid the crafting of adversarial inputs. The DeepSniffer project has been released at https://github.com/xinghu7788/DeepSniffer.
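A minimal sketch of the kind of learning-based extractor the abstract describes: a sequence model that maps per-kernel execution features observed on the GPU to a sequence of layer-type predictions, so that inter-layer temporal context is captured across the kernel trace. This is not the authors' released code; the feature names, label set, and model sizes below are illustrative assumptions.

```python
# Hedged illustration of a learning-based architecture extractor over a GPU
# kernel trace. Feature names and label set are assumptions, not DeepSniffer's.
import torch
import torch.nn as nn

LAYER_TYPES = ["conv", "relu", "pool", "fc", "add", "bn"]  # assumed label set

class KernelSeqExtractor(nn.Module):
    """Bi-LSTM over per-kernel features; models inter-layer temporal context."""
    def __init__(self, n_features=4, hidden=64, n_classes=len(LAYER_TYPES)):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, kernel_feats):        # (batch, n_kernels, n_features)
        ctx, _ = self.rnn(kernel_feats)     # temporal context across the trace
        return self.head(ctx)               # per-kernel layer-type logits

# Toy usage: one trace of 10 kernels, each described by 4 side-channel features
# (e.g., latency, DRAM reads, DRAM writes, inter-kernel gap -- assumed features).
trace = torch.randn(1, 10, 4)
logits = KernelSeqExtractor()(trace)
pred_layers = [LAYER_TYPES[i] for i in logits.argmax(-1).squeeze(0).tolist()]
print(pred_layers)
```

In practice such a model would be trained on traces from known architectures and then applied to the victim's kernel trace to reconstruct the layer sequence.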

History

Email Address of Submitting Author

huxing@ict.ac.cn

Submitting Author's Institution

State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences

Submitting Author's Country

  • China
