
A Systematic View of Leakage Risks in Deep Neural Network Systems
  • Xing Hu ,
  • Ling Liang ,
  • Xiaobing Chen ,
  • Lei Deng ,
  • Yu Ji ,
  • Yufei Ding ,
  • Zidong Du ,
  • Qi Guo ,
  • Timothy Sherwood ,
  • Yuan Xie
Xing Hu
State Key Laboratory of Computer Architecture

Corresponding Author: [email protected]


Abstract

As deep neural networks (DNNs) continue their reach into a wide range of application domains, the neural architecture of DNN models becomes an increasingly sensitive asset, both for intellectual property protection and because of the risks posed by adversarial attacks. Observing the large gap between architectural attack-surface exploration and model integrity studies, this paper first formulates a schema of model leakage risks. We then propose DeepSniffer, a learning-based model extraction framework that recovers complete model architecture information without any prior knowledge of the victim model. It is robust to the architectural and system noise introduced by the complex memory hierarchy and diverse run-time system optimizations. Taking GPU platforms as a showcase, DeepSniffer performs model extraction by learning both the architecture-level execution features of kernels and the inter-layer temporal association information introduced by common DNN design practice. We demonstrate DeepSniffer experimentally on an off-the-shelf Nvidia GPU platform running a variety of DNN models, and show that the extracted models directly aid in crafting adversarial inputs. The DeepSniffer project has been released at https://github.com/xinghu7788/DeepSniffer.
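To illustrate the idea of learning a mapping from per-kernel execution features to a layer sequence, below is a minimal sketch in PyTorch. The feature set (e.g., kernel latency and read/write volume), the label set, and all names and dimensions are illustrative assumptions for exposition only, not the released DeepSniffer implementation.

```python
# Hypothetical sketch: predict a layer-type sequence from per-kernel features
# observed via architectural hints. All names/dimensions are assumptions.
import torch
import torch.nn as nn

LAYER_TYPES = ["conv", "relu", "pool", "fc", "add", "bn"]  # assumed label set


class LayerSeqPredictor(nn.Module):
    """Maps a sequence of per-kernel features (e.g., latency, memory read/write
    volume) to per-kernel layer-type scores."""

    def __init__(self, feat_dim=4, hidden=128, num_classes=len(LAYER_TYPES)):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, kernel_feats):      # (batch, num_kernels, feat_dim)
        h, _ = self.rnn(kernel_feats)
        return self.head(h)               # (batch, num_kernels, num_classes)


# Example: a trace of 30 observed kernels with 4 hypothetical features each.
model = LayerSeqPredictor()
trace = torch.randn(1, 30, 4)
logits = model(trace)
predicted = logits.argmax(dim=-1)         # predicted layer type per kernel
print([LAYER_TYPES[i] for i in predicted[0].tolist()])
```

In this sketch the temporal association between layers is captured by the recurrent model over the kernel sequence; training data would come from profiling known models on the same platform.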