Persistence of Backdoor-based Watermarks for Neural Networks: A Comprehensive Evaluation
  • Anh Tu Ngo,
  • Chuan Song Heng,
  • Nandish Chattopadhyay,
  • Anupam Chattopadhyay
Corresponding Author: Anh Tu Ngo ([email protected])

Abstract

Deep Neural Networks (DNNs) have gained considerable traction in recent years due to the unparalleled results they have achieved. However, training such sophisticated models is resource intensive, leading many to consider DNNs the intellectual property (IP) of their owners. In this era of cloud computing, high-performance DNNs are often deployed over the internet for public access. As such, DNN watermarking schemes, especially backdoor-based watermarks, have been actively developed in recent years to protect proprietary rights. However, no backdoor-based watermarking scheme guarantees watermark persistence, and much uncertainty remains about the robustness of existing schemes against both adversarial attacks and unintended modifications such as fine-tuning. In this paper, we extensively evaluate the persistence of recent backdoor-based watermarks in neural networks under fine-tuning, and we propose a novel data-driven approach to restore the watermark after fine-tuning without exposing the trigger set. Our empirical results show that by solely introducing training data after fine-tuning, the watermark can be restored, provided the model parameters do not shift dramatically during fine-tuning. Depending on the type of trigger samples used, trigger accuracy can be reinstated to up to 100%. Our study further explores how the restoration process works using loss landscape visualization, as well as the idea of introducing training data during the fine-tuning stage to alleviate watermark vanishing.
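The verification step implied by "trigger accuracy" can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual protocol: the function names, the 0.9 decision threshold, and the toy model are all assumptions.

```python
# Hypothetical sketch of backdoor-watermark verification: ownership is
# claimed when the model's accuracy on a secret trigger set (inputs with
# owner-assigned target labels) exceeds a threshold. All names and the
# threshold value are illustrative assumptions.

THRESHOLD = 0.9  # assumed verification threshold


def trigger_accuracy(predict, trigger_set):
    """Fraction of trigger samples the model maps to their assigned labels."""
    correct = sum(1 for x, y in trigger_set if predict(x) == y)
    return correct / len(trigger_set)


def watermark_present(predict, trigger_set, threshold=THRESHOLD):
    """Declare the watermark intact if trigger accuracy clears the threshold."""
    return trigger_accuracy(predict, trigger_set) >= threshold


# Toy demo: after fine-tuning, a "model" still remembers 3 of 4 trigger labels,
# so trigger accuracy is 0.75 and verification fails under the 0.9 threshold.
triggers = [("t1", 7), ("t2", 7), ("t3", 7), ("t4", 7)]
model = lambda x: 7 if x != "t4" else 0

acc = trigger_accuracy(model, triggers)      # 0.75
intact = watermark_present(model, triggers)  # False
```

Under this framing, the restoration procedure the abstract describes amounts to continued training on (a subset of) the original training data until `trigger_accuracy` climbs back above the threshold, without ever revealing `triggers` to the fine-tuning party.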
23 Apr 2024: Submitted to TechRxiv
29 Apr 2024: Published in TechRxiv