Mitigating Targeted Universal Adversarial Attacks on Time Series Power Quality Disturbances Models
  • Sultan Uddin Khan,
  • Mohammed Mynuddin,
  • Isaac Adom,
  • Mahmoud Nabil
Sultan Uddin Khan
North Carolina A&T State University

Corresponding author: [email protected]



Deep learning models have been widely recognized for their significant contribution to enhancing smart grid operations, particularly in the domain of power quality disturbance (PQD) classification. Nevertheless, vulnerabilities such as targeted universal adversarial (TUA) attacks can significantly undermine the reliability and security of these models. Such attacks exploit a model's weaknesses, causing it to misclassify PQDs with potentially catastrophic consequences. In our previous research, we examined, for the first time, the vulnerability of deep learning models to targeted universal adversarial attacks on time series data in smart grids, introducing a novel algorithm that mounts effective attacks while maintaining a trade-off between fooling rate and imperceptibility. While that attack method demonstrated notable efficacy, it also underscored the pressing need for robust defensive mechanisms to safeguard these critical systems. This paper provides a thorough examination and evaluation of three defense strategies, namely adversarial training, defensive distillation, and feature squeezing, in order to identify the most effective method for mitigating TUA attacks on time series data at three levels of imperceptibility (high, medium, and low). Based on our analysis, adversarial training demonstrates a significant reduction in the success rate of attacks. Specifically, the technique reduced fooling rates by an average of 23.73% for high imperceptibility, 31.04% for medium imperceptibility, and a substantial 42.96% for low imperceptibility. These findings highlight the crucial role of adversarial training in enhancing the integrity of deep learning applications.
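To make the adversarial-training defense concrete, the following is a minimal, self-contained sketch. It is not the paper's method: the models, data, and attack here are all illustrative assumptions. A toy logistic-regression classifier stands in for the deep PQD model, synthetic sinusoidal "disturbance" windows stand in for PQD time series, and a per-sample fast-gradient-sign (FGSM) perturbation stands in for the universal TUA attack. The `eps` bound plays the role of the imperceptibility level; adversarial training then means fitting on a mix of clean and perturbed windows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: binary classification of time-series windows
# ("normal" vs "disturbance") with a logistic-regression stand-in model.
T = 64      # samples per window
n = 200     # training windows
eps = 0.1   # L-infinity perturbation budget (proxy for imperceptibility)

# Synthetic data: class-1 windows carry a small sinusoidal "disturbance".
X = rng.normal(0.0, 0.2, size=(n, T))
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.5 * np.sin(np.linspace(0, 4 * np.pi, T))

w = np.zeros(T)
b = 0.0

def predict_proba(X, w, b):
    """Sigmoid output of the linear model."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def fgsm(X, y, w, b, eps):
    """Fast-gradient-sign perturbation of the inputs (a per-sample
    attack used here as a stand-in for the paper's universal attack)."""
    p = predict_proba(X, w, b)
    grad_x = np.outer(p - y, w)  # d(logistic loss)/dX
    return X + eps * np.sign(grad_x)

# Adversarial training: each step regenerates attacks against the
# current model and fits on clean + adversarial windows together.
lr = 0.5
for _ in range(300):
    X_adv = fgsm(X, y, w, b, eps)
    X_mix = np.vstack([X, X_adv])
    y_mix = np.concatenate([y, y])
    p = predict_proba(X_mix, w, b)
    w -= lr * (X_mix.T @ (p - y_mix)) / len(y_mix)
    b -= lr * np.mean(p - y_mix)

acc_clean = np.mean((predict_proba(X, w, b) > 0.5) == y)
acc_adv = np.mean((predict_proba(fgsm(X, y, w, b, eps), w, b) > 0.5) == y)
```

The design point this sketch illustrates is the one the abstract reports: by folding bounded perturbations into training, the model's accuracy under attack (`acc_adv`) stays close to its clean accuracy, i.e. the attack's fooling rate drops. In the paper's setting the same loop would be run with the actual deep time-series model and the TUA perturbation generator.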