AaN: Anti-adversarial Noise - A Novel Approach for Securing Machine Learning-based Wireless Communication Systems
  • Anis Amazigh Hamza ,
  • Amira Guesmi ,
  • Iyad Dayoub ,
  • Abderrahmane Amrouche ,
  • Ihsen Alouani

Abstract

Machine Learning (ML) is becoming a cornerstone enabling technology for the next generation of wireless systems, mainly because data-driven models achieve high performance on communication problems that are challenging to solve with classical methods. However, ML models are known to be vulnerable to adversarial attacks: maliciously crafted low-magnitude signals designed to mislead them. Moreover, the propagation nature of electromagnetic signals makes the wireless domain even more critical than other application areas such as computer vision, where an attacker must be physically close to the victim to be effective. While several works have demonstrated the practicality of these attacks in the wireless domain, the main countermeasure remains adversarial training, which incurs a considerable accuracy loss and thereby calls the very utility of ML into question. In this paper, we address this problem with a new approach tailored to wireless communication contexts. Specifically, we propose a defense that leverages the physical properties of wireless propagation to harden ML-based wireless communication systems against adversarial attacks. We propose Anti-adversarial Noise (AaN), where the Base Station (BS) broadcasts a carefully crafted defensive signal designed to counter the impact of any adversarial noise. We focus on ML-based modulation recognition; however, the proposed method is not specific to this application and can be generalized to other ML-based communication use cases. Our results show that the proposed defense can improve model robustness by up to 44% without sacrificing utility.
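To make the additive-defense idea concrete, the following toy sketch (not the paper's algorithm; the linear classifier, perturbation budget, and FGSM-style attack are all illustrative assumptions) shows how a defensive signal added at the receiver can offset an additive adversarial perturbation:

```python
import numpy as np

# Toy "modulation classifier": a linear score over 2-D signal features.
w = np.array([1.0, -0.5])          # classifier weights (illustrative)
x = np.array([0.8, 0.2])           # clean received-signal features

def predict(signal):
    """Binary decision: class 1 if the linear score is positive."""
    return int(w @ signal > 0)

eps = 0.6                          # perturbation budget (illustrative)
# FGSM-style additive attack: push the score toward the wrong class.
delta_adv = -eps * np.sign(w)
# Defensive signal (AaN-like, in spirit): an additive broadcast crafted
# to counter worst-case additive noise at the same budget.
delta_def = eps * np.sign(w)

print(predict(x))                          # clean input: class 1
print(predict(x + delta_adv))              # attacked: flipped to class 0
print(predict(x + delta_adv + delta_def))  # defended: restored to class 1
```

The point of the sketch is only the signal-level intuition: because both attack and defense enter the channel additively, a well-chosen defensive waveform can cancel the attacker's effect without retraining the model, which is why such a defense need not trade off clean accuracy the way adversarial training does.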