TechRxiv

Attacking Distance-aware Attack: Semi-targeted Model Poisoning on Federated Learning

preprint
posted on 2023-04-10, 18:00 authored by Yuwei Sun, Hideya Ochiai, Jun Sakuma

Existing model poisoning attacks on federated learning (FL) assume that an adversary has access to the full data distribution. In reality, an adversary usually has limited prior knowledge about clients' data distributions. In such a case, a poorly chosen target class renders an attack less effective. In particular, we consider a semi-targeted situation where the source class is predetermined but the target class is not. The goal is to cause the global classifier to misclassify data of the source class. Approaches such as label flipping have been adopted to inject poisoned parameters into FL. Nevertheless, their performance has been shown to be class-sensitive, varying with the choice of target class. Typically, an attack becomes less effective when shifted to a different target class. To overcome this challenge, we propose the Attacking Distance-aware Attack (ADA), which enhances a poisoning attack by finding the optimal target class in the feature space. ADA deduces pair-wise distances between classes in the latent feature space using the Fast LAyer gradient MEthod (FLAME). We performed extensive evaluations, varying the attacking frequency in five benchmark image classification tasks with three model architectures. Furthermore, ADA's efficacy was studied under different defense strategies in FL. ADA increased attack performance by a factor of 2.8 in the most challenging case, with an attacking frequency of 0.01, and bypassed existing defenses: even differential privacy, the most effective defense tested, could not reduce the attack performance below 50%.
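To illustrate the idea of choosing a target class by attacking distance, below is a minimal, hypothetical sketch. It assumes latent features are taken from the model's penultimate layer and that class-to-class distance is approximated by the Euclidean distance between class centroids; the paper's actual FLAME-based estimation may differ, and all names here (`class_centroids`, `nearest_target_class`) are illustrative only.

```python
# Hypothetical sketch of attacking-distance-based target selection.
# Assumption: pair-wise class distances are approximated by Euclidean
# distances between class centroids of penultimate-layer features.
import numpy as np


def class_centroids(features: np.ndarray, labels: np.ndarray) -> dict:
    """Mean latent feature vector for each class."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}


def nearest_target_class(source: int, centroids: dict) -> int:
    """Pick the class closest to the source class in the latent feature space."""
    src = centroids[source]
    distances = {c: np.linalg.norm(src - v) for c, v in centroids.items() if c != source}
    return min(distances, key=distances.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(600, 64))       # toy penultimate-layer activations
    labels = rng.integers(0, 10, size=600)   # toy labels for a 10-class task
    cents = class_centroids(feats, labels)
    target = nearest_target_class(source=3, centroids=cents)
    print(f"Semi-targeted poisoning: source class 3 -> chosen target class {target}")
```

Under this simplification, the adversary fixes the source class and flips its labels to whichever class lies nearest in feature space, on the intuition that a shorter attacking distance makes the induced misclassification easier to sustain in the aggregated global model.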

History

Email Address of Submitting Author

ywsun@g.ecc.u-tokyo.ac.jp

Submitting Author's Institution

The University of Tokyo

Submitting Author's Country

  • Japan