Basalt: Server-Client Joint Defense Mechanism for Byzantine-Robust Federated Learning
  • Anxiao Song,
  • Haoshuo Li,
  • Tao Zhang,
  • Ke Cheng,
  • Yulong Shen
Corresponding author: Anxiao Song ([email protected])

Federated Learning, a distributed machine learning paradigm, is susceptible to Byzantine attacks, since an attacker can manipulate clients' local data and models to compromise the performance of the global model. Recently, a wealth of server-side defenses has emerged that mitigate such attacks by removing or limiting the impact of malicious models. Nevertheless, owing to the high dimensionality of local models and the variety of Byzantine attacks, an attacker can easily circumvent approaches that rely solely on a single server-side defense. Therefore, we propose Basalt, an efficient server-client defense mechanism against Byzantine attacks. To our knowledge, we are the first to devise a joint defense on both clients and the server to achieve robust federated learning. Specifically, on the client side, we design an efficient self-defense approach with a model-level penalty loss that restricts local-benign divergence and decreases local-malicious correlation to prevent misclassification. On the server side, we present an efficient defense based on manifold approximation and the maximum clique, further enhancing the capability to defend against Byzantine attacks. We provide rigorous robustness guarantees by proving that the difference between the global model of Basalt and the optimal global model is bounded. Our extensive experiments demonstrate that Basalt outperforms existing state-of-the-art works. In particular, it achieves nearly 100% accuracy in detecting malicious clients on non-IID MNIST datasets under various Byzantine attacks. The implementation code is provided at https://github.com/NSS-01/Basalt-Federated-learning.git.
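The server-side idea described above can be illustrated with a minimal sketch: treat each client's update as a vector, connect clients whose updates are mutually similar, and aggregate only the largest mutually-consistent group (a maximum clique). This is only an illustrative approximation of the paper's method; the similarity threshold, the use of raw cosine similarity in place of a manifold approximation, and all function names here are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a maximum-clique filter over client updates.
# Assumptions (not from the paper): cosine similarity stands in for the
# manifold-approximation step, and the 0.5 threshold is arbitrary.
from itertools import combinations
import numpy as np

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def max_clique(adj):
    """Brute-force maximum clique; fine for a handful of clients."""
    n = len(adj)
    for k in range(n, 0, -1):
        for combo in combinations(range(n), k):
            if all(adj[i][j] for i, j in combinations(combo, 2)):
                return list(combo)
    return []

def filter_and_aggregate(updates, threshold=0.5):
    n = len(updates)
    adj = [[cosine_sim(updates[i], updates[j]) >= threshold
            for j in range(n)] for i in range(n)]
    benign = max_clique(adj)
    return benign, np.mean([updates[i] for i in benign], axis=0)

# Toy round: four honest clients plus one sign-flipping attacker.
rng = np.random.default_rng(0)
base = rng.normal(size=10)
honest = [base + 0.05 * rng.normal(size=10) for _ in range(4)]
malicious = [-base]
benign, agg = filter_and_aggregate(honest + malicious)
print(benign)  # honest clients form the clique; the attacker is excluded
```

Here the attacker's update anti-correlates with the honest ones, so it never joins the clique and is dropped from aggregation; brute-force clique search is exponential, so a real deployment would use an approximate or pruned search.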
Submitted to TechRxiv: 10 Mar 2024
Published in TechRxiv: 18 Mar 2024