SSLJBA: Joint Backdoor Attack on Both Robustness and Fairness of Self-Supervised Learning
  • Fengrui Hao ,
  • Tianlong Gu ,
  • Jionghui Jiang ,
  • Ming Liu
Fengrui Hao
Jinan University

Corresponding Author: [email protected]



Self-supervised learning (SSL) aims to learn general and rich representations from large amounts of unlabeled data. Despite this great success, SSL methods may suffer from class bias, which inevitably leads to unfair decisions for some groups in applications. Moreover, SSL is vulnerable to backdoor attacks, through which malicious users can degrade a model's prediction performance. Recent studies have shown that the fairness of machine learning models is also vulnerable and can itself be the target of attackers. Nevertheless, attacks on the fairness of SSL have not been addressed, and jointly attacking the robustness and fairness of SSL remains unexplored. In this paper, we analyze the limitations of existing backdoor attacks and propose SSLJBA, the first joint backdoor attack targeting both the robustness and fairness of SSL.