SSLJBA: Joint Backdoor Attack on Both Robustness and Fairness of Self-Supervised Learning
Self-supervised learning (SSL) aims to learn general and rich representations from large amounts of unlabeled data. Despite its great success, SSL methods may suffer from class bias, which inevitably leads to unfair decisions for some groups in downstream applications. Moreover, SSL is vulnerable to backdoor attacks, through which malicious users can degrade the model's prediction performance. Recent studies have shown that the fairness of machine learning models is also vulnerable and can be the target of attackers. Nevertheless, attacks on the fairness of SSL have not been addressed, and jointly attacking the robustness and fairness of SSL remains unexplored. In this paper, we analyze the limitations of existing backdoor attacks and propose the first joint backdoor attack (SSLJBA) approach targeting both the robustness and fairness of SSL.
Funding
Grant No. U22A2099
Email Address of Submitting Author
haofengrui@stu2021.jnu.edu.cn
Submitting Author's Institution
Jinan University
Submitting Author's Country
China