
On the Security of Privacy-Preserving Federated Learning via Functional Encryption
  • Fucai Luo
  • Haiyan Wang (Peng Cheng Laboratory)

Corresponding Author: [email protected]


Abstract

Federated learning (FL) is a promising collaborative machine learning (ML) framework that allows models to be trained on sensitive real-world data while preserving data privacy. Unfortunately, recent attacks on FL demonstrate that the local gradients uploaded by participating users may leak information about their local training data. To address this privacy issue, the notion of secure aggregation was proposed; it has been realized through several approaches, including secure multiparty computation (MPC), homomorphic encryption (HE), functional encryption (FE), and double-masking.
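The goal of secure aggregation can be illustrated with a minimal pairwise-masking sketch in the spirit of the double-masking approach mentioned above; this is a simplification, not the paper's FE-based construction, and all function names and the shared-seed derivation of masks are illustrative assumptions (real protocols derive pairwise masks via key agreement and add dropout handling):

```python
import random

def pairwise_masks(n_users, dim, seed=0):
    """Derive one shared random mask per user pair. Real protocols agree on
    these via key exchange; a deterministic PRNG seed is a simplification."""
    masks = {}
    for i in range(n_users):
        for j in range(i + 1, n_users):
            rng = random.Random(seed * 1000003 + i * 1009 + j)
            masks[(i, j)] = [rng.randint(0, 2**16) for _ in range(dim)]
    return masks

def mask_update(user, update, masks):
    """User i adds +r_ij for each partner j > i and subtracts r_ji for j < i,
    so every mask appears once with each sign and cancels in the sum."""
    out = list(update)
    for (i, j), r in masks.items():
        if user == i:
            out = [x + m for x, m in zip(out, r)]
        elif user == j:
            out = [x - m for x, m in zip(out, r)]
    return out

# Demo: individual masked updates look random to the server,
# but the pairwise masks cancel, revealing only the aggregate.
updates = [[1, 2], [3, 4], [5, 6]]
masks = pairwise_masks(n_users=3, dim=2)
masked = [mask_update(u, upd, masks) for u, upd in enumerate(updates)]
aggregate = [sum(col) for col in zip(*masked)]
assert aggregate == [sum(col) for col in zip(*updates)]  # [9, 12]
```

The key property is that the server never sees an individual plaintext gradient, only sums in which the masks vanish; FE-based schemes aim for the same end through decryption keys that reveal only a function (the weighted sum) of the ciphertexts.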
In 2023, Chang et al. proposed a novel privacy-preserving federated learning (PPFL) scheme based on FE and claimed that it resolves the aforementioned privacy issue. In this paper, however, we design two attacks and demonstrate that their PPFL is vulnerable to both, and therefore remains subject to the very privacy issue it was meant to solve. We hope that by identifying this security problem, similar errors can be avoided in future PPFL designs.