FedCache: A Knowledge Cache-driven Federated Learning Architecture for Personalized Edge Intelligence

Preprint posted on 2023-09-01, 19:06, authored by Zhiyuan Wu, Sheng Sun, Yuwei Wang, Min Liu, Ke Xu, Wen Wang, Xuefeng Jiang, Bo Gao, Jinda Lu

Edge Intelligence (EI) enables Artificial Intelligence (AI) applications to run at the edge, where data analysis and decision-making can be performed in real-time and close to data sources.

To protect data privacy and unify the data silos distributed among end devices in EI, Federated Learning (FL) has been proposed to collaboratively train shared AI models across multiple devices without compromising data security.

However, the prevailing FL approaches cannot guarantee model generalization and adaptation on heterogeneous clients.

Recently, Personalized Federated Learning (PFL) has drawn growing attention in EI, as it strikes a productive balance between the local-specific training requirements inherent in devices and the globally generalized optimization objectives needed for satisfactory performance.

However, most existing PFL methods are based on the Parameters Interaction-based Architecture (PIA) represented by FedAvg, which incurs unaffordable communication burdens due to large-scale parameter transmission between devices and the edge server.

In contrast, the Logits Interaction-based Architecture (LIA) updates model parameters via logits transfer, offering lightweight communication and support for heterogeneous on-device models compared to PIA. Nevertheless, previous LIA methods attempt to achieve satisfactory performance either by relying on unrealistic public datasets or by increasing communication overhead with additional information transmitted beyond logits.

To tackle this dilemma, we propose a knowledge cache-driven PFL architecture, named FedCache, which maintains a knowledge cache on the server for fetching personalized knowledge from samples whose hashes are similar to that of each given on-device sample.
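
To make the cache mechanism concrete, the following is a minimal sketch of such a server-side knowledge cache, assuming a random-projection binary hash and a Hamming-distance lookup; the class name, dimensions, and retrieval size are illustrative assumptions and not the authors' implementation.

# A minimal sketch (not the authors' implementation) of a server-side
# knowledge cache: each on-device sample is indexed by a compact hash
# (here, a random-projection binary code), and the cache returns the
# logits of the R most similar cached samples as personalized knowledge.
import numpy as np

class KnowledgeCache:
    def __init__(self, hash_dim=48, feature_dim=512, top_r=16, seed=0):
        rng = np.random.default_rng(seed)
        # Random projection used to hash sample features into binary codes.
        self.proj = rng.standard_normal((feature_dim, hash_dim))
        self.top_r = top_r
        self.hashes, self.logits = [], []  # parallel lists

    def hash(self, feature):
        # Sign of the random projection gives a locality-sensitive binary code.
        return (feature @ self.proj > 0).astype(np.uint8)

    def add(self, feature, logit):
        # Store the sample's hash together with the latest logits for it.
        self.hashes.append(self.hash(feature))
        self.logits.append(np.asarray(logit))

    def fetch(self, feature):
        # Return logits of the top_r cached samples whose hashes are closest
        # (smallest Hamming distance) to the query sample's hash.
        q = self.hash(feature)
        dists = np.array([(q != h).sum() for h in self.hashes])
        idx = np.argsort(dists)[: self.top_r]
        return np.stack([self.logits[i] for i in idx])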

During the training phase, ensemble distillation is applied to on-device models for constructive optimization with personalized knowledge transferred from the server-side knowledge cache.
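
As a companion sketch, the on-device update below illustrates one plausible form of such ensemble distillation in PyTorch: a cross-entropy term on local labels plus a KL term pulling the local model toward the average of the logits fetched from the cache. The function name, temperature, and weighting are illustrative assumptions rather than the paper's exact objective.

# A minimal sketch (hyperparameters and names are illustrative, not from the
# paper) of an on-device training step: cross-entropy on local labels plus a
# distillation term toward the ensemble of personalized knowledge fetched
# from the server-side knowledge cache.
import torch
import torch.nn.functional as F

def local_distillation_step(model, optimizer, x, y, fetched_logits,
                            temperature=4.0, alpha=0.5):
    """x, y: a local batch; fetched_logits: (batch, R, num_classes) tensor of
    cached logits returned by the server for hash-similar samples."""
    optimizer.zero_grad()
    student_logits = model(x)
    # Ensemble the fetched knowledge by averaging over the R retrieved samples.
    teacher_logits = fetched_logits.mean(dim=1)
    ce = F.cross_entropy(student_logits, y)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    loss = ce + alpha * kd
    loss.backward()
    optimizer.step()
    return loss.item()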

Empirical experiments on four datasets demonstrate that FedCache achieves performance comparable to state-of-the-art PFL approaches, with more than two orders of magnitude improvement in communication efficiency. Our code and demo are available at https://github.com/wuzhiyuan2000/FedCache.

Funding

This work was supported by the National Key Research and Development Program of China (2021YFB2900102) and the National Natural Science Foundation of China (62072436).

History

Email Address of Submitting Author

wuzhiyuan22s@ict.ac.cn

ORCID of Submitting Author

0000-0002-8925-4896

Submitting Author's Institution

Institute of Computing Technology, Chinese Academy of Sciences

Submitting Author's Country

China