
Agglomerative Federated Learning: Empowering Larger Model Training via End-Edge-Cloud Collaboration
  • Zhiyuan Wu,
  • Sheng Sun,
  • Yuwei Wang,
  • Min Liu,
  • Bo Gao,
  • Quyang Pan,
  • Tianliu He,
  • Xuefeng Jiang
Zhiyuan Wu
Institute of Computing Technology

Corresponding Author: [email protected]


Abstract

Federated Learning (FL) enables training Artificial Intelligence (AI) models over end devices without compromising their privacy. As computing tasks are increasingly performed by a combination of cloud, edge, and end devices, FL can benefit from this End-Edge-Cloud Collaboration (EECC) paradigm to achieve collaborative device-scale expansion with real-time access. Although Hierarchical Federated Learning (HFL) supports multi-tier model aggregation suited to EECC, prior works assume the same model structure on all computing nodes, constraining the model scale to what the weakest end devices can support. To address this issue, we propose Agglomerative Federated Learning (FedAgg), a novel EECC-empowered FL framework that allows the trained models to grow larger in size and stronger in generalization ability from end, to edge, to cloud. FedAgg recursively organizes computing nodes across all tiers based on the Bridge Sample Based Online Distillation Protocol (BSBODP), which enables every pair of parent-child computing nodes to mutually transfer and distill knowledge extracted from generated bridge samples. This design exploits the potential of larger models to enhance performance while satisfying both the privacy constraints of FL and the flexibility requirements of EECC. Experiments under various settings demonstrate that FedAgg outperforms state-of-the-art methods by an average of 4.53% in accuracy, with remarkable improvements in convergence rate. Our code is available at https://github.com/wuzhiyuan2000/FedAgg.
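The abstract leaves the exact form of BSBODP to the paper and the linked repository. Purely as an illustrative sketch, the snippet below shows what one mutual online-distillation exchange between a parent-child node pair over bridge samples could look like in PyTorch. The function name, the temperature, the KL-divergence losses, and the toy linear models are all assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of one BSBODP-style exchange between a parent-child node
# pair, as described in the abstract. Bridge-sample generation, the model
# architectures, and the loss form are illustrative assumptions; see
# https://github.com/wuzhiyuan2000/FedAgg for the authoritative code.
import torch
import torch.nn.functional as F

def mutual_distill_step(parent_model, child_model, bridge_samples,
                        parent_opt, child_opt, temperature=2.0):
    """One round of mutual online distillation over generated bridge samples.

    Each node treats the other's softened predictions as a teaching signal,
    so knowledge flows both up (child -> parent) and down (parent -> child)
    without exchanging raw private data or requiring identical model sizes.
    """
    parent_logits = parent_model(bridge_samples)
    child_logits = child_model(bridge_samples)
    t = temperature

    # Each side distills from the other's detached (non-trainable) soft labels.
    loss_parent = F.kl_div(F.log_softmax(parent_logits / t, dim=1),
                           F.softmax(child_logits.detach() / t, dim=1),
                           reduction="batchmean") * t * t
    loss_child = F.kl_div(F.log_softmax(child_logits / t, dim=1),
                          F.softmax(parent_logits.detach() / t, dim=1),
                          reduction="batchmean") * t * t

    parent_opt.zero_grad()
    loss_parent.backward()
    parent_opt.step()

    child_opt.zero_grad()
    loss_child.backward()
    child_opt.step()
    return loss_parent.item(), loss_child.item()

if __name__ == "__main__":
    # Toy demo with stand-in linear models and random stand-in bridge samples.
    parent = torch.nn.Linear(16, 10)   # larger node's model (stand-in)
    child = torch.nn.Linear(16, 10)    # weaker end device's model (stand-in)
    opt_p = torch.optim.SGD(parent.parameters(), lr=0.1)
    opt_c = torch.optim.SGD(child.parameters(), lr=0.1)
    bridge = torch.randn(32, 16)       # stand-in for generated bridge samples
    print(mutual_distill_step(parent, child, bridge, opt_p, opt_c))
```

Because only predictions on synthetic bridge samples cross the parent-child link, a sketch like this lets each tier train a model sized to its own capacity, which is the property the abstract attributes to FedAgg.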