RingSFL: An Adaptive Split Federated Learning Towards Taming Client
Heterogeneity
Abstract
Federated learning (FL) has gained increasing attention for its
ability to collaboratively train models while preserving client data
privacy. However, vanilla FL cannot adapt to client heterogeneity:
stragglers degrade training efficiency, and the scheme remains
vulnerable to privacy leakage. To address these issues, this paper
proposes RingSFL, a novel distributed learning scheme that integrates FL
with a model split mechanism to adapt to client heterogeneity while
maintaining data privacy. In RingSFL, all clients form a ring topology.
Instead of training its model locally, each client splits its model and
trains it across all clients along the ring in a pre-defined
direction. By properly setting the propagation lengths of heterogeneous
clients, the straggler effect is mitigated, and the training efficiency
of the system is significantly enhanced. Additionally, since the local
models are blended, an eavesdropper is less likely to obtain the
complete model and recover the raw data, thus improving data privacy.
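To illustrate how propagation lengths might adapt to heterogeneity, the following is a minimal sketch in which each client's share of the model's layers is set in proportion to its compute speed, so faster clients train more layers and all clients finish at roughly the same time. The function name and the proportional-allocation rule are assumptions for illustration; the paper's actual allocation scheme may differ.

```python
def assign_propagation_lengths(compute_speeds, total_layers):
    """Split `total_layers` model layers among ring clients in proportion
    to each client's compute speed, mitigating the straggler effect.

    This is a hypothetical allocation rule, not necessarily the one
    used in RingSFL.
    """
    total_speed = sum(compute_speeds)
    # Real-valued shares, then round down while preserving the total.
    raw = [total_layers * s / total_speed for s in compute_speeds]
    lengths = [int(x) for x in raw]
    remainder = total_layers - sum(lengths)
    # Hand leftover layers to clients with the largest fractional parts.
    order = sorted(range(len(raw)),
                   key=lambda i: raw[i] - lengths[i], reverse=True)
    for i in order[:remainder]:
        lengths[i] += 1
    return lengths

# Example: three clients with relative speeds 1, 2, 3 share a 12-layer model.
print(assign_propagation_lengths([1, 2, 3], 12))  # -> [2, 4, 6]
```

Under this rule, the per-client propagation length grows with compute capability, which matches the abstract's claim that properly chosen lengths balance the load across heterogeneous clients.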
The experimental results on both simulation and prototype systems show
that RingSFL can achieve better convergence performance than benchmark
methods on independent and identically distributed (IID) and non-IID
datasets, while effectively preventing eavesdroppers from recovering
training data.