
TravellingFL: Communication Efficient Peer-to-Peer Federated Learning
  • Vansh Gupta,
  • Alka Luqman,
  • Nandish Chattopadhyay,
  • Anupam Chattopadhyay,
  • Dusit Niyato

Abstract

Machine learning and artificial intelligence are two key emerging technologies in computer science that require vast amounts of data for meaningful application. This requirement is usually met by pooling data from multiple sources, which is often difficult to implement in practice due to strict data-privacy constraints. Peer-to-peer federated learning is a distributed machine learning paradigm whose primary goal is to learn a well-performing global model by collaboratively training a shared model across different data hubs without the need to share data. Owing to its immense practical applications, there is growing attention to the various challenges of efficient federated learning, including communication efficiency, assumptions on connectivity, data heterogeneity, and enhanced privacy. In this paper, we address the communication efficiency of peer-to-peer federated learning by modeling it within a graph-theoretical framework. We show that one can draw on a range of graph-based algorithms to construct an efficient communication algorithm on a connected network, thereby matching the inference efficiency of centralized federated learning as well as that of a consolidated dataset. We conduct experiments with varied graph formations and sizes to validate our claims.
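The abstract does not specify which graph-based algorithm the paper adopts, but the framing it describes — peers as vertices, link costs as edge weights, and a communication schedule built over a connected network — can be illustrated with a classical example. The sketch below uses Prim's minimum spanning tree algorithm to pick a minimum-cost set of links over which peers could exchange model updates; the peer count, edge costs, and function name are hypothetical illustrations, not the paper's actual method.

```python
import heapq

def min_cost_spanning_tree(n, edges):
    """Prim's algorithm: given n peers and weighted links
    (u, v, cost), return the total cost and the edge set of a
    minimum-cost subgraph that keeps the network connected.
    In a P2P federated-learning setting, restricting model
    exchange to such a tree bounds communication cost while
    still letting updates reach every peer."""
    # Build an adjacency list over the peer graph.
    adj = {u: [] for u in range(n)}
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    visited = {0}              # grow the tree from peer 0
    heap = list(adj[0])
    heapq.heapify(heap)
    tree, total = [], 0
    while heap and len(visited) < n:
        w, v = heapq.heappop(heap)
        if v in visited:
            continue           # edge leads back into the tree
        visited.add(v)
        tree.append((w, v))
        total += w
        for edge in adj[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return total, tree

# Four peers with pairwise communication costs (hypothetical values).
edges = [(0, 1, 4), (0, 2, 1), (1, 2, 2), (1, 3, 5), (2, 3, 8)]
cost, links = min_cost_spanning_tree(4, edges)
# cost == 8: the tree keeps links of cost 1, 2, and 5.
```

Any spanning structure (or, as the title "TravellingFL" hints, a traversal-based route) could replace the MST here; the point is only that communication scheduling reduces to a well-studied graph problem.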