
Cost-Efficient Feature Selection for Horizontal Federated Learning
  • Sourasekhar Banerjee,
  • Devvjiit Bhuyan,
  • Erik Elmroth,
  • Monowar Bhuyan
Corresponding Author: Sourasekhar Banerjee ([email protected])


Horizontal Federated Learning exhibits substantial similarity in feature space across distinct clients. However, not all features contribute significantly to training the global model, and the curse of dimensionality slows training down. Removing irrelevant and redundant features from the feature space therefore makes training faster and less expensive. This work aims to identify a common feature subset across clients in federated settings. Banerjee et al. introduced Fed-FiS [1]; here we propose a hybrid approach, Fed-MOFS, in which mutual information and clustering are used to select local features at each client. Local feature selection is identical in both approaches, but Fed-FiS uses a scoring function to compute a global ranking of each feature, whereas Fed-MOFS exploits multi-objective optimization to rank features by higher relevance and lower redundancy. We then select the feature subset used for learning based on these global ranks. Empirically, we evaluated the performance, stability, and efficacy of Fed-FiS and Fed-MOFS on 12 datasets, comparing them with conventional techniques such as ANOVA and RFE as well as a federated feature selection method, FSHFL. The experimental results demonstrate that both Fed-FiS and Fed-MOFS improve the performance of the global model even after a 50% reduction in feature-space size, and both are at least 2× faster than FSHFL. We also verified the effect of feature selection on the convergence of the global model. The computational complexities of Fed-FiS and Fed-MOFS are O(d²) and O(2d²), respectively, lower than those of state-of-the-art methods.
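The two-stage idea the abstract describes (local relevance scoring via mutual information at each client, then a global multi-objective ranking that trades relevance against redundancy) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names are made up, the clustering step is omitted, and Pareto-style non-dominated sorting stands in for the multi-objective optimization.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) between two discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    # Sum over observed (x, y) pairs of p(x,y) * log( p(x,y) / (p(x) p(y)) ).
    return sum((c / n) * math.log((c * n) / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def local_feature_scores(feature_columns, labels):
    """Relevance score of each feature column: MI with the class labels."""
    return [mutual_information(col, labels) for col in feature_columns]

def pareto_ranks(relevance, redundancy):
    """Rank features by non-dominated sorting: maximize relevance,
    minimize redundancy. Rank 1 is the best (non-dominated) front."""
    remaining = set(range(len(relevance)))
    ranks, front_no = [0] * len(relevance), 1
    while remaining:
        # A feature is in the current front if no remaining feature
        # dominates it on both objectives (with at least one strict).
        front = [i for i in remaining
                 if not any(j != i
                            and relevance[j] >= relevance[i]
                            and redundancy[j] <= redundancy[i]
                            and (relevance[j] > relevance[i]
                                 or redundancy[j] < redundancy[i])
                            for j in remaining)]
        for i in front:
            ranks[i] = front_no
        remaining -= set(front)
        front_no += 1
    return ranks
```

For example, a feature column identical to the labels scores higher MI than a near-random one, and `pareto_ranks([0.9, 0.5, 0.9], [0.1, 0.1, 0.5])` places the first feature alone on the best front because it dominates the other two.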
Submitted to TechRxiv: 12 Feb 2024
Published in TechRxiv: 14 Feb 2024