MiFL: Multi-Input Neural Networks in Federated Learning
  • Bruno Casella,
  • Walter Riviera,
  • Marco Aldinucci,
  • Gloria Menegaz

Abstract

Driven by the Deep Learning (DL) revolution, Artificial Intelligence (AI) has become a fundamental tool for many biomedical tasks, including AI-assisted diagnosis. These include the analysis and classification of 2D and 3D images, where, for some tasks, DL exhibits superhuman performance. Diagnostic imaging, however, is not the only diagnostic tool. Tabular data, such as personal data, vital signs, and genomic/blood tests, are commonly collected for every patient entering a clinical institution; yet, although they carry diagnostic information, they are rarely considered in DL pipelines. Training DL models requires large datasets, often larger than any single institution can provide, so data must be pooled from different sites. Data pooling, in turn, raises concerns about data access and movement across institutions, spanning multiple dimensions such as performance, energy efficiency, privacy, criticality, and security. Federated Learning (FL) is a cooperative learning paradigm that addresses these concerns by moving models instead of data across institutions. This paper proposes a federated multi-input model that leverages both images and tabular data, providing a proof of concept of the feasibility of multi-input FL architectures. The proposed model was evaluated on two showcases: the prognosis of CoViD-19 and the stratification of patients with Alzheimer's disease. Results show that enabling multi-input architectures in the FL framework improves both accuracy and generalizability with respect to non-federated models, while ensuring the security and data protection peculiar to FL.
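To make the multi-input idea concrete, the sketch below shows a minimal two-branch network (a CNN branch for images and an MLP branch for tabular features, fused by concatenation before a classification head) together with a FedAvg-style weighted average of client parameters. This is an illustrative PyTorch sketch under assumed settings, not the paper's actual architecture or training code: the `MultiInputNet` and `fedavg` names, the layer sizes, and the single-channel 64x64 image input are all hypothetical.

```python
import torch
import torch.nn as nn


class MultiInputNet(nn.Module):
    """Toy multi-input network: a CNN branch for images and an MLP
    branch for tabular features, fused by concatenation (hypothetical)."""

    def __init__(self, n_tabular_features: int, n_classes: int = 2):
        super().__init__()
        # Image branch (assumes single-channel 2D inputs, e.g. 64x64 slices).
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Tabular branch (vital signs, demographics, blood tests, ...).
        self.tabular_branch = nn.Sequential(
            nn.Linear(n_tabular_features, 32), nn.ReLU(),
        )
        # Classification head over the concatenated embeddings.
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        fused = torch.cat(
            [self.image_branch(image), self.tabular_branch(tabular)], dim=1
        )
        return self.head(fused)


def fedavg(state_dicts, sample_counts):
    """FedAvg-style aggregation: average client weights, weighted by
    the number of local samples, without exchanging any raw data."""
    total = sum(sample_counts)
    return {
        key: sum(sd[key] * (n / total) for sd, n in zip(state_dicts, sample_counts))
        for key in state_dicts[0]
    }


if __name__ == "__main__":
    model = MultiInputNet(n_tabular_features=10)
    logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 10))
    print(logits.shape)  # torch.Size([4, 2])
```

In this sketch, each institution would train a local copy of the two-branch model on its own images and tabular records, and only the resulting `state_dict` parameters would be sent for aggregation, which is the FL property the abstract refers to.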