
A Comparison of Global Sensitivity Analysis Methods for Explainable AI with an Application in Genomic Prediction
  • Bas van Stein,
  • Elena Raponi,
  • Zahra Sadeghi,
  • Niek Bouman,
  • Roeland van Ham,
  • Thomas Bäck
Bas van Stein
Leiden University

Corresponding Author: [email protected]


Abstract

Explainable Artificial Intelligence (XAI) is an increasingly important field of research, required to bring AI to the next level in real-world applications. Global sensitivity analysis (GSA) methods play an important role in XAI, as they can provide an understanding of which (groups of) parameters have a high influence on the predictions of machine learning models and on the output of simulators and real-world processes. In this paper, we survey global sensitivity analysis methods in an XAI context and present both a qualitative and a quantitative analysis of these methods under different conditions. In addition to the overview and comparison, we propose an open-source application, GSAreport, that allows users to easily generate extensive reports using a carefully selected set of global sensitivity analysis methods, chosen depending on the number of dimensions and samples, to gain a deep understanding of the role of each feature for a given model or data set. Finally, we apply the discussed methods to a complex real-world application of genomic prediction and draw conclusions about when to use which GSA method.
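To illustrate the kind of variance-based (Sobol) sensitivity indices such global methods estimate, here is a minimal sketch of the classical "pick-freeze" first-order estimator on the Ishigami benchmark function. This is a generic textbook construction, not the paper's GSAreport implementation; the sample size and seed are arbitrary illustrative choices.

```python
import math
import random

random.seed(42)

def ishigami(x1, x2, x3, a=7.0, b=0.1):
    # Ishigami test function, a standard GSA benchmark on [-pi, pi]^3
    return math.sin(x1) + a * math.sin(x2) ** 2 + b * x3 ** 4 * math.sin(x1)

N, D = 20000, 3
lo, hi = -math.pi, math.pi

# Two independent Monte Carlo sample matrices A and B
A = [[random.uniform(lo, hi) for _ in range(D)] for _ in range(N)]
B = [[random.uniform(lo, hi) for _ in range(D)] for _ in range(N)]

yA = [ishigami(*row) for row in A]
yB = [ishigami(*row) for row in B]
mean = sum(yA) / N
var = sum((y - mean) ** 2 for y in yA) / N

S1 = []
for i in range(D):
    # "Pick-freeze": rows of B, but column i taken ("frozen") from A
    AB = [b_row[:i] + [a_row[i]] + b_row[i + 1:] for a_row, b_row in zip(A, B)]
    yAB = [ishigami(*row) for row in AB]
    # First-order Sobol index estimator: Cov(yA, yAB - yB) / Var(y)
    s = sum(ya * (yab - yb) for ya, yab, yb in zip(yA, yAB, yB)) / N / var
    S1.append(s)

# Analytic values for Ishigami (a=7, b=0.1) are approximately
# S1 = 0.314, S2 = 0.442, S3 = 0
print([round(s, 2) for s in S1])
```

In practice one would use a dedicated library (e.g. SALib with quasi-random Saltelli sampling) rather than plain Monte Carlo, which is what tools like GSAreport build on; the sketch only shows the underlying estimator.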
Published in IEEE Access, volume 10, pages 103364-103381, 2022. DOI: 10.1109/ACCESS.2022.3210175