
A Review of Evaluation Approaches for Explainable AI With Applications in Cardiology
  • Ahmed Salih,
  • Ilaria Boscolo Galazzo,
  • Polyxeni Gkontra,
  • Elisa Rauseo,
  • Aaron Mark Lee,
  • Karim Lekadir,
  • Petia Radeva,
  • Steffen Petersen,
  • Gloria Menegaz
Ahmed Salih
University of Leicester

Corresponding Author: [email protected]


Abstract

Explainable artificial intelligence (XAI) elucidates the decision-making process of complex AI models and is important in building trust in model predictions. The explanations produced by XAI methods themselves require evaluation, both for accuracy and reasonableness and in the context in which the underlying AI model is used. This review details how XAI is evaluated in cardiac AI applications and finds that, of the studies examined, 37% evaluated XAI quality using literature results, 11% used clinicians as domain experts, 11% used proxies or statistical analysis, and the remaining 43% did not assess the XAI methods they used at all. We aim to inspire additional studies within healthcare, urging researchers not only to apply XAI methods but also to systematically assess the resulting explanations, as a step towards developing trustworthy and safe models.