A Trustworthy View on XAI Method Evaluation
  • Ding Li,
  • Yan Liu,
  • Jun Huang,
  • Zerui Wang
Ding Li
Concordia University

Corresponding Author: [email protected]

Abstract

As the demand grows to build end-user trust in AI models, practitioners have started to build and configure customized XAI (Explainable Artificial Intelligence) methods. The challenge is the lack of systematic evaluation of newly proposed XAI methods, which limits confidence in XAI explanations in practice. In this paper, we follow a process of XAI method development and define two metrics, consistency and efficiency, to guide the evaluation of trustworthy explanations. We demonstrate the development of a new XAI method for feature interactions, called Mean-Centroid Preddiff, which analyzes and explains the feature importance order using a clustering algorithm. Following the process, we cross-validate Mean-Centroid Preddiff against existing XAI methods; the results show comparable consistency and a gain in computational efficiency. This practice demonstrates the core activities in the trustworthy evaluation of a new XAI method through rigorous cross-validation on consistency and efficiency.
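The abstract describes a perturbation-based method that ranks feature importance with the help of a clustering algorithm. The paper's exact formulation is not given here, so the following is only a minimal sketch of a centroid-based PredDiff-style score: cluster the inputs with k-means, replace one feature at a time with the corresponding centroid coordinate of each sample's cluster, and score the feature by the mean absolute change in the model's output. The function names (`kmeans`, `mean_centroid_preddiff`) and all details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """A minimal k-means for illustration (not the paper's clustering step)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest centroid.
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        # Move each centroid to the mean of its assigned samples.
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return centroids, labels

def mean_centroid_preddiff(model, X, k=3):
    """Hypothetical sketch: importance of feature j = mean absolute change in
    the model output when feature j is replaced by the sample's cluster-centroid
    value. `model` maps an (n, d) array to an (n,) array of predictions."""
    centroids, labels = kmeans(X, k)
    base = model(X)
    importances = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] = centroids[labels, j]  # centroid value, per sample
        importances[j] = np.mean(np.abs(base - model(X_pert)))
    return importances
```

For a linear model whose weight on one feature dominates, this sketch assigns that feature the largest score, which is the kind of importance ordering the consistency metric would compare across XAI methods.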