TechRxiv

A Trustworthy View on XAI Method Evaluation

Preprint posted on 2022-11-14, 02:45, authored by Ding Li, Yan Liu, Jun Huang, Zerui Wang

As the demand to develop end-user trust in AI models grows, practitioners have started to build and configure customized XAI (Explainable Artificial Intelligence) methods. The challenge is the lack of systematic evaluation of newly proposed XAI methods, which limits confidence in XAI explanations in practice. In this paper, we follow a process of XAI method development and define two metrics, consistency and efficiency, to guide the evaluation of trustworthy explanations. We demonstrate the development of a new XAI method for feature interactions called Mean-Centroid Preddiff, which analyzes and explains the feature importance order using a clustering algorithm. Following the process, we cross-validate Mean-Centroid Preddiff against existing XAI methods; the results show comparable consistency and a gain in computational efficiency. This practice helps practitioners adopt the core activities of trustworthy evaluation for a new XAI method through rigorous cross-validation of consistency and efficiency.
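
The abstract does not spell out the algorithm behind Mean-Centroid Preddiff, so the sketch below is only a minimal illustration of the general idea under stated assumptions: a PredDiff-style score (mean absolute change in prediction when a feature is replaced by a baseline value) is computed per feature, the scores are grouped by k-means, features are ordered by their cluster centroid, and consistency against another XAI method's ranking is measured by Kendall's tau. All names (model_predict, preddiff_scores, centroid_rank, rank_consistency) are hypothetical and do not reproduce the authors' implementation.

    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.stats import kendalltau

    def preddiff_scores(model_predict, X, baseline):
        # Mean absolute prediction difference when each feature is set to a baseline value.
        base_pred = model_predict(X)
        scores = np.zeros(X.shape[1])
        for j in range(X.shape[1]):
            X_perturbed = X.copy()
            X_perturbed[:, j] = baseline[j]
            scores[j] = np.mean(np.abs(base_pred - model_predict(X_perturbed)))
        return scores

    def centroid_rank(scores, n_clusters=3):
        # Cluster the per-feature scores, then order features by the mean (centroid)
        # of their cluster, breaking ties within a cluster by the raw score.
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores.reshape(-1, 1))
        centroids = np.array([scores[labels == k].mean() for k in range(n_clusters)])
        return sorted(range(len(scores)),
                      key=lambda j: (centroids[labels[j]], scores[j]),
                      reverse=True)

    def rank_consistency(order_a, order_b):
        # Consistency metric: Kendall rank correlation between two importance orderings.
        n = len(order_a)
        rank_a = np.empty(n); rank_a[list(order_a)] = np.arange(n)
        rank_b = np.empty(n); rank_b[list(order_b)] = np.arange(n)
        tau, _ = kendalltau(rank_a, rank_b)
        return tau

A consistency value close to 1 would indicate that the new method and an existing XAI method (e.g., a SHAP-based ranking) agree on the feature importance order, while efficiency can be compared simply by timing the two explanation pipelines on the same data.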

History

Email Address of Submitting Author

ding.li@mail.concordia.ca

ORCID of Submitting Author

0000-0001-5311-953X

Submitting Author's Institution

Concordia University

Submitting Author's Country

Canada
