Toward Accountable and Explainable Artificial Intelligence Part One: Theory and Examples
  • Masood Khan, Curtin University (Corresponding Author: [email protected])
  • Jordan Vice

Abstract

After reviewing the current state of explainable Artificial Intelligence (XAI) capabilities in Artificial Intelligence (AI) systems developed for critical domains such as criminology, engineering, governance, health, law and psychology, this paper proposes a domain-independent Accountable and eXplainable Artificial Intelligence (AXAI) capability framework. The proposed AXAI framework extends the XAI capability to let AI systems share their decisions and adequately explain the underlying reasoning processes. The aim is to help AI system developers overcome algorithmic biases and system limitations by incorporating domain-independent AXAI capabilities. Existing XAI methods neither separate nor quantify measures of comprehensibility, accuracy and accountability, so incorporating and assessing XAI capabilities remains difficult. Assessment of the AXAI capabilities of two AI systems in this paper demonstrates that the proposed AXAI framework facilitates separation and measurement of comprehensibility, predictive accuracy and accountability. The AXAI framework allows for the delineation of AI systems in a three-dimensional AXAI space. It measures comprehensibility as the readiness of a human to apply the acquired knowledge. System accuracy is measured in terms of the ratio of test to training data, the training data size, and the observed number of false-positive inferences. Finally, the AXAI framework measures accountability in terms of the inspectability of the input cues, the processed data and the output information, for addressing any legal and ethical issues.
Published in 2022 in IEEE Access, volume 10, pages 99686-99701. DOI: 10.1109/ACCESS.2022.3207812