Abstract
Emerging cyber threats, combined with increasing dependence on vulnerable
cyber networks, have put all stakeholders at risk, making Intrusion
Detection Systems (IDS) an essential network security requirement.
Several IDS have been proposed in the past decade to protect systems
from cyber-attacks. Machine learning (ML)-based IDS have shown
remarkable performance on conventional cyber threats. However, the
emergence of adversarial attacks in the cyber domain highlights the
need to upgrade these systems, as conventional ML-based approaches are
vulnerable to such attacks. Therefore, the proposed IDS framework
leverages the performance of conventional ML-based IDS and integrates it
with Explainable AI (XAI) to counter adversarial attacks. The global
explanation of the AI model, extracted via SHAP (SHapley Additive
exPlanations) during the training phase of the primary Random Forest
Classifier (RFC), is used to assess the credibility of predicted
outcomes; outcomes with low credibility are then reassessed by
secondary classifiers. This SHAP-based approach helps filter out
disguised malicious network traffic and can also enhance user
trust by adding transparency to the decision-making process. Adversarial
robustness of the proposed IDS was assessed using the Hop Skip Jump
Attack and the CICIDS dataset, on which the IDS achieved 98.5% and 100%
accuracy, respectively. Furthermore, the performance of the proposed IDS is
compared with conventional algorithms using recall, precision, accuracy,
and F1-score as evaluation metrics. This comparative analysis and series
of experiments endorse the credibility of the proposed scheme,
demonstrating that integrating XAI with conventional IDS can help ensure
the confidentiality, integrity, and availability of cyber networks.