
Towards Evaluating the Robustness of Deep Intrusion Detection Models in Adversarial Environment
  • Sriram Srinivasan,
  • Simran K,
  • Vinayakumar R,
  • Akarsh Soman,
  • Soman KP
Sriram Srinivasan
Amrita Vishwa Vidyapeetham

Corresponding Author: [email protected]

Abstract

A Network Intrusion Detection System (NIDS) categorizes network traffic as either malicious or normal. The signature-based and anomaly-based methods are the traditional approaches to network intrusion detection. The signature-based approach can detect only familiar attacks, whereas the anomaly-based approach shows promising results in detecting new, unknown attacks. Machine Learning (ML) based approaches have been studied in the past for anomaly-based NIDS. In recent years, Deep Learning (DL) algorithms have been widely used for intrusion detection because of their capability to learn optimal feature representations automatically. Although DL-based approaches improve detection accuracy considerably, they are prone to adversarial attacks: an attacker can trick a model into classifying adversarial samples into a particular target class. In this paper, the performance of several ML and DL models for intrusion detection is analyzed in both adversarial and non-adversarial environments. The models are trained on the NSL-KDD dataset, which contains a total of 148,517 data points, and their robustness against adversarial samples is studied.
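To make the notion of an adversarial sample concrete, the sketch below shows one common way such samples are crafted against a neural-network NIDS: a small, targeted FGSM-style perturbation that nudges an attack record toward the "normal" class. This is a minimal illustration only, not the authors' implementation; the toy model, the 41-feature input width, the epsilon value, and the choice of FGSM are all assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): targeted FGSM-style adversarial
# perturbation of a network-traffic record against a toy feed-forward classifier.
import torch
import torch.nn as nn

class SimpleNIDS(nn.Module):
    """Toy binary classifier over pre-processed, scaled NSL-KDD-like feature vectors."""
    def __init__(self, n_features: int = 41):  # 41 features is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits for {normal, attack}
        )

    def forward(self, x):
        return self.net(x)

def fgsm_targeted(model, x, target, epsilon=0.05):
    """Step the input in the direction that lowers the loss for the target class."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), target)
    loss.backward()
    # Subtracting the gradient sign pushes the prediction toward `target`.
    return (x_adv - epsilon * x_adv.grad.sign()).detach()

# Usage: perturb a stand-in "attack" record so the model leans toward "normal" (class 0).
model = SimpleNIDS()
x = torch.rand(1, 41)          # placeholder for a scaled traffic record
target = torch.tensor([0])     # attacker's desired class: normal
x_adv = fgsm_targeted(model, x, target)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```

In practice, perturbations of real traffic features are further constrained so the modified record remains a valid flow; the sketch omits such domain constraints for brevity.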