Poisoning the Well: Adversarial Poisoning on ML-based Software-defined Network Intrusion Detection Systems
With the adoption of machine learning (ML) algorithms in modern network intrusion detection systems (NIDS), contemporary network communications can be protected from cyber threats efficiently. However, these ML algorithms are increasingly compromised by adversarial attacks that target the ML pipeline. In this paper, we demonstrate the feasibility of an adversarial attack, Cosine Similarity Label Manipulation (CSLM), which compromises the training labels of ML-based NIDS, and show how it degrades the ML pipeline. We demonstrate the efficacy of the attack against both single- and multi-controller software-defined network (SDN) setups. Results indicate that the proposed attacks substantially deteriorate classifier performance in single-controller SDNs: random forest (RF) classifiers deteriorate by ~50% under Min-CSLM attacks, and support vector machines (SVMs) by ~60% under Max-CSLM attacks. RF, SVM, and multi-layer perceptron (MLP) classifiers are also highly vulnerable to these attacks in multi-controller SDNs (MSDNs), where they incur the largest observed utility deterioration: MLP-based uniform MSDN setups suffer the greatest deterioration under both proposed CSLM attacks, with a ~28% decrease in performance, while SVM- and RF-based variable MSDNs decrease by ~30% and ~35%, respectively.
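To make the threat model concrete, the following is a minimal sketch of cosine-similarity-based label poisoning in the spirit of CSLM. It assumes a binary-labeled training set and a simple selection rule: flip the labels of the samples whose feature vectors are least (Min) or most (Max) cosine-similar to their class centroid. The function name, the centroid-based similarity score, and the flip budget are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def cslm_poison(X, y, budget, mode="min"):
    """Hypothetical CSLM-style label poisoning sketch (not the paper's
    exact procedure): flip the binary labels of the `budget` training
    samples whose feature vectors have the lowest ("min") or highest
    ("max") cosine similarity to their own class centroid."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    sims = np.empty(len(y))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        # Cosine similarity of each class member to the class centroid.
        num = X[idx] @ centroid
        den = np.linalg.norm(X[idx], axis=1) * np.linalg.norm(centroid) + 1e-12
        sims[idx] = num / den
    order = np.argsort(sims)  # ascending similarity
    chosen = order[:budget] if mode == "min" else order[-budget:]
    y_poisoned = y.copy()
    y_poisoned[chosen] = 1 - y_poisoned[chosen]  # flip binary labels
    return y_poisoned
```

An attacker with write access to the labeled training data would apply such a function before the NIDS classifier (RF, SVM, or MLP) is trained, which is why downstream accuracy degrades without any change to the model code itself.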
Email Address of Submitting Author: tapadhird@nevada.unr.edu
Submitting Author's Institution: University of Nevada, Reno
Submitting Author's Country: United States of America