Evaluating Model Robustness to Adversarial Samples in Network Intrusion Detection

Madeleine Schneider, David Aspinall, Nathaniel D. Bastian

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Adversarial machine learning, a technique which seeks to deceive machine learning (ML) models, threatens the utility and reliability of ML systems. This is particularly relevant in critical ML implementations such as those found in Network Intrusion Detection Systems (NIDS). This paper considers the impact of adversarial influence on NIDS and proposes ways to improve ML-based systems. Specifically, we consider five feature robustness metrics to determine which features in a model are most vulnerable, and four defense methods. These methods are tested on six ML models with four adversarial sample generation techniques. Our results show that across different models and adversarial generation techniques, there is limited consistency in which features are vulnerable or in the effectiveness of the defense methods.
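To illustrate the kind of adversarial sample generation the abstract refers to, here is a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic-regression "detector". The weights, bias, and feature values are hypothetical, and the paper's actual models and generation techniques are not reproduced here; this only shows the general mechanism of perturbing input features along the sign of the loss gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_sample(x, y, w, b, eps=0.1):
    """FGSM: perturb x by eps * sign(dL/dx) for a logistic-regression
    model p = sigmoid(w @ x + b) with cross-entropy loss."""
    p = sigmoid(w @ x + b)     # model's malicious-traffic probability
    grad = (p - y) * w         # analytic gradient of the loss w.r.t. x
    return x + eps * np.sign(grad)

# Toy detector: one weight per flow feature (hypothetical values).
w = np.array([1.5, -2.0, 0.7])
b = -0.2
x = np.array([0.8, 0.1, 0.5])  # a flow the model flags as malicious (y = 1)

x_adv = fgsm_sample(x, y=1.0, w=w, b=b, eps=0.1)
# The perturbed flow receives a lower malicious score than the original.
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

Because the perturbation per feature is bounded by `eps`, inspecting which features FGSM pushes hardest is one simple way to reason about per-feature vulnerability, which is the kind of question the paper's robustness metrics address.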
Original language: English
Title of host publication: 2021 IEEE International Conference on Big Data (Big Data)
Editors: Yixin Chen, Heiko Ludwig, Yicheng Tu, Usama Fayyad, Xingquan Zhu, Xiaohua Hu, Suren Byna, Xiong Liu, Jianping Zhang, Shirui Pan, Vagelis Papalexakis, Jianwu Wang, Alfredo Cuzzocrea, Carlos Ordonez
Number of pages: 10
ISBN (Electronic): 978-1-6654-3902-2
ISBN (Print): 978-1-6654-4599-3
Publication status: Published - 13 Jan 2022
Event: 2021 IEEE International Conference on Big Data - Online Conference
Duration: 15 Dec 2021 - 18 Dec 2021


Conference: 2021 IEEE International Conference on Big Data
Abbreviated title: BigData 2021

Keywords

  • machine learning
  • network security
  • adversarial artificial intelligence
  • model robustness evaluation


