Evaluating Model Robustness to Adversarial Samples in Network Intrusion Detection

Madeleine Schneider, David Aspinall, Nathaniel D. Bastian

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Adversarial machine learning, a technique that seeks to deceive machine learning (ML) models, threatens the utility and reliability of ML systems. This is particularly relevant for critical ML deployments such as those found in Network Intrusion Detection Systems (NIDS). This paper considers the impact of adversarial influence on NIDS and proposes ways to improve ML-based systems. Specifically, we consider five feature robustness metrics to determine which features in a model are most vulnerable, and four defense methods. These methods are tested on six ML models with four adversarial sample generation techniques. Our results show that, across different models and adversarial generation techniques, there is limited consistency in which features are vulnerable or in the effectiveness of the defense methods.
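To illustrate the kind of evaluation the abstract describes, the sketch below perturbs tabular network-flow features with an FGSM-style step against a logistic-regression detector and compares detection rates before and after. This is a hypothetical, minimal example only: the synthetic features, the logistic-regression model, and the FGSM attack are assumptions for illustration and are not the paper's specific metrics, models, or generation techniques.

```python
# Illustrative sketch only: FGSM-style perturbation of tabular NIDS features
# against a logistic-regression classifier. Not the paper's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# Hypothetical flow features: [duration, bytes_sent, bytes_recv, pkt_rate]
X_benign = rng.normal(loc=[1.0, 500, 400, 10], scale=[0.2, 50, 40, 2], size=(500, 4))
X_attack = rng.normal(loc=[5.0, 5000, 100, 80], scale=[1.0, 500, 20, 10], size=(500, 4))
X = np.vstack([X_benign, X_attack])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = malicious

scaler = MinMaxScaler().fit(X)
Xs = scaler.transform(X)

clf = LogisticRegression(max_iter=1000).fit(Xs, y)

def fgsm(clf, x, y_true, eps=0.05):
    """One FGSM step: move x in the sign of the loss gradient, which pushes
    a malicious sample toward a benign prediction."""
    w = clf.coef_.ravel()
    p = clf.predict_proba(x.reshape(1, -1))[0, 1]
    grad = (p - y_true) * w  # d(cross-entropy)/dx for logistic regression
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Perturb the malicious samples and measure how many still get detected.
X_mal = Xs[y == 1]
X_adv = np.array([fgsm(clf, x, 1.0) for x in X_mal])
print("detection rate (clean):", clf.predict(X_mal).mean())
print("detection rate (FGSM):", clf.predict(X_adv).mean())
```

A robustness evaluation in this spirit would repeat such comparisons across multiple models, attack techniques, and defenses, and examine which individual features the perturbations exploit.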
Original language: English
Title of host publication: 2021 IEEE International Conference on Big Data (Big Data)
Editors: Yixin Chen, Heiko Ludwig, Yicheng Tu, Usama Fayyad, Xingquan Zhu, Xiaohua Hu, Suren Byna, Xiong Liu, Jianping Zhang, Shirui Pan, Vagelis Papalexakis, Jianwu Wang, Alfredo Cuzzocrea, Carlos Ordonez
Publisher: Institute of Electrical and Electronics Engineers
Pages: 3343-3352
Number of pages: 10
ISBN (Electronic): 978-1-6654-3902-2
ISBN (Print): 978-1-6654-4599-3
DOIs
Publication status: Published - 13 Jan 2022
Event: 2021 IEEE International Conference on Big Data - Online Conference
Duration: 15 Dec 2021 - 18 Dec 2021
https://bigdataieee.org/BigData2021/index.html

Conference

Conference: 2021 IEEE International Conference on Big Data
Abbreviated title: BigData 2021
Period: 15/12/21 - 18/12/21
Internet address

Keywords

  • machine learning
  • network security
  • adversarial artificial intelligence
  • model robustness evaluation
