Abstract
Adversarial machine learning, a technique that seeks to deceive machine learning (ML) models, threatens the utility and reliability of ML systems. This is particularly relevant in critical ML deployments such as Network Intrusion Detection Systems (NIDS). This paper considers the impact of adversarial influence on NIDS and proposes ways to improve ML-based systems. Specifically, we consider five feature robustness metrics to determine which features in a model are most vulnerable, and four defense methods. These are tested on six ML models with four adversarial sample generation techniques. Our results show that, across different models and adversarial generation techniques, there is limited consistency in which features are vulnerable or in the effectiveness of any defense method.
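To make the idea of adversarial sample generation concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), one widely used generation technique, applied to a toy logistic-regression classifier. The model weights, feature values, and perturbation budget below are hypothetical illustrations and do not reproduce the paper's NIDS models, features, or exact techniques.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.1):
    """Perturb feature vector x in the direction that increases the
    cross-entropy loss of a logistic-regression model (w, b).

    x   : input feature vector
    y   : true label (0 or 1)
    eps : per-feature perturbation budget
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Hypothetical toy model and benign sample
w = np.array([2.0, -1.5, 0.5])
b = 0.1
x = np.array([0.4, 0.2, -0.3])
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.2)
print(x_adv)
```

Each feature is shifted by at most `eps` in the sign of the loss gradient, which is why per-feature robustness matters: features that carry large gradient magnitude are the ones an attacker perturbs to greatest effect.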
Original language | English |
---|---|
Title of host publication | 2021 IEEE International Conference on Big Data (Big Data) |
Editors | Yixin Chen, Heiko Ludwig, Yicheng Tu, Usama Fayyad, Xingquan Zhu, Xiaohua Hu, Suren Byna, Xiong Liu, Jianping Zhang, Shirui Pan, Vagelis Papalexakis, Jianwu Wang, Alfredo Cuzzocrea, Carlos Ordonez |
Publisher | Institute of Electrical and Electronics Engineers |
Pages | 3343-3352 |
Number of pages | 10 |
ISBN (Electronic) | 978-1-6654-3902-2 |
ISBN (Print) | 978-1-6654-4599-3 |
DOIs | |
Publication status | Published - 13 Jan 2022 |
Event | 2021 IEEE International Conference on Big Data - Online Conference |
Duration | 15 Dec 2021 → 18 Dec 2021 |
URL | https://bigdataieee.org/BigData2021/index.html |
Conference
Conference | 2021 IEEE International Conference on Big Data |
---|---|
Abbreviated title | BigData 2021 |
Period | 15/12/21 → 18/12/21 |
Internet address | https://bigdataieee.org/BigData2021/index.html |
Keywords
- machine learning
- network security
- adversarial artificial intelligence
- model robustness evaluation