Detection of Adversarial Attacks in AI-Based Intrusion Detection Systems Using Explainable AI

Research output: Contribution to journal › Article › peer-review

42 Scopus citations

Abstract

With the tremendous increase in networking devices connected to the Internet, network security has become a critical issue. Intrusion detection systems (IDSs) are an important component of network security. There are several methods for implementing an IDS, one of which is machine learning. Machine learning-based IDSs have improved substantially in performance and are now deployed in real systems. However, recent studies have shown that machine learning classification models are vulnerable to adversarial attacks. In this paper, we propose a framework for detecting adversarial attacks in machine learning-based intrusion detection systems using explainable AI. The proposed framework consists of two phases: initialization and detection. In the initialization phase, we train an IDS based on a support vector machine classification model and extract explanations of the Normal data records in the dataset using LIME (local interpretable model-agnostic explanations). In the detection phase, the classification results of the trained IDS are analyzed against these explanations to detect adversarial attacks.
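The abstract does not give the exact rule used to compare a record's explanation against the stored Normal explanations; the following is a minimal sketch of the detection-phase idea, assuming a hypothetical Normal profile of mean LIME feature weights and a simple Euclidean-distance threshold (both the feature names and the threshold are illustrative, not from the paper):

```python
import math

# Hypothetical artifact from the initialization phase: mean LIME feature
# weights extracted for records the trained IDS classified as Normal.
NORMAL_PROFILE = {"duration": 0.05, "src_bytes": 0.40, "dst_bytes": 0.35, "count": 0.20}

def explanation_distance(explanation, profile):
    """Euclidean distance between a record's LIME weights and the Normal profile."""
    return math.sqrt(sum((explanation.get(f, 0.0) - w) ** 2 for f, w in profile.items()))

def is_adversarial(explanation, profile=NORMAL_PROFILE, threshold=0.5):
    """Flag a record whose explanation deviates too far from the Normal profile."""
    return explanation_distance(explanation, profile) > threshold

# An explanation close to the Normal profile, and a heavily perturbed one.
benign = {"duration": 0.06, "src_bytes": 0.38, "dst_bytes": 0.33, "count": 0.22}
suspect = {"duration": 0.60, "src_bytes": -0.10, "dst_bytes": 0.05, "count": 0.90}
```

Here `is_adversarial(benign)` is `False` while `is_adversarial(suspect)` is `True`: an adversarially perturbed record may be classified as Normal, but its explanation tends to differ from those of genuine Normal records, which is what the framework exploits.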

Original language: English
Article number: 35
Journal: Human-centric Computing and Information Sciences
Volume: 11
DOIs
State: Published - 2021

Keywords

  • Adversarial attacks
  • Explainable AI
  • Intrusion detection systems
  • Machine learning
