
Security, Edge and Cloud Lab

Dimitri Galli

Ph.D. Student

Department of Engineering "Enzo Ferrari"
University of Modena and Reggio Emilia
Via Vivarelli, 10
41125 - Modena, Italy
Tel.: +39 0592056273
E-mail: dimitri.galli[AT]unimore.it

Publications

Conferences
Galli, D.; Venturi, A.; Marasco, I.; Marchetti, M.
2025 Joint National Conference on Cybersecurity, ITASEC and SERICS 2025
Abstract

Among Machine Learning (ML) models, Graph Neural Networks (GNNs) have been shown to improve the performance of modern Network Intrusion Detection Systems (NIDS). However, their black-box nature poses a significant challenge to their practical real-world deployment. In this context, researchers have developed eXplainable Artificial Intelligence (XAI) methods that reveal the inner workings of GNN models. Despite this, determining the most effective explainer is complex because different methods yield different explanations, and there are no standardized evaluation strategies. In this paper, we present an innovative approach for evaluating XAI methods in GNN-based NIDS. We evaluate explainers based on their capability to identify key graph components that an attacker can exploit to bypass detection: more accurate XAI algorithms can identify topological vulnerabilities, resulting in more effective attacks. We therefore assess the effectiveness of different explainers by measuring the severity of structural attacks guided by the corresponding explanations. Our case study compares five XAI techniques on two publicly available datasets containing real-world network traffic. Results show that the explainer based on Integrated Gradients (IG) generates the most accurate explanations, allowing attackers to refine their strategies.

Year: 2025 | Pages: n/a

ISSN: 1613-0073 | DOI: n/a
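
The abstract above evaluates explainers by the severity of the structural attacks their explanations enable. Below is a minimal, hypothetical PyTorch sketch of that idea, not the paper's actual code: all names (ToyGNN, integrated_gradients, attack_severity) and the dense-adjacency toy model are illustrative assumptions. It shows IG attributions computed over edges, then an edge-removal attack that deletes the top-scored edges and measures the drop in the detector's output.

```python
# Hypothetical sketch: rank an explainer by how damaging the structural
# attack guided by its explanation is. All names are illustrative; the
# paper's models, datasets, and explainers differ.
import torch

class ToyGNN(torch.nn.Module):
    """Minimal dense GCN-style graph classifier over an adjacency matrix."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, adj, x):
        h = torch.relu(adj @ self.w1(x))   # one round of message passing
        h = adj @ self.w2(h)               # second round
        return h.mean(dim=0)               # graph-level logits

def integrated_gradients(model, adj, x, target, steps=50):
    """IG attribution per edge: interpolate the adjacency matrix from an
    empty graph (baseline) to the input graph and accumulate gradients."""
    baseline = torch.zeros_like(adj)
    total = torch.zeros_like(adj)
    for alpha in torch.linspace(0.0, 1.0, steps):
        interp = (baseline + alpha * (adj - baseline)).requires_grad_(True)
        logits = model(interp, x)
        grad, = torch.autograd.grad(logits[target], interp)
        total += grad
    return (adj - baseline) * total / steps  # edge-level importance scores

def attack_severity(model, adj, x, target, scores, budget=3):
    """Delete the `budget` edges the explainer deems most important and
    measure the drop in the target-class score. A larger drop means the
    explanation pinpointed genuinely critical structure, i.e. a more
    accurate explainer enables a more effective structural attack."""
    before = model(adj, x)[target].item()
    attacked = adj.clone()
    top_edges = scores.flatten().topk(budget).indices
    attacked.view(-1)[top_edges] = 0.0       # structural attack: edge removal
    after = model(attacked, x)[target].item()
    return before - after
```

Under this reading, comparing explainers reduces to running attack_severity once per explainer on the same trained detector and ranking by the resulting score drop.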

Venturi, A.; Galli, D.; Stabili, D.; Marchetti, M.
8th Italian Conference on Cyber Security, ITASEC 2024
Abstract

Modern Network Intrusion Detection Systems (NIDS) involve Machine Learning (ML) algorithms to automate the detection process. Although this integration has significantly enhanced their efficiency, ML models have been found to be vulnerable to adversarial attacks, which alter the input data to fool the detectors into misclassification. Among the proposed countermeasures, adversarial training appears to be the most promising technique; however, it demands a large number of adversarial samples, which typically have to be produced manually. We overcome this limitation by introducing a novel methodology that employs a Graph AutoEncoder (GAE) to generate synthetic traffic records automatically. By design, the generated samples exhibit alterations in their attributes compared to the original netflows, making them suitable for use as adversarial samples during the adversarial training procedure. By injecting the generated samples into the training set, we obtain hardened detectors with better resilience to adversarial attacks. Our experimental campaign, based on a public dataset of real enterprise network traffic, also demonstrates that the proposed method improves the detection rates of the hardened detectors even in non-adversarial settings.

Year: 2024 | Pages: n/a

ISSN: 1613-0073 | DOI: n/a
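
The abstract above describes using a Graph AutoEncoder's imperfect reconstructions as automatically generated adversarial samples. The following is a minimal, hypothetical PyTorch sketch of that pipeline under stated assumptions: GraphAutoEncoder, generate_adversarial_flows, and the dense-adjacency shapes are all illustrative, not the paper's implementation.

```python
# Hypothetical sketch of GAE-based augmentation: train a graph autoencoder
# to reconstruct netflow attributes, then treat its slightly-perturbed
# reconstructions as adversarial samples for adversarial training.
import torch

class GraphAutoEncoder(torch.nn.Module):
    """Toy GAE: neighborhood-aware encoder plus attribute decoder."""
    def __init__(self, n_feats, latent):
        super().__init__()
        self.enc = torch.nn.Linear(n_feats, latent)
        self.dec = torch.nn.Linear(latent, n_feats)

    def forward(self, adj, flows):
        z = torch.relu(adj @ self.enc(flows))  # encode with graph context
        return self.dec(adj @ z)               # reconstructed flow attributes

def generate_adversarial_flows(gae, adj, flows, epochs=200, lr=1e-2):
    """Fit the GAE on the flows, then return its reconstructions. By design
    they deviate slightly from the originals in attribute space, which is
    the property the adversarial training procedure needs."""
    opt = torch.optim.Adam(gae.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(gae(adj, flows), flows)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return gae(adj, flows)                 # synthetic (perturbed) records

# Adversarial training step (sketch): augment the training set with the
# generated samples, keeping the original labels, and retrain the NIDS.
# hardened_X = torch.cat([flows, generate_adversarial_flows(gae, adj, flows)])
# hardened_y = torch.cat([labels, labels])
```

The design choice worth noting is that no attack algorithm is run at all: the GAE's reconstruction error itself supplies the attribute perturbations, which is what removes the manual sample-crafting bottleneck the abstract highlights.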