
Security, Edge and Cloud Lab

Dimitri Galli

Ph.D. Student

Department of Engineering "Enzo Ferrari"
University of Modena and Reggio Emilia
Via Vivarelli, 10
41125 - Modena, Italy
Tel.: +39 0592056273
E-mail: dimitri.galli[AT]unimore.it

Publications

Galli, D.; Venturi, A.; Stabili, D.; Andreolini, M.; Marchetti, M.

Conferences
23rd IEEE International Symposium on Network Computing and Applications, NCA 2025
Abstract

Graph Neural Networks (GNNs) represent a promising solution for Machine Learning (ML)-based Network Intrusion Detection Systems (NIDS), thanks to their ability to leverage both network flow features and topological patterns. While GNN classifiers demonstrate superior robustness against feature-based adversarial attacks compared to other ML detectors, they remain vulnerable to structural adversarial attacks, where an attacker perturbs the underlying network graph topology by injecting edges or inserting nodes. Such attacks pose a realistic and severe threat, undermining the reliability of GNN-based NIDS in practical deployments. While countermeasures have been proposed in the literature, they often rely on assumptions that are unrealistic in real-world cybersecurity scenarios. In this paper, we propose a defense framework based on adversarial training to strengthen GNN-based NIDS against structural attacks. We generate adversarial samples by strategically replacing the source and destination nodes in benign network flows, thereby efficiently mimicking edge injection attacks. We evaluate our approach on two widely used datasets (CTU-13 and TON-IoT) using E-GraphSAGE as the base GNN classifier. Experimental results show that our approach produces hardened detectors with superior detection performance on clean graphs and enhanced robustness against structural adversarial attacks.

Year: 2025 | Pages: 219 - 228

DOI: 10.1109/NCA67271.2025.00043
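The adversarial sample generation described in the abstract above (replacing the endpoints of benign flows to mimic edge injection) can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation: the flow-record format (dicts with 'src', 'dst', 'features', 'label' keys) and the function name are assumptions.

```python
import random

def make_structural_adversarial_flows(flows, candidate_nodes, n_samples, seed=0):
    """Create synthetic adversarial flows by rewiring benign flows.

    Each sample keeps the original flow features but replaces the source
    and destination endpoints with other nodes of the graph, mimicking an
    edge-injection attack on the flow-graph topology. Hypothetical record
    format: dicts with keys 'src', 'dst', 'features', 'label' (0 = benign).
    """
    rng = random.Random(seed)
    benign = [f for f in flows if f["label"] == 0]
    adversarial = []
    for _ in range(n_samples):
        base = rng.choice(benign)
        # Pick two distinct replacement endpoints
        new_src, new_dst = rng.sample(candidate_nodes, 2)
        adversarial.append({
            "src": new_src,
            "dst": new_dst,
            "features": base["features"],  # features unchanged: purely structural perturbation
            "label": 0,
        })
    return adversarial
```

The generated records would then be injected into the training set, so the hardened GNN learns to classify flows whose topology has been perturbed.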

Galli, D.; Venturi, A.; Marasco, I.; Marchetti, M.

Conferences
2025 Joint National Conference on Cybersecurity, ITASEC and SERICS 2025
Abstract

Among Machine Learning (ML) models, Graph Neural Networks (GNNs) have been shown to improve the performance of modern Network Intrusion Detection Systems (NIDS). However, their black-box nature poses a significant challenge to their practical deployment in the real world. In this context, researchers have developed eXplainable Artificial Intelligence (XAI) methods that reveal the inner workings of GNN models. Despite this, determining the most effective explainer is complex because different methods yield different explanations, and there are no standardized evaluation strategies. In this paper, we present an innovative approach for evaluating XAI methods in GNN-based NIDS. We evaluate explainers based on their capability to identify key graph components that an attacker can exploit to bypass detection. More accurate XAI algorithms can identify topological vulnerabilities, resulting in more effective attacks. We assess the effectiveness of different explainers by measuring the severity of structural attacks guided by the corresponding explanations. Our case study compares five XAI techniques on two publicly available datasets containing real-world network traffic. Results show that the explainer based on Integrated Gradients (IG) generates the most accurate explanations, allowing attackers to refine their strategies.

Year: 2025 | Pages: n/a - n/a

ISSN: 1613-0073 | DOI: n/a
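The evaluation idea in the abstract above (ranking explainers by the severity of the structural attacks their explanations enable) can be sketched as follows. The interfaces are assumptions for illustration, not the paper's: `explainer_scores` maps each explainer name to per-edge importance scores, and `evaluate_detection` is a hypothetical callable returning the detector's detection rate after a given set of edges is attacked.

```python
def rank_explainers_by_attack_severity(explainer_scores, evaluate_detection, k=5):
    """Rank XAI methods by the detection-rate drop their explanations cause.

    An explainer whose top-k most important edges, when attacked, produce
    the largest degradation is considered the most accurate at exposing
    topological vulnerabilities. (Illustrative sketch; both arguments are
    hypothetical interfaces, not the paper's.)
    """
    baseline = evaluate_detection(frozenset())  # detection rate with no attack
    severity = {}
    for name, scores in explainer_scores.items():
        top_edges = sorted(scores, key=scores.get, reverse=True)[:k]
        severity[name] = baseline - evaluate_detection(frozenset(top_edges))
    # Most severe degradation first
    return sorted(severity.items(), key=lambda kv: kv[1], reverse=True)
```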

Venturi, A.; Galli, D.; Stabili, D.; Marchetti, M.

Conferences
8th Italian Conference on Cyber Security, ITASEC 2024
Abstract

Modern Network Intrusion Detection Systems (NIDS) employ Machine Learning (ML) algorithms to automate the detection process. Although this integration has significantly enhanced their efficiency, ML models have been found vulnerable to adversarial attacks, which alter the input data to fool the detectors into producing a misclassification. Among the proposed countermeasures, adversarial training appears to be the most promising technique; however, it demands a large number of adversarial samples, which typically have to be manually produced. We overcome this limitation by introducing a novel methodology that employs a Graph AutoEncoder (GAE) to generate synthetic traffic records automatically. By design, the generated samples exhibit alterations in the attributes compared to the original netflows, making them suitable for use as adversarial samples during the adversarial training procedure. By injecting the generated samples into the training set, we obtain hardened detectors with better resilience to adversarial attacks. Our experimental campaign, based on a public dataset of real enterprise network traffic, also demonstrates that the proposed method improves the detection rates of the hardened detectors even in non-adversarial settings.

Year: 2024 | Pages: n/a - n/a

ISSN: 1613-0073 | DOI: n/a
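The core mechanism in the abstract above, an autoencoder whose imperfect reconstructions serve as automatically generated adversarial samples, can be sketched with a plain one-hidden-layer feature autoencoder. This is a deliberate simplification: the paper uses a Graph AutoEncoder (GAE), while the sketch below operates on netflow feature vectors only; dimensions and hyperparameters are arbitrary.

```python
import numpy as np

def generate_synthetic_flows(X, hidden_dim=4, epochs=500, lr=0.05, seed=0):
    """Generate synthetic traffic records with a simple feature autoencoder.

    A stand-in for the paper's GAE: trained to reconstruct netflow feature
    vectors, its imperfect reconstructions differ slightly from the
    originals, so they can be injected into the training set as adversarial
    samples. (Sketch only: a plain feature AE, not the graph-based model.)
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden_dim))
    W2 = rng.normal(0, 0.1, (hidden_dim, d))
    for _ in range(epochs):
        H = np.tanh(X @ W1)      # encoder
        X_hat = H @ W2           # linear decoder
        err = X_hat - X
        # Gradient descent on the mean squared reconstruction error
        gW2 = H.T @ err / n
        gH = err @ W2.T * (1 - H ** 2)
        gW1 = X.T @ gH / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    return np.tanh(X @ W1) @ W2  # synthetic (reconstructed) records
```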