Security, Cloud and Edge Lab

Andrea Venturi

Ph. D. Student

Department of Engineering "Enzo Ferrari"
University of Modena and Reggio Emilia
Via Vivarelli, 10
41125 - Modena, Italy
Tel.: +39 0592056273
E-mail: andrea.venturi[AT]
GPG Key: 0x17D89A50A78EB93F
Curriculum Vitae: Italian (Jan 2022), English (Jan 2022)


  • Machine and Deep Learning methods for Cybersecurity
  • Adversarial attacks against Machine Learning
  • Big data analytics for Cybersecurity
  • Network Intrusion Detection


Venturi, A.; Apruzzese, G.; Andreolini, M.; Colajanni, M.; Marchetti, M.

We present the first dataset that aims to serve as a benchmark for validating the resilience of botnet detectors against adversarial attacks. This dataset includes realistic adversarial samples generated by leveraging two widely used Deep Reinforcement Learning (DRL) techniques. These adversarial samples have been proven to evade state-of-the-art detectors based on Machine and Deep Learning algorithms. The initial corpus of malicious samples consists of network flows belonging to different botnet families presented in three public datasets containing real enterprise network traffic. We use these datasets to devise detectors capable of achieving state-of-the-art performance. We then train two DRL agents, based on Double Deep Q-Network and Deep Sarsa, to generate realistic adversarial samples: the goal is to achieve misclassifications by performing small modifications to the initial malicious samples. These alterations involve the features that an expert attacker can most realistically alter, and do not compromise the underlying malicious logic of the original samples. Our dataset represents an important contribution to the cybersecurity research community, as it is the first to include thousands of automatically generated adversarial samples able to thwart state-of-the-art classifiers with a high evasion rate. The adversarial samples are grouped by malware variant and provided in CSV format. Researchers can validate their defensive proposals by testing their detectors against the adversarial samples in the proposed dataset. Moreover, the analysis of these samples can pave the way to a deeper comprehension of adversarial attacks and to some degree of explainability of machine learning defensive algorithms, and can support the definition of novel, effective defensive techniques.

Year: 2021 | Pages: 106631 - 106639

ISSN: 2352-3409 | DOI: 10.1016/j.dib.2020.106631
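The evasion process described in the abstract can be sketched in miniature: an agent repeatedly applies small, semantics-preserving increments to a handful of attacker-controllable flow features until the detector misclassifies the flow. Everything below is illustrative, not the paper's actual method: the threshold "detector", the feature names, and the greedy random agent are stand-ins for the trained classifiers and the Double Deep Q-Network/Deep Sarsa agents used in the work.

```python
import random

# Hypothetical flow features an attacker can realistically alter
# (illustrative names, not the dataset's exact feature set).
def toy_detector(flow):
    """Stand-in for a trained ML detector: flags short, bursty flows."""
    return flow["duration"] < 10 and flow["total_packets"] > 50

def evade(flow, actions, max_steps=100, seed=0):
    """Greedy stand-in for the DRL agent: apply small feature
    increments until the detector no longer flags the flow."""
    rng = random.Random(seed)
    flow = dict(flow)
    for _ in range(max_steps):
        if not toy_detector(flow):
            return flow  # evasion achieved
        feat, delta = rng.choice(actions)
        flow[feat] += delta  # small change; malicious logic untouched
    return None  # evasion failed within the step budget

malicious = {"duration": 2.0, "sent_bytes": 300,
             "received_bytes": 900, "total_packets": 80}
actions = [("duration", 1.0), ("sent_bytes", 64), ("received_bytes", 64)]
adv = evade(malicious, actions)
```

A real agent would learn which action to pick from a reward signal (the detector's verdict) rather than choosing at random, but the action space of small feature perturbations is the same idea.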

Apruzzese, G.; Andreolini, M.; Marchetti, M.; Venturi, A.; Colajanni, M.

As cybersecurity detectors increasingly rely on machine learning mechanisms, attacks against these defenses escalate as well. Supervised classifiers are prone to adversarial evasion, and existing countermeasures suffer from many limitations: most solutions degrade performance in the absence of adversarial perturbations, cannot face novel attack variants, or are applicable only to specific machine learning algorithms. We propose the first framework that can protect botnet detectors from adversarial attacks through deep reinforcement learning mechanisms. It automatically generates realistic attack samples that can evade detection, and uses these samples to build an augmented training set from which hardened detectors are produced. In this way, we obtain more resilient detectors that work even against unforeseen evasion attacks, with the great merit of not penalizing their performance in the absence of specific attacks. We validate our proposal through an extensive experimental campaign that considers multiple machine learning algorithms and public datasets. The results highlight the improvements of the proposed solution over the state of the art. Our method paves the way to novel and more robust cybersecurity detectors based on machine learning applied to network traffic analytics.

Year: 2020 | Pages: 1975 - 1987

ISSN: 1932-4537 | DOI: 10.1109/TNSM.2020.3031843
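The hardening idea, retraining on a training set augmented with evasive samples, can be illustrated with a deliberately tiny example. The threshold "classifier" and feature below are hypothetical stand-ins for the framework's actual detectors and DRL-generated samples; only the augment-and-retrain pattern is the point.

```python
# Minimal sketch of adversarial hardening (stdlib only, illustrative).

def train_threshold(flows):
    """Toy detector: learn a duration threshold between the classes,
    assuming malicious flows are shorter than benign ones."""
    mal = [f["duration"] for f, y in flows if y == 1]
    ben = [f["duration"] for f, y in flows if y == 0]
    return (max(mal) + min(ben)) / 2

def predict(threshold, flow):
    return 1 if flow["duration"] < threshold else 0  # 1 = malicious

train = [({"duration": d}, 1) for d in (1.0, 2.0, 3.0)] + \
        [({"duration": d}, 0) for d in (20.0, 25.0, 30.0)]
baseline = train_threshold(train)

# Evasive variant: the attacker stretches duration just past the boundary,
# so the baseline detector misses it.
evasive = [({"duration": baseline + 1.0}, 1)]

# Hardening: retrain on the training set augmented with the evasive sample.
hardened = train_threshold(train + evasive)
```

After augmentation the learned boundary moves outward, so the previously evasive sample is caught without retouching the clean training data.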

Venturi, A.; Ferrari, M.; Marchetti, M.; Colajanni, M.
38th Annual ACM Symposium on Applied Computing, SAC 2023

Machine Learning (ML) algorithms are widely adopted in modern Network Intrusion Detection Systems (NIDS). Recent research proposes the use of Graph Neural Networks (GNN) to improve detection performance. Instead of analyzing each network flow independently, these novel algorithms operate over a graph representation of the data that takes the network topology into account. This paper presents a novel NIDS based on the Adversarially Regularized Graph Autoencoder (ARGA) algorithm. Unlike existing proposals, ARGA offers several advantages: it encodes both the topological information of the graph and the node features in a compact latent representation through an unsupervised autoencoder, and it derives robust embeddings through an additional regularization phase based on adversarial training. We also consider two ARGA variants, namely ARVGA, which employs a variational autoencoder, and ARVGA-AX, which also reconstructs content information. A large experimental campaign on two public datasets demonstrates that our proposals outperform other state-of-the-art GNN-based algorithms that already provide good results for network intrusion detection.

Year: 2023 | Pages: 1540 - 1548

ISBN: 9781450395175 | DOI: 10.1145/3555776.3577651
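The graph representation such GNN-based detectors operate on can be built directly from flow records. The construction below is one common scheme, not necessarily the paper's exact one: each flow becomes a node, and an edge links two flows that share an endpoint, so the topology of host interactions is preserved alongside per-flow features.

```python
# Sketch: turning flow records into a flow graph for a GNN-based NIDS.
# Illustrative construction; the paper's exact scheme may differ.
from collections import defaultdict
from itertools import combinations

flows = [
    {"id": 0, "src": "10.0.0.1", "dst": "10.0.0.9"},
    {"id": 1, "src": "10.0.0.2", "dst": "10.0.0.9"},
    {"id": 2, "src": "10.0.0.3", "dst": "10.0.0.4"},
]

# Group flow ids by the hosts they touch.
by_host = defaultdict(list)
for f in flows:
    by_host[f["src"]].append(f["id"])
    by_host[f["dst"]].append(f["id"])

# Connect every pair of flows sharing a host.
edges = set()
for ids in by_host.values():
    for a, b in combinations(sorted(ids), 2):
        edges.add((a, b))
```

An autoencoder such as ARGA would then learn a latent embedding from this adjacency structure plus per-node feature vectors; flows 0 and 1 end up linked here because both involve host 10.0.0.9.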

Venturi, A.; Zanasi, C.; Marchetti, M.; Colajanni, M.
21st IEEE International Symposium on Network Computing and Applications, NCA 2022

The rise of sequential Machine Learning (ML) methods has paved the way for a new generation of Network Intrusion Detection Systems (NIDS) that base their classification on the temporal patterns exhibited by malicious traffic. Previous work presents successful algorithms in this field, but only a few attempts assess their robustness in real-world contexts. In this paper, we aim to fill this gap by presenting a novel evaluation methodology. In particular, we propose a new time-based adversarial attack in which we simulate a delay in the malicious communications that changes the arrangement of the samples in the test set. Moreover, we design an innovative evaluation technique simulating a worst-case training scenario in which the last portion of the training set does not include any malicious flow. Through these techniques, we can evaluate how sensitive sequential ML-based NIDS are to modifications that an adaptive attacker might apply at the temporal level, and we can verify their robustness to the unpredictable traffic produced by modern networks. Our experimental campaign validates our proposal against a recent NIDS trained on a public dataset for botnet detection. The results demonstrate its high resistance to temporal adversarial attacks, but also a drastic performance drop when even just 1% of benign flows are injected at the end of the training set. Our findings raise questions about the reliable deployment of sequential ML-based NIDS in practice, and at the same time can guide researchers in developing more robust defensive tools in the future.

Year: 2022 | Pages: 235 - 242

ISBN: 979-8-3503-9730-7 | DOI: 10.1109/NCA57778.2022.10013643
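The time-based attack amounts to shifting the timestamps of malicious flows and re-sorting the test set, so a sequential detector sees the samples in a different order. A minimal sketch, with illustrative field names rather than the paper's actual schema:

```python
# Sketch of the time-based adversarial attack: delay malicious flows
# by a fixed offset and re-sort by timestamp, changing the sample
# order a sequential NIDS observes. Field names are illustrative.

def delay_attack(flows, delay):
    shifted = [
        {**f, "ts": f["ts"] + delay} if f["label"] == "malicious" else dict(f)
        for f in flows
    ]
    return sorted(shifted, key=lambda f: f["ts"])

test_set = [
    {"ts": 0.0, "label": "malicious"},
    {"ts": 1.0, "label": "benign"},
    {"ts": 2.0, "label": "malicious"},
    {"ts": 3.0, "label": "benign"},
]
reordered = delay_attack(test_set, delay=2.5)
```

The flows themselves are untouched except for their timing, which is exactly the kind of modification an adaptive attacker controls; the interleaving of benign and malicious samples, and hence the temporal patterns the detector learned, changes.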

Venturi, A.; Stabili, D.; Pollicino, F.; Bianchi, E.; Marchetti, M.
21st IEEE International Symposium on Network Computing and Applications, NCA 2022

This paper presents a comparative analysis of different Machine Learning-based detection algorithms designed for Controller Area Network (CAN) communication, evaluated on three different datasets. This work focuses on addressing two current limitations of the related scientific literature: the quality of the publicly available datasets and the lack of public implementations of the detection solutions presented in the literature. Since these issues prevent the reproducibility of published results and their comparison with novel detection solutions, we remark that all security researchers working in this field must address them properly to advance the current state of the art in CAN intrusion detection systems. This paper strives to solve these issues by presenting a comparison of existing works on publicly available datasets.

Year: 2022 | Pages: 81 - 88

ISBN: 979-8-3503-9730-7 | DOI: 10.1109/NCA57778.2022.10013527

Venturi, Andrea; Zanasi, Claudio
2021 IEEE 20th International Symposium on Network Computing and Applications (NCA)

Nowadays, Machine Learning (ML) solutions are widely adopted in modern malware and network intrusion detection systems. While these algorithms offer great performance, several studies demonstrate their vulnerability to adversarial attacks, which slightly modify the input samples to compromise the correct behavior of the detector. Although this issue is extremely relevant in security-related contexts, the available defenses are still immature. On the positive side, cybersecurity poses additional challenges to the practicability of these attacks with respect to other domains. Previous studies focus exclusively on the effectiveness of the proposals, but do not discuss their actual feasibility. Based on this insight, in this paper we provide an overview of adversarial attacks and countermeasures for ML-based malware and network intrusion detection systems to assess their applicability in real-world scenarios. In particular, we identify the constraints that need to be considered in the cybersecurity domain and discuss the limitations of meaningful examples of previous proposals. Our work can guide practitioners in devising novel hardening solutions against more realistic threat models.

Year: 2021

DOI: 10.1109/NCA53618.2021.9685709