
Security, Edge and Cloud Lab

Claudia Canali

Associate Professor

Department of Engineering "Enzo Ferrari"
University of Modena and Reggio Emilia
Via Vivarelli, 10
41125 - Modena, Italy
Tel.: +39 0592056317
E-mail: claudia.canali[AT]unimore.it

Publications

Journals
Canali, C.; Gazzotti, C.; Lancellotti, R.; Schena, F.
ALGORITHMS
Abstract

In the last few years, fog computing has been recognized as a promising approach to support modern IoT applications based on microservices. The main characteristic of these applications is the presence of geographically distributed sensors or mobile end users acting as sources of data. Relying on a cloud computing approach may not be the most suitable solution in these scenarios due to the non-negligible latency between data sources and distant cloud data centers, which may be an issue for real-time and latency-sensitive IoT applications. Placing certain tasks, such as preprocessing or data aggregation, in a layer of fog nodes close to sensors or end users may help to decrease the response time of IoT applications as well as the traffic towards the cloud data centers. However, the fog scenario is characterized by a much more complex and heterogeneous infrastructure compared to a cloud data center, where the computing nodes and the inter-node connections are more homogeneous. As a consequence, the problem of efficiently placing microservices over distributed fog nodes requires novel and efficient solutions. In this paper, we address this issue by proposing and comparing different heuristics for placing the application microservices over the nodes of a fog infrastructure. We test the performance of the proposed heuristics and their ability to minimize application response times and satisfy the Service Level Agreement across a wide set of operating conditions, in order to understand which approach performs best depending on the IoT application scenario.

Year: 2023 | Pages: N/A - N/A

ISSN: 1999-4893 | DOI: 10.3390/a16090441
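
To make the kind of comparison above concrete, here is a minimal sketch of one possible greedy placement heuristic in Python. It is an illustration only, not one of the paper's actual heuristics: the node capacities, latencies and the response-time estimate are invented for the example.

    # Hypothetical greedy placement: assign each microservice to the fog node
    # with the lowest estimated response time, capacity permitting.
    def greedy_placement(services, nodes, latency, demand, capacity):
        """latency[n]: network latency of node n; demand[s]: CPU demand of
        service s; capacity[n]: CPU capacity of node n. Returns {service: node}."""
        used = {n: 0.0 for n in nodes}
        placement = {}
        # Place the most demanding services first (a common greedy ordering).
        for s in sorted(services, key=lambda s: demand[s], reverse=True):
            feasible = [n for n in nodes if used[n] + demand[s] <= capacity[n]]
            if not feasible:
                raise ValueError(f"no node can host service {s}")
            # Estimated response time: network latency plus a simple load term.
            best = min(feasible, key=lambda n: latency[n] + used[n] / capacity[n])
            placement[s] = best
            used[best] += demand[s]
        return placement

    # Toy usage with invented data.
    print(greedy_placement(
        services=["filter", "aggregate", "detect"],
        nodes=["fog1", "fog2"],
        latency={"fog1": 5.0, "fog2": 12.0},
        demand={"filter": 2.0, "aggregate": 3.0, "detect": 1.0},
        capacity={"fog1": 4.0, "fog2": 4.0}))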

Beraldi, R.; Canali, C.; Lancellotti, R.; Mattia, G. P.
COMPUTER NETWORKS
Abstract

The distributed nature of edge computing infrastructures requires a significant effort to avoid overload conditions due to uneven distribution of incoming load from sensors placed over a wide area. While optimisation algorithms operating offline can address this issue in the medium to long term, sudden and unexpected traffic surges require an online approach where load balancing actions are taken at a smaller time scale. However, when the service time of a single request becomes comparable with the latency needed to take and actuate load balancing decisions, the design of online approaches becomes particularly challenging. This paper focuses on the class of online algorithms for load balancing based on resource sharing among random nodes. While this randomisation principle is a straightforward and effective way to share resources and achieve load balance, it fails to work properly when the interval between decision making and decision actuating times (called schedule lag) becomes comparable with the time required to execute a job, a condition that is not rare in edge computing systems, and causes stale (out-of-date) information to be used in scheduling decisions. Our analysis combines (1) a theoretical model that evaluates how stale information reduces the effectiveness of the balancing mechanism and describes the correlation between the system state at decision making and decision actuating times; (2) a simulation approach to study a wide range of algorithm parameters and possible usage scenarios. The results of our analysis provide the designers of distributed edge systems with useful hints to decide, based on the scenario, which load balancing protocol is the most suitable.

Year: 2022 | Pages: 108935 - 108949

ISSN: 1389-1286 | DOI: 10.1016/j.comnet.2022.108935
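
The schedule-lag effect described above can be reproduced with a few lines of simulation. The sketch below is a toy model, not the paper's: a dispatcher sends each job to the shorter of two randomly probed queues, but only sees queue lengths that are lag steps old; all parameters are invented.

    import random

    # Toy power-of-two-choices balancer whose decisions use queue lengths
    # observed LAG steps in the past (the "schedule lag"). Illustrative only.
    def simulate(n_nodes=20, n_jobs=50_000, lag=1, service_prob=0.06, seed=1):
        random.seed(seed)
        queues = [0] * n_nodes
        history = [list(queues)]              # snapshots, oldest first
        for _ in range(n_jobs):
            stale = history[0]                # state as seen LAG steps ago
            a, b = random.sample(range(n_nodes), 2)
            target = a if stale[a] <= stale[b] else b
            queues[target] += 1
            # Each busy node completes a job with some probability per step.
            for i in range(n_nodes):
                if queues[i] > 0 and random.random() < service_prob:
                    queues[i] -= 1
            history.append(list(queues))
            if len(history) > lag:
                history.pop(0)
        return sum(queues) / n_nodes          # mean backlog per node

    for lag in (1, 10, 50):
        print(f"lag={lag:>2}  mean queue length={simulate(lag=lag):.2f}")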

Faenza, F.; Canali, C.; Colajanni, M.; Carbonaro, A.
EDUCATION SCIENCES
Abstract

In the last few years, several initiatives based on extracurricular activities have been organized in many countries around the world, with the aim to reduce the digital gender gap in STEM (Science, Technology, Engineering, Math) fields. Among them, the Digital Girls summer camp, organized every year since 2014 by two Italian universities with the aim to attract female students to ICT (Information and Communication Technologies) disciplines, represents quite a unique initiative for its characteristics of long duration (3–4 entire weeks) and complete gratuitousness for the participants. The COVID-19 emergency imposed severe changes to such activities, which had to be modified and carried out online as a consequence of social distancing. However, on one hand, the general lack of high-quality evaluations of these initiatives hinders the understanding of the actual impact of extracurricular activities on the future academic choices of the participants. On the other hand, the availability of data collected over different editions of Digital Girls has allowed us to analyze the summer camp impact and to evaluate the pros and cons of in-presence and online activities. The main contribution of this paper is twofold. First, we present an overview of existing experiences, at the national (Italian) and international levels, to increase female participation in integrated STEM and ICT fields. Second, we analyze how summer camp participation can influence girls' future academic choices, with specific attention to ICT-related disciplines. In particular, the collection of a significant amount of data through anonymous surveys conducted before and after the camp activities over the two editions allowed us to highlight the different impacts of in-presence and online extracurricular activities.

Year: 2021 | Pages: 1 - 15

ISSN: 2227-7102 | DOI: 10.3390/educsci11110715

Beraldi, R.; Canali, C.; Lancellotti, R.; Mattia, G. P.
PERVASIVE AND MOBILE COMPUTING
Abstract

Smart cities represent an archetypal example of infrastructures where the fog computing paradigm can express its potential: we have a large set of sensors deployed over a large geographic area where data should be pre-processed (e.g., to extract relevant information or to filter and aggregate data) before sending the result to a collector that may be a cloud data center, where relevant data are further processed and stored. However, during its lifetime the infrastructure may change, e.g., due to the deployment of additional sensors or fog nodes, while the load can grow, e.g., due to additional services based on the collected data. Since nodes are typically deployed in multiple time stages, they may have different computation capacity due to technology improvements. In addition, an uneven distribution of the workload intensity can arise, e.g., due to hot spots during occasional public events or to rush hours and users' behavior. In simple words, resources and load can vary over time and space. From the resource management point of view, this scenario is clearly challenging. Due to the large scale and variable nature of the resources, classical centralized solutions should in fact be avoided, since they do not scale well and require transferring all data from sensors to a central hub, distorting the very nature of in-situ data processing. In this paper, we address the problem of resource management by proposing two distributed load balancing algorithms, tailored to deal with heterogeneity. We evaluate the performance of such algorithms using both a simplified environment, where we perform several sensitivity analyses with respect to the factors responsible for the infrastructure heterogeneity, and a realistic scenario of a smart city. Furthermore, in our study we combine theoretical models and simulation. Our experiments demonstrate the effectiveness of the algorithms under a wide range of heterogeneity, overall providing a remarkable improvement compared to the case of non-cooperating nodes.

Year: 2020 | Pages: 101221 - 101245

ISSN: 1574-1192 | DOI: 10.1016/j.pmcj.2020.101221

Shojafar, Mohammad; Canali, Claudia; Lancellotti, Riccardo; Abawajy, Jemal
IEEE TRANSACTIONS ON CLOUD COMPUTING
Abstract

A clear trend in the evolution of network-based services is the ever-increasing amount of multimedia data involved. This trend towards big-data multimedia processing finds its natural placement together with the adoption of the cloud computing paradigm, which seems the best solution to cope with the demands of the highly fluctuating workload that characterizes this type of service. However, as cloud data centers become more and more powerful, energy consumption becomes a major challenge both for environmental concerns and for economic reasons. An effective approach to improve energy efficiency in cloud data centers is to rely on traffic engineering techniques to dynamically adapt the number of active servers to the current workload. Towards this aim, we propose a joint computing-plus-communication optimization framework exploiting virtualization technologies, called MMGreen. Our proposal specifically addresses the typical scenario of multimedia data processing with computationally intensive tasks and exchange of a big volume of data. The proposed framework not only ensures users the Quality of Service (through Service Level Agreements), but also achieves maximum energy saving and attains green cloud computing goals in a fully distributed fashion by utilizing DVFS-based CPU frequencies. To evaluate the actual effectiveness of the proposed framework, we conduct experiments with MMGreen under real-world and synthetic workload traces. The results of the experiments show that MMGreen may significantly reduce the energy cost for computing, communication and reconfiguration with respect to previous resource provisioning strategies, while respecting the SLA constraints.

Year: 2020 | Pages: 1162 - 1175

ISSN: 2168-7161 | DOI: 10.1109/TCC.2016.2617367
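
The DVFS lever mentioned in the abstract can be illustrated with the textbook cubic power model (not MMGreen's actual formulation): dynamic CPU power grows roughly with the cube of the frequency, so for CPU-bound work the energy of a fixed task grows with the square of the frequency. A small sketch, with an invented technology constant k:

    # Textbook DVFS energy estimate (illustrative, NOT the MMGreen model):
    # power P(f) = k * f**3, a task of W cycles runs for W / f seconds,
    # hence energy E(f) = P(f) * W / f = k * W * f**2.
    def task_energy(work_cycles, freq_hz, k=1e-27):
        power_watts = k * freq_hz ** 3
        duration_s = work_cycles / freq_hz
        return power_watts * duration_s

    for f_ghz in (1.0, 2.0, 3.0):
        f = f_ghz * 1e9
        print(f"{f_ghz:.1f} GHz: {3e9 / f:.2f} s, {task_energy(3e9, f):.1f} J")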

Canali, C.; Lancellotti, R.
ALGORITHMS
Abstract

Fog computing is becoming popular as a solution to support applications based on geographically distributed sensors that produce huge volumes of data to be processed and filtered with response time constraints. In this scenario, typical of a smart city environment, the traditional cloud paradigm with few powerful data centers located far away from the sources of data becomes inadequate. The fog computing paradigm, which provides a distributed infrastructure of nodes placed close to the data sources, represents a better solution to perform filtering, aggregation, and preprocessing of incoming data streams, reducing the experienced latency and increasing the overall scalability. However, many issues still exist regarding the efficient management of a fog computing architecture, such as the distribution of data streams coming from sensors over the fog nodes to minimize the experienced latency. The contribution of this paper is two-fold. First, we present an optimization model for the problem of mapping data streams over fog nodes, considering not only the current load of the fog nodes, but also the communication latency between sensors and fog nodes. Second, to address the complexity of the problem, we present a scalable heuristic based on genetic algorithms. We carried out a set of experiments based on a realistic smart city scenario: the results show that the performance of the proposed heuristic is comparable with that achieved through the solution of the optimization problem. Then, we carried out a comparison among different genetic evolution strategies and operators, which identifies uniform crossover as the best option. Finally, we perform a wide sensitivity analysis to show the stability of the heuristic performance with respect to its main parameters.

Year: 2019 | Pages: 201 - 2021

ISSN: 1999-4893 | DOI: 10.3390/a12100201
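
Since the abstract singles out uniform crossover as the best-performing operator, the following toy genetic algorithm shows the idea for the sensor-to-node mapping. It is a sketch under invented assumptions (random latencies, a quadratic congestion penalty, elitist selection), not the paper's implementation.

    import random

    # A chromosome maps each sensor to a fog node: gene i = node of sensor i.
    random.seed(42)
    N_SENSORS, N_NODES = 30, 5
    LATENCY = [[random.uniform(1, 20) for _ in range(N_NODES)]
               for _ in range(N_SENSORS)]

    def cost(chrom):
        load = [0] * N_NODES
        total = 0.0
        for sensor, node in enumerate(chrom):
            total += LATENCY[sensor][node]
            load[node] += 1
        return total + sum(l * l for l in load)   # latency + congestion penalty

    def uniform_crossover(p1, p2):
        # For every gene, pick the allele of one parent at random.
        return [random.choice(pair) for pair in zip(p1, p2)]

    def mutate(chrom, rate=0.05):
        return [random.randrange(N_NODES) if random.random() < rate else g
                for g in chrom]

    pop = [[random.randrange(N_NODES) for _ in range(N_SENSORS)]
           for _ in range(50)]
    for _ in range(100):                          # generations
        pop.sort(key=cost)
        elite = pop[:10]
        pop = elite + [mutate(uniform_crossover(*random.sample(elite, 2)))
                       for _ in range(40)]
    print("best mapping cost:", round(cost(min(pop, key=cost)), 1))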

Canali, Claudia; Lancellotti, Riccardo
IEEE TRANSACTIONS ON CLOUD COMPUTING
Abstract

As cloud computing data centers grow in size and complexity to accommodate an increasing number of virtual machines, the scalability of monitoring and management processes becomes a major challenge. Recent research studies show that automatically clustering virtual machines that are similar in terms of resource usage may address the scalability issues of IaaS clouds. Existing solutions provide high clustering accuracy at the cost of very long observation periods, which are not compatible with dynamic cloud scenarios where VMs may frequently join and leave. We propose a novel technique, namely AGATE (Adaptive Gray Area-based TEchnique), that provides accurate clustering results for a subset of VMs after a very short time. This result is achieved by introducing elements of fuzzy logic into the clustering process to identify the VMs with undecided clustering assignment (the so-called gray area), which should be monitored for longer periods. To evaluate the performance of the proposed solution, we apply the technique to multiple case studies with real and synthetic workloads. We demonstrate that our solution can correctly identify the behavior of a high percentage of VMs after a few hours of observation, and significantly reduce the data required for monitoring with respect to state-of-the-art solutions.

Year: 2019 | Pages: 650 - 663

ISSN: 2168-7161 | DOI: 10.1109/TCC.2017.2664831
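
The gray-area rule at the core of the abstract can be sketched in a few lines. This is a toy reconstruction, not the AGATE code: the distances, the membership function and the 0.6 threshold are all invented for illustration.

    # Cluster a VM only when its fuzzy membership to the nearest centroid is
    # clearly dominant; otherwise defer (gray area) and keep monitoring it.
    def membership(distances):
        """Turn distances to cluster centroids into memberships in [0, 1]."""
        inv = [1.0 / max(d, 1e-9) for d in distances]
        total = sum(inv)
        return [x / total for x in inv]

    def assign(distances, threshold=0.6):
        m = membership(distances)
        best = max(range(len(m)), key=lambda i: m[i])
        return best if m[best] >= threshold else None   # None = gray area

    print(assign([0.5, 4.0, 5.0]))   # clear winner -> cluster 0
    print(assign([1.0, 1.1, 1.2]))   # ambiguous -> None, monitor longer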

Canali, Claudia; Corbelli, Andrea; Lancellotti, Riccardo
JOURNAL OF COMMUNICATION SOFTWARE AND SYSTEMS
Abstract

The delivery of multimedia contents through a Content Delivery Network (CDN) is typically handled by a specific third party, separated from the content provider. However, in some specific cases, the content provider may be interested in carrying out this function using a Private CDN, possibly relying on an outsourced network infrastructure. This scenario poses new challenges and limitations with respect to the typical case of content delivery. First, the system has to face a different workload, as the content consumers are typically part of the same organization as the content provider. Second, the outsourced nature of the network infrastructure has a major impact on the available choices for CDN design. In this paper we develop an exact mathematical model for the design of a Private CDN addressing the issues and the constraints typical of such a scenario. Furthermore, we analyze different heuristics to solve the optimization problem. We apply the proposed model to a real case study and validate the results by means of simulation.

Year: 2018 | Pages: 376 - 385

ISSN: 1845-6421 | DOI: 10.24138/jcomss.v14i4.607

Ardagna, Danilo; Canali, Claudia; Lancellotti, Riccardo
ALGORITHMS
Abstract

Modern distributed systems are becoming increasingly complex as virtualization is being applied at both the levels of computing and networking. Consequently, the resource management of this infrastructure requires innovative and efficient solutions. This issue is further exacerbated by the unpredictable workload of modern applications and the need to limit the global energy consumption. The purpose of this special issue is to present recent advances and emerging solutions to address the challenge of resource management in the context of modern large-scale infrastructures. We believe that the four papers that we selected present an up-to-date view of the emerging trends, and the papers propose innovative solutions to support efficient and self-managing systems that are able to adapt, manage, and cope with changes derived from continually changing workload and application deployment settings, without the need for human supervision.

Year: 2018 | Pages: 200 - 203

ISSN: 1999-4893 | DOI: 10.3390/a11120200

Chiaraviglio, Luca; D'Andreagiovanni, Fabio; Lancellotti, Riccardo; Shojafar, Mohammad; Blefari Melazzi, Nicola; Canali, Claudia
IEEE TRANSACTIONS ON SUSTAINABLE COMPUTING
Abstract

We target the problem of managing the power states of the servers in a Cloud Data Center (CDC) to jointly minimize the electricity consumption and the maintenance costs derived from the variation of power (and consequently of temperature) on the servers' CPU. More in detail, we consider a set of virtual machines (VMs) and their requirements in terms of CPU and memory across a set of Time Slots (TSs). We then model the consumed electricity by taking into account the VMs' processing costs on the servers, the costs for transferring data between the VMs, and the costs for migrating the VMs across the servers. In addition, we employ a material-based fatigue model to compute the maintenance costs needed to repair the CPU, as a consequence of the variation over time of the server power states. After detailing the problem formulation, we design an original algorithm, called Maintenance and Electricity Costs Data Center (MECDC), to solve it. Our results, obtained over several representative scenarios from a real CDC, show that MECDC largely outperforms two reference algorithms, which instead either target the load balancing or the energy consumption of the servers.

Year: 2018 | Pages: 274 - 288

ISSN: 2377-3782 | DOI: 10.1109/TSUSC.2018.2838338

Canali, Claudia; Chiaraviglio, Luca; Lancellotti, Riccardo; Shojafar, Mohammad
IEEE TRANSACTIONS ON GREEN COMMUNICATIONS AND NETWORKING
Abstract

We propose a novel model, called JCDME, for the allocation of Virtual Elements (VEs), with the goal of minimizing the energy consumption in a Software-Defined Cloud Data Center (SDDC). More in detail, we model the energy consumption by considering the computing costs of the VEs on the physical servers, the costs for migrating VEs across the servers, and the costs for transferring data between VEs. In addition, JCDME introduces a weight parameter to avoid an excessive number of VE migrations. Specifically, we propose three different strategies to solve the JCDME problem with an automatic and adaptive computation of the weight parameter for the VEs migration costs. We then evaluate the considered strategies over a set of scenarios, ranging from a small sized SDDC up to a medium sized SDDC composed of hundreds of VEs and hundreds of servers. Our results demonstrate that JCDME is able to save up to an additional 7% of energy w.r.t. previous energy-aware algorithms, without a substantial increase in the solution complexity.

Year: 2018 | Pages: 580 - 595

ISSN: 2473-2400 | DOI: 10.1109/TGCN.2018.2796613

Canali, Claudia; Lancellotti, Riccardo
COMPUTING
Abstract

The success of the cloud computing paradigm is leading to a significant growth in size and complexity of cloud data centers. This growth exacerbates the scalability issues of the Virtual Machine (VM) placement problem, which assigns VMs to the physical nodes of the infrastructure. This task can be modeled as a multi-dimensional bin-packing problem, with the goal to minimize the number of physical servers (for economic and environmental reasons), while ensuring that each VM can access the resources required in the near future. Unfortunately, the naïve bin packing problem applied to a real data center is not solvable in a reasonable time because the high number of VMs and of physical nodes makes the problem computationally unmanageable. Existing solutions improve scalability at the expense of solution quality, resulting in higher costs and a heavier environmental footprint. The Class-Based placement technique (CBP) is a novel approach that exploits existing solutions to automatically group VMs showing similar behaviour. The Class-Based technique solves a placement problem that considers only some representative VMs for each class, and that can be replicated as a building block to solve the global VM placement problem. Using real traces, we analyse the performance of our proposal, comparing different alternatives to automatically determine the number of building blocks. Furthermore, we compare our proposal against the existing alternatives and evaluate the results for different workload compositions. We demonstrate that the CBP proposal outperforms existing solutions in terms of scalability and VM placement quality.

Year: 2017 | Pages: 575 - 595

ISSN: 0010-485X | DOI: 10.1007/s00607-016-0498-5
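
For readers who want to see the underlying problem, the classic greedy baseline for bin-packing-style VM placement looks as follows. This is a one-dimensional first-fit-decreasing sketch with invented demands, not the CBP algorithm itself (the paper's problem is multi-dimensional).

    # First-fit decreasing: sort VMs by demand, place each into the first
    # server with enough residual capacity, opening new servers as needed.
    def first_fit_decreasing(vm_demands, server_capacity):
        servers = []                       # each server = list of demands
        for d in sorted(vm_demands, reverse=True):
            for s in servers:
                if sum(s) + d <= server_capacity:
                    s.append(d)
                    break
            else:
                servers.append([d])        # open a new physical server
        return servers

    vms = [5, 7, 3, 2, 9, 4, 1]            # invented CPU demands
    print(first_fit_decreasing(vms, server_capacity=10))
    # -> [[9, 1], [7, 3], [5, 4], [2]]: four servers for seven VMs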

Canali, Claudia; Lancellotti, Riccardo
INTERNATIONAL JOURNAL OF GRID AND UTILITY COMPUTING
Abstract

Scalability in monitoring and management of cloud data centres may be improved through the clustering of virtual machines (VMs) exhibiting similar behaviour. However, available solutions for automatic VM clustering present some important drawbacks that hinder their applicability to real cloud scenarios. For example, existing solutions show a clear trade-off between the accuracy of the VM clustering and the computational cost of the automatic process; moreover, their performance shows a strong dependence on specific technique parameters. To overcome these issues, we propose a novel approach for VM clustering that uses Mixtures of Gaussians (MoGs) together with the Kullback-Leibler divergence to model similarity between VMs. Furthermore, we provide a thorough experimental evaluation of our proposal and of existing techniques to identify the most suitable solution for different workload scenarios.

Year: 2016 | Pages: 152 - 162

ISSN: 1741-847X | DOI: 10.1504/IJGUC.2016.077489
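
The divergence named in the abstract has a closed form for single Gaussians, which conveys the idea; for the Mixtures of Gaussians actually used in the paper it must be approximated. A minimal sketch with invented usage statistics:

    import math

    # KL divergence between two univariate Gaussians (standard formula):
    # KL(p || q) = ln(s2/s1) + (s1^2 + (m1 - m2)^2) / (2 * s2^2) - 1/2
    def kl_gauss(m1, s1, m2, s2):
        return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

    # Two VMs with similar mean CPU usage are "closer" than dissimilar ones.
    print(kl_gauss(0.30, 0.05, 0.32, 0.05))   # small divergence
    print(kl_gauss(0.30, 0.05, 0.80, 0.10))   # large divergence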

Canali, Claudia; Lancellotti, Riccardo
JOURNAL OF COMMUNICATION SOFTWARE AND SYSTEMS
Abstract

Infrastructure as a Service cloud providers are increasingly relying on scalable and efficient Virtual Machine (VM) placement as the main solution for reducing unnecessary costs and waste of physical resources. However, the continuous growth of the size of cloud data centers poses scalability challenges in finding optimal placement solutions. The use of heuristics and simplified server consolidation models that partially discard information about the VMs' behavior represents the typical approach to guarantee scalability, but at the expense of suboptimal placement solutions. A recently proposed alternative approach, namely Class-Based Placement (CBP), divides VMs into classes with similar behavior in terms of resource usage, and addresses scalability by considering a small-scale server consolidation problem that is replicated as a building block for the whole data center. However, the server consolidation model exploited by the CBP technique suffers from two main limitations. First, it considers only one VM resource (CPU) for the consolidation problem. Second, it does not analyze the impact of the number (and size) of building blocks to consider. Many small building blocks may reduce the overall quality of the VM placement solution due to fragmentation of the physical server resources over blocks. On the other hand, few large building blocks may become computationally expensive to handle and may be unsolvable due to the problem complexity. This paper extends the CBP server consolidation model to take into account multiple resources. Furthermore, we analyze the impact of block size on the performance of the proposed consolidation model, and we present and compare multiple strategies to estimate the best number of blocks. Our proposal is validated through experimental results based on a real cloud computing data center.

Year: 2015 | Pages: 1 - 8

ISSN: 1845-6421 | DOI: 10.24138/jcomss.v11i4.94

Canali, Claudia; Lancellotti, Riccardo
AUTOMATED SOFTWARE ENGINEERING
Abstract

Cloud computing has recently emerged as a new paradigm to provide computing services through large-size data centers where customers may run their applications in a virtualized environment. The advantages of cloud in terms of flexibility and economy encourage many enterprises to migrate from local data centers to cloud platforms, thus contributing to the success of such infrastructures. However, as the size and complexity of cloud infrastructures grow, scalability issues arise in monitoring and management processes. Scalability issues are exacerbated because available solutions typically consider each virtual machine (VM) as a black box with independent characteristics, which is monitored at a fine-grained granularity level for management purposes, thus generating huge amounts of data to handle. We claim that scalability issues can be addressed by leveraging the similarity between VMs in terms of resource usage patterns. In this paper, we propose an automated methodology to cluster similar VMs starting from their resource usage information, assuming no knowledge of the software executed on them. This is an innovative methodology that combines the Bhattacharyya distance and ensemble techniques to provide a stable evaluation of similarity between probability distributions of multiple VM resource usage, considering both system- and network-related data. We evaluate the methodology through a set of experiments on data coming from an enterprise data center. We show that our proposal achieves high and stable performance in automatic VM clustering, with a significant reduction in the amount of collected data, which helps to lighten the monitoring requirements of a cloud data center.

Year: 2014 | Pages: 319 - 344

ISSN: 0928-8910 | DOI: 10.1007/s10515-013-0134-y
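
The Bhattacharyya distance mentioned above is simple to state for two discrete usage histograms; the sketch below uses invented CPU-usage histograms, independent of the paper's data and of its ensemble step.

    import math

    # Bhattacharyya distance between discrete probability histograms:
    # BC(p, q) = sum_i sqrt(p_i * q_i),  D_B = -ln(BC).
    # Identical histograms give 0; disjoint histograms give infinity.
    def bhattacharyya(p, q):
        bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
        return -math.log(bc) if bc > 0 else math.inf

    # Invented CPU-usage histograms (four bins) for three VMs.
    vm_a = [0.1, 0.6, 0.2, 0.1]
    vm_b = [0.1, 0.5, 0.3, 0.1]
    vm_c = [0.7, 0.1, 0.1, 0.1]
    print(bhattacharyya(vm_a, vm_b))   # similar VMs -> small distance
    print(bhattacharyya(vm_a, vm_c))   # different VMs -> larger distance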

Lancellotti, Riccardo; Canali, Claudia
JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING
Abstract

The growing size and complexity of cloud systems determine scalability issues for resource monitoring and management. While most existing solutions consider each Virtual Machine (VM) as a black box with independent characteristics, we embrace a new perspective where VMs with similar behaviors in terms of resource usage are clustered together. We argue that this new approach has the potential to address scalability issues in cloud monitoring and management. In this paper, we propose a technique to cluster VMs starting from the usage of multiple resources, assuming no knowledge of the services executed on them. This innovative technique models VM behavior by exploiting the probability histogram of their resource usage, and performs smoothing-based noise reduction and selection of the most relevant information to consider for the clustering process. Through extensive evaluation, we show that our proposal achieves high and stable performance in terms of automatic VM clustering, and can reduce the monitoring requirements of cloud systems.

Year: 2014 | Pages: 2757 - 2769

ISSN: 0743-7315 | DOI: 10.1016/j.jpdc.2014.02.006

Canali, Claudia; Lancellotti, Riccardo
JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY
Abstract

Cloud computing has recently emerged as a leading paradigm to allow customers to run their applications in virtualized large-scale data centers. Existing solutions for monitoring and management of these infrastructures consider virtual machines (VMs) as independent entities with their own characteristics. However, these approaches suffer from scalability issues due to the increasing number of VMs in modern cloud data centers. We claim that scalability issues can be addressed by leveraging the similarity among VMs behavior in terms of resource usage patterns. In this paper we propose an automated methodology to cluster VMs starting from the usage of multiple resources, assuming no knowledge of the services executed on them. The innovative contribution of the proposed methodology is the use of the statistical technique known as principal component analysis (PCA) to automatically select the most relevant information to cluster similar VMs. We apply the methodology to two case studies, a virtualized testbed and a real enterprise data center. In both case studies, the automatic data selection based on PCA allows us to achieve high performance, with a percentage of correctly clustered VMs between 80% and 100% even for short time series (1 day) of monitored data. Furthermore, we estimate the potential reduction in the amount of collected data to demonstrate how our proposal may address the scalability issues related to monitoring and management in cloud computing data centers.

Year: 2014 | Pages: 38 - 52

ISSN: 1000-9000 | DOI: 10.1007/s11390-013-1410-9
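
The PCA step described in the abstract (keeping only the components that explain most of the usage variance before clustering) can be sketched with numpy. Data, sizes and the 90% threshold are synthetic choices for the example, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)
    usage = rng.random((100, 16))            # 100 VMs, 16 monitored metrics
    centered = usage - usage.mean(axis=0)

    # PCA via SVD of the centered (VMs x metrics) matrix.
    _, sing_vals, components = np.linalg.svd(centered, full_matrices=False)
    explained = sing_vals**2 / np.sum(sing_vals**2)
    k = int(np.searchsorted(np.cumsum(explained), 0.90)) + 1

    reduced = centered @ components[:k].T    # each VM as a k-dim point
    print(f"kept {k} of {usage.shape[1]} components; shape {reduced.shape}")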

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
SERVICE ORIENTED COMPUTING AND APPLICATIONS
Abstract

A main feature of Service Oriented Architectures is the capability to support the development of new applications through the composition of existing Web services that are offered by different service providers. The runtime selection of which providers may better satisfy the end-user requirements in terms of quality of service remains an open issue in the context of Web services. The selection of the service providers has to satisfy requirements of different kinds: requirements may refer to static qualities of the service providers, which do not change over time or change slowly compared to the service invocation time (for example, related to provider reputation), and to dynamic qualities, which may change on a per-invocation basis (typically related to performance, such as the response time). The main contribution of this paper is to propose a family of novel runtime algorithms that select service providers on the basis of requirements involving both static and dynamic qualities, as in a typical Web scenario. We implement the proposed algorithms in a prototype and compare them with the solutions commonly used in service selection, which consider all the service provider qualities as static for the scope of the selection process. Our experiments show that a static management of quality requirements is viable only in the unrealistic case where the workload remains stable over time, but it leads to very poor performance in variable environments. On the other hand, the combined management of static and dynamic quality requirements allows us to achieve better user-perceived performance over a wide range of scenarios, with the response time of the proposed algorithms reduced by up to 50% with respect to that of static algorithms.

Year: 2013 | Pages: 43 - 57

ISSN: 1863-2386 | DOI: 10.1007/s11761-012-0120-4

Canonico, Roberto; Canali, Claudia; Dabbous, Walid
PEER-TO-PEER NETWORKING AND APPLICATIONS
Abstract

n/a

Year: 2013 | Pages: 115 - 117

ISSN: 1936-6442 | DOI: 10.1007/s12083-012-0177-z

Canali, Claudia; Lancellotti, Riccardo
JOURNAL OF COMMUNICATION SOFTWARE AND SYSTEMS
Abstract

The recent growth in demand for modern applications, combined with the shift to the Cloud computing paradigm, has led to the establishment of large-scale cloud data centers. The increasing size of these infrastructures represents a major challenge in terms of monitoring and management of the system resources. Available solutions typically consider every Virtual Machine (VM) as a black box with independent characteristics, and face scalability issues by reducing the number of monitored resource samples, considering in most cases only average CPU usage sampled at a coarse time granularity. We claim that scalability issues can be addressed by leveraging the similarity between VMs in terms of resource usage patterns. In this paper we propose an automated methodology to cluster VMs depending on the usage of multiple resources, both system- and network-related, assuming no knowledge of the services executed on them. This is an innovative methodology that exploits the correlation between resource usage patterns to cluster together similar VMs. We evaluate the methodology through a case study with data coming from an enterprise datacenter, and we show that high performance may be achieved in automatic VM clustering. Furthermore, we estimate the reduction in the amount of data collected, thus showing that our proposal may simplify the monitoring requirements and help administrators to take decisions on the resource management of cloud computing datacenters.

Year: 2012 | Pages: 102 - 109

ISSN: 1845-6421 | DOI: 10.24138/jcomss.v8i4.164

Canali, Claudia; Colajanni, Michele; Delfina, Malandrino; Vittorio, Scarano; Raffaele, Spinelli
JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY
Abstract

Multimedia content, user mobility and heterogeneous client devices require novel systems that are able to support ubiquitous access to the Web resources. In this scenario, solutions that combine flexibility, efficiency and scalability in offering edge services for ubiquitous access are needed. We propose an original intermediary framework, namely Scalable Intermediary Software Infrastructure (SISI), which is able to dynamically compose edge services on the basis of user preferences and device characteristics. The SISI framework exploits a per-user profiling mechanism, where each user can initially set his/her personal preferences through a simple Web interface, and the system is then able to compose at run-time the necessary components. The basic framework can be enriched through new edge services that can be easily implemented through a programming model based on APIs and internal functions. Our experiments demonstrate that flexibility and edge service composition do not affect the system performance. We show that this framework is able to chain multiple edge services and to guarantee stable performance.

Year: 2012 | Pages: 281 - 297

ISSN: 1000-9000 | DOI: 10.1007/s11390-012-1223-2

Canali, Claudia; Lancellotti, Riccardo
INTERNATIONAL JOURNAL OF SOCIAL NETWORK MINING
Abstract

Social networks are gaining increasing popularity on the Internet and are becoming a critical medium for business and marketing. Hence, it is important to identify the key users that may play a critical role as sources or targets of content dissemination. Existing approaches rely only on users' social connections; however, considering a single kind of information does not guarantee satisfactory results for the identification of the key users. On the other hand, considering every possible user attribute is clearly unfeasible due to the huge amount of heterogeneous user information. In this paper, we propose to select and combine a subset of user attributes with the goal to identify sources and targets for content dissemination in a social network. We develop a quantitative methodology based on principal component analysis. Experiments on the YouTube and Flickr networks demonstrate that our solution outperforms existing solutions by 15%.

Year: 2012 | Pages: 27 - 50

ISSN: 1757-8485 | DOI: 10.1504/IJSNM.2012.045104

Burresi, S.; Canali, Claudia; Renda, M. E.; Santi, P.
IEEE TRANSACTIONS ON MOBILE COMPUTING
Abstract

Wireless mesh networks are a promising area for the deployment of new wireless communication and networking technologies. In this paper, we address the problem of enabling effective peer-to-peer resource sharing in this type of network. Starting from the well-known Chord protocol for resource sharing in wired networks, we propose a specialization that accounts for peculiar features of wireless mesh networks: namely, the availability of a wireless infrastructure, and the 1-hop broadcast nature of wireless communication, which lead to the notions of location awareness and MAC layer cross-layering. Through extensive packet-level simulations, we investigate the separate effects of location awareness and MAC layer cross-layering, and of their combination, on the performance of the P2P application. The combined protocol, MeshChord, reduces message overhead by as much as 40% with respect to the basic Chord design, while at the same time improving the information retrieval performance. Notably, differently from the basic Chord design, our proposed MeshChord specialization displays information retrieval performance resilient to the presence of both CBR and TCP background traffic. Overall, the results of our study suggest that MeshChord can be successfully utilized for implementing file/resource sharing applications in wireless mesh networks.

Year: 2010 | Pages: 333 - 347

ISSN: 1536-1233 | DOI: 10.1109/TMC.2009.134
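
For context, the core Chord mechanism that MeshChord specializes (hashing nodes and keys onto one identifier ring and storing each key on its successor) can be sketched as follows; the mesh-specific extensions (location awareness, MAC cross-layering) and the finger tables are omitted.

    import hashlib
    from bisect import bisect_left

    M = 16                                    # identifier space of 2**16 ids

    def chord_id(name):
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** M)

    def successor(node_ids, key_id):
        """First node id >= key_id on the sorted ring, wrapping around."""
        i = bisect_left(node_ids, key_id)
        return node_ids[i % len(node_ids)]

    nodes = sorted(chord_id(f"mesh-router-{i}") for i in range(8))
    for resource in ("song.mp3", "doc.pdf"):
        k = chord_id(resource)
        print(f"{resource}: key {k} -> stored on node {successor(nodes, k)}")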

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
MOBILE NETWORKS AND APPLICATIONS
Abstract

The success of the Mobile Web is driven by the combination of novel Web-based services with the diffusion of advanced mobile devices that require personalization, location-awareness and content adaptation. The evolutionary trend of the Mobile Web workload places unprecedented strains on the server infrastructure of the content provider at the level of computational and storage capacity, to the extent that technological improvements at the server and client level may be insufficient to face some resource requirements of the future Mobile Web scenario. This paper presents a twofold contribution. We identify some performance bottlenecks that can limit the performance of the future Mobile Web, and we propose and evaluate novel resource management strategies. They aim to address computational requirements through a pre-adaptation of the most popular resources even in the presence of the irregular access patterns and short resource lifespans that will characterize the future Mobile Web. We investigate a large space of alternative workload scenarios. Our analysis allows us to identify when the proposed resource management strategies are able to satisfy the computational requirements of the future Mobile Web, as well as some conditions where further research is necessary.

Year: 2010 | Pages: 237 - 252

ISSN: 1383-469X | DOI: 10.1007/s11036-009-0186-1

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
CLUSTER COMPUTING
Abstract

The growing demand for Web and multimedia content accessed through heterogeneous devices requires providers to tailor resources to the device capabilities on-the-fly. Providing services for content adaptation and delivery opens two novel challenges for present and future content provider architectures: content adaptation services are computationally expensive, and the global storage requirements increase because multiple versions of the same resource may be generated for different client devices. We propose a novel two-level distributed architecture for the support of efficient content adaptation and delivery services. The nodes of the architecture are organized in two levels: thin edge nodes on the first level act as simple request gateways towards the nodes of the second level; fat interior clusters perform all the other tasks, such as content adaptation, caching and fetching. Several experimental results show that the two-level architecture achieves better performance and scalability than existing flat or non-cooperative architectures.

Year: 2010 | Pages: 1 - 17

ISSN: 1386-7857 | DOI: 10.1007/s10586-009-0094-y

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
IEEE INTERNET COMPUTING
Abstract

The mobile Web's widespread diffusion opens many interesting design and management issues about server infrastructures that must satisfy present and future client demand. Future mobile Web-based services will have growing computational costs. Even requests for the same Web resource will require services to dynamically generate content that takes into account specific devices, user profiles, and contexts. The authors consider the evolution of the mobile Web workload and trends in server and client devices with the goal of anticipating future bottlenecks and developing management strategies.

Year: 2009 | Pages: 60 - 68

ISSN: 1089-7801 | DOI: 10.1109/MIC.2009.43

Canali, Claudia; Cardellini, V.; Lancellotti, Riccardo
WORLD WIDE WEB JOURNAL
Abstract

The overwhelming popularity of the Internet and the technology advancements have determined the diffusion of many different Web-enabled devices. In such a heterogeneous client environment, efficient content adaptation and delivery services are becoming a major requirement for the new Internet service infrastructure. In this paper we describe intermediary-based architectures that provide adaptation and delivery of Web content to different user terminals. We present the design of a Squid-based prototype that carries out the adaptation of Web images and combines such a functionality with the caching of multiple versions of the same resource. We also investigate how to provide some form of cooperation among the nodes of the intermediary infrastructure, with the goal of evaluating to what extent cooperation in discovering, adapting, and delivering Web resources can improve the user-perceived performance.

Year: 2006 | Pages: 63 - 92

ISSN: 1085-2301 | DOI: 10.1007/s11280-005-4049-9


Conferences
Canali, C.; Faenza, F.
6th International Conference on Gender Research, ICGR 2023
Abstract

n/a

Year: 2023 | Pages: 65 - 73

ISSN: 2516-2810 | DOI: n/a

Pigozzi, S.; Faenza, F.; Canali, C.
2022 Conference on Practice and Experience in Advanced Research Computing: Revolutionary: Computing, Connections, You, PEARC 2022
Abstract

In the last few years, the web-based interactive computational environment called Jupyter notebook has been gaining more and more popularity as a platform for collaborative research and data analysis, becoming a de-facto standard among researchers. In this paper we present a first implementation of Sophon, an extensible web platform for collaborative research based on JupyterLab. Our aim is to extend the functionality of JupyterLab and improve its usability by integrating it with Django. In the Sophon project, we integrate the deployment of dockerized JupyterLab instances into a Django web server, creating an extensible, versatile and secure environment, while also being easy to use for researchers of different disciplines.

Year: 2022 | Pages: n/a - 4

ISBN: 9781450391610 | DOI: 10.1145/3491418.3535163

Addabbo, T.; Badalassi, G.; Bencivenga, R.; Canali, C.
4th International Conference on Gender Research, ICGR 2021
Abstract

As recently announced by the European Commission, Gender Equality Plans (GEPs) will become an eligibility criterion in the future Horizon Europe programme (2021-2027) for every legal entity (public body, research center or higher education institution). The complex process of designing a GEP in a Research Performing Organization (RPO) involves different phases. In this paper, recalling the six-step process of the European Institute for Gender Equality (EIGE) GEAR tool for developing GEPs in research institutions, we focus on the third critical step of setting up a GEP. In particular, the EIGE recommendation for an effective GEP design is to get inspiration from measures implemented by other organisations and tailor them to the specific local institutional context. However, analysing other RPOs' GEP measures is a time-consuming effort requiring at least some experience and preparation to understand and evaluate the measures' replicability, impact, effectiveness and sustainability. This analysis may be a very complicated task for organisations that are not experienced with GEPs. To address this issue, the paper presents a methodology that aims at supporting RPOs in the selection of measures to be included in the institutional GEP design. The proposed methodology has been defined in the context of the LeTSGEPs Horizon 2020 project and is based on a catalogue of GEP measures that have been experimented with by European RPOs so far. The LeTSGEPs methodology and related catalogue offer a classified guide to the GEP measures' gender impact through several factors, such as: the gender issues to be addressed, the target groups, the stakeholders to be involved, the different dimensions of staff organisational well-being, the output and outcome indicators, the possible sustainability strategies. The proposed catalogue may represent a tool able to facilitate RPOs' evaluation and selection of measures among those already experimented with by other research institutions, offering useful indications on their appropriateness to solve specific issues. At a more general level, the catalogue also provides essential information on the main measures that have been experimented with so far in implementing GEPs in European RPOs, the most common areas of interest, and the capabilities involved.

Year: 2021 | Pages: 8 - 17

ISSN: 2516-2810 | DOI: 10.34190/IGR.21.042

Addabbo, T.; Badalassi, G.; Canali, C.
4th International Conference on Gender Research, ICGR 2021
Abstract

Gender Budgeting (GB) represents an important tool to reach gender equality. The aim of this paper is to refer specifically to gender equality in Research Performing Organisations (RPOs) and to how GB can ensure Gender Equality Plans (GEPs) sustainability. GB can offer a financial perspective on the gender equality balance and distribution of power within the RPO, unveiling hidden bias and discrimination. The paper outlines the origin of Gender Budgeting from the 1980s to nowadays and reflects on its first implementation in public entities, at governmental and territorial level, and its current implementation within RPOs, asking whether there may be a link between GB in public administration and RPOs' GEP experiences at the territorial level due to the same local attention to gender equality. In this sense, the Italian case is analysed, since this country has a long tradition of local gender budgeting implementation dating back to 2002 (with about 137 local GB projects so far) and a more recent but intense engagement in GB at the RPO level (about 30 projects). The experience in GB at the local institutional level has in fact been very important to develop the GB methodology of the LeTSGEPs European project, of which Unimore is the leading partner. Such methodology has been developed starting from the account-based approach and the capability approach experimented with in GB projects at the local level in Italy. A powerful strategy to spread GB in academia is the presence of guidelines at the national level, and again Italy allows testing this hypothesis thanks to the recent production of guidelines by national-level institutions and training activities. The GB methodology allows a budget reclassification as a dashboard to adopt an overall view on every RPO activity having financial evidence. By adopting a gender mainstreaming and capability approach, the GB methodology allows us to evaluate the intrinsic gender impact of the activities that have been funded and the male/female stakeholders involved at different levels. Together with analysing the Italian case, this paper illustrates the methodological framework and the GB process in RPOs as described in the LeTSGEPs methodology.

Year: 2021 | Pages: 1 - 7

ISSN: 2516-2810 | DOI: 10.34190/IGR.21.043

Canali, C.; Lancellotti, R.; Rossi, S.
20th IEEE International Symposium on Network Computing and Applications, NCA 2021
Abstract

The Fog Computing paradigm is increasingly seen as the most promising solution to support Internet of Things applications and satisfy their requirements in terms of response time and Service Level Agreements. For these applications, fog computing offers the great advantage of reducing the response time thanks to the layer of intermediate nodes able to perform pre-processing, filtering and other computational tasks. However, the design of a fog computing infrastructure opens new issues concerning the allocation of data flows coming from sensors over the fog nodes, and the choice of the number of fog nodes to be activated. Many studies rely on a simplified assumption based on an M/M/1 theoretical queuing model to determine the optimal solution for the fog infrastructure design, but such simplification may result in a mismatch between the predicted and achieved performance of the model. In this paper, we measure this discrepancy in terms of response time and SLA compliance. Furthermore, we explore the impact of non-Poissonian service models and validate our results by means of simulation. Our experiments demonstrate that the use of the M/M/1 model can lead to SLA violations. On the other hand, the use of more sophisticated models for the estimation of the response time can avoid this problem.

Year: 2021 | Pages: 1 - 8

ISBN: 9781665495509 | DOI: 10.1109/NCA53618.2021.9685491
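
The gap discussed in the abstract is easy to see against the closed forms: M/M/1 predicts a mean response time R = 1/(mu - lambda), while the Pollaczek-Khinchine formula for M/G/1 grows with the variability of the service time. The parameters below are illustrative, not taken from the paper.

    # Mean response time of a single node under the two queuing models.
    def mm1_response(lam, mu):
        return 1.0 / (mu - lam)               # M/M/1

    def mg1_response(lam, mu, scv):
        """Pollaczek-Khinchine; scv = squared coefficient of variation of
        the service time (scv = 1 reduces to M/M/1)."""
        rho = lam / mu
        return 1.0 / mu + rho * (1 + scv) / (2 * mu * (1 - rho))

    lam, mu, sla = 8.0, 10.0, 0.6             # jobs/s, jobs/s, seconds
    print(f"M/M/1 prediction: {mm1_response(lam, mu):.2f} s (meets {sla} s SLA)")
    print(f"M/G/1, scv=4:     {mg1_response(lam, mu, 4.0):.2f} s (violates it)")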

Faenza, F.; Canali, C.; Carbonaro, A.
4th International Conference on Gender Research, ICGR 2021
Abstract

The last Digital Economy and Society Index (DESI) report about the digital performance of the EU member states shows that a large part of the EU population still lacks basic digital skills, even though most jobs require such skills. The report also evidences the extremely low rates of females enrolled in computer science and information engineering academic courses, resulting not only in a massive loss of talent for companies and economies, but also perpetuating gaps in gender inequality in the ICT-related fields. To counteract these effects, the engineering and computer science departments of two Italian universities have organized, since 2014, an innovative form of summer camp, namely 'Digital Girls', dedicated to female students of the third and fourth grade of high school. The summer camp provides girls with a learning experience, based on a team-working and learn-by-doing approach, about coding applied to creative and innovative fields, such as video game programming or Arduino-controlled robot making, and with exposure to inspiring female role models from academia and industry. For its scope, nature (free for the girls to participate) and duration (3-4 entire weeks), the summer camp Digital Girls represents a unique experience not only in Italy but also, to the best of our knowledge, in the world. The COVID-19 emergency imposed deep changes to the 2020 edition of the summer camp, which was carried out completely online and based on different activities with respect to past editions realized in presence. In this paper, we analyze the summer camp experience through its different editions, highlighting the impact of such activity on girls' attitudes and on their plans for future studies and careers. The availability of data on several and deeply different editions of the summer camp allows us to highlight the pros and cons of different approaches to carrying out similar extracurricular activities to reduce the gender gap in ICT education.

Year: 2021 | Pages: 104 - 113

ISSN: 2516-2810 | DOI: 10.34190/IGR.21.051

Faenza, F.; Canali, C.; Carbonaro, A.
Research and Innovation Forum, Rii Forum 2021
Abstract

Digital technology is increasingly central to our lives, particularly among young people. However, a severe concern remains about the existing digital gender gap: although the demand for ICT professionals keeps increasing, women are under-represented in the ICT sectors, and their low participation rates can be traced back to school years and internalised gender stereotypes. The present work explores the summer camp 'Digital Girls' as a case study of involving women from an early age in traditionally male sectors. The summer camp is organized by the engineering and computer science departments of two Italian universities, which since 2014 have worked to disseminate and raise awareness of ICT issues among girls in high school. The summer camp offers a protected context in which participants learn the principles of computer programming, are exposed to workshops on social networks, free software and cybersecurity, and interact with female ICT experts. The paper focuses on the analysis of data relating to the correlation between previous programming experience and the choice of university studies and social perceptions of the work environment.

Year: 2021 | Pages: 193 - 205

ISSN: 2213-8684 | DOI: 10.1007/978-3-030-84311-3_18

Alves de Queiroz, T.; Canali, C.; Iori, M.; Lancellotti, R.
8th International Conference on Variable Neighborhood Search, ICVNS 2021
Abstract

The current trend of modern smart city applications towards a continuous increase in the volume of produced data, together with the concurrent need for low and predictable response latency, has motivated the shift from a cloud to a fog computing approach. A fog computing architecture is likely to represent a preferable solution to reduce the application latency and the risk of network congestion by decreasing the volume of data transferred to cloud data centers. However, the design of a fog infrastructure opens new issues concerning not only how to allocate the data flows coming from sensors to fog nodes and from there to cloud data centers, but also the choice of the number and the location of the fog nodes to be activated among a list of potential candidates. We model this facility location issue through a multi-objective optimization problem. We propose a heuristic based on variable neighborhood search, where the neighborhood structures are based on swap and move operations. The proposed method is tested in a wide range of scenarios, considering a realistic setup of a smart city application with geographically distributed sensors. The experimental evaluation shows that our method achieves stable and better performance with respect to other approaches from the literature, supporting the given application.

Year: 2021 | Pages: 28 - 42

ISBN: 9783030696245 | DOI: 10.1007/978-3-030-69625-2_3

Canali, C.; Lancellotti, R.; Mione, S.
19th IEEE International Symposium on Network Computing and Applications, NCA 2020
Abstract

The success of IoT applications increases the number of online devices and motivates the adoption of a fog computing paradigm to support large and widely distributed infrastructures. However, the heterogeneity of nodes and their connections requires the introduction of load balancing strategies to guarantee efficient operations. This aspect is particularly critical when some nodes are characterized by high communication delays. Some proposals, such as the Sequential Forwarding algorithm, have been presented in the literature to provide load balancing in fog computing systems. However, such algorithms have not been studied for a wide range of working parameters in a heterogeneous infrastructure; furthermore, these algorithms are not designed to take advantage of the highly heterogeneous network delays that are common in fog infrastructures. The contribution of this study is twofold: first, we evaluate the performance of the sequential forwarding algorithm for several load and delay conditions; second, we propose and test a delay-aware version of the algorithm that takes into account the presence of highly variable node connectivity in the infrastructure. The results of our experiments, carried out using a realistic network topology, demonstrate that a delay-blind approach to sequential forwarding may determine poor load balancing performance when network delay represents a major contribution to the response time. Furthermore, we show that the delay-aware variant of the algorithm provides a benefit in this case, with a reduction in the response time of up to 6%.

Year: 2020 | Pages: 1 - 8

ISBN: 9781728183268 | DOI: 10.1109/NCA51143.2020.9306730

Beraldi, R.; Canali, C.; Lancellotti, R.; Mattia, G. P.
23rd ACM International Conference on Modelling, Analysis, and Simulation of Wireless and Mobile Systems, MSWiM 2020
Abstract

Fog computing infrastructures must support increasingly complex applications where a large number of sensors send data to intermediate fog nodes for processing. As the load in such applications (as in the case of a smart city scenario) is subject to significant fluctuations both over time and space, load balancing is a fundamental task. In this paper we study a fully distributed algorithm for load balancing based on random probing of the neighbors' status. A qualifying point of our study is considering the delay during the probe phase and analyzing the impact of stale load information. We propose a theoretical model for the loss of correlation between the actual load on a node and the stale information arriving at the neighbors. Furthermore, we analyze through simulation the performance of the proposed algorithm considering a wide set of parameters and comparing it with an approach from the literature based on random walks. Our analysis points out under which conditions the proposed algorithm can outperform the alternatives.

Year: 2020 | Pages: 123 - 127

ISBN: 9781450381178 | DOI: 10.1145/3416010.3423244

Beraldi, R.; Canali, C.; Lancellotti, R.; Mattia, G. P.
5th International Conference on Fog and Mobile Edge Computing, FMEC 2020
Abstract

The growth of large scale sensing applications (as in the case of smart cities applications) is a main driver of the fog computing paradigm. However, as the load on such fog infrastructures increases, there is a growing need for coordination mechanisms that can provide load balancing. The problem is exacerbated by local overload that may occur due to an uneven distribution of processing tasks (jobs) over the infrastructure, which is typical of real applications such as smart cities, where the sensor deployment is irregular and the workload intensity can fluctuate due to rush hours and users' behavior. In this paper we introduce two load sharing mechanisms that aim to offload jobs towards the neighboring nodes. We evaluate the performance of such algorithms in a realistic environment that is based on a real application for monitoring in a smart city. Our experiments demonstrate that even a simple load balancing scheme is effective in addressing local hot spots that would arise in a non-collaborative fog infrastructure.

Year: 2020 | Pages: 46 - 53

ISBN: 9781728172163 | DOI: 10.1109/FMEC49853.2020.9144962

De Queiroz, T. A.; Canali, C.; Iori, M.; Lancellotti, R.
10th International Conference on Cloud Computing and Services Science, CLOSER 2020
Abstract

The trend of an ever-increasing number of geographically distributed sensors producing data for a plethora of applications, from environmental monitoring to smart cities and autonomous driving, is shifting the computing paradigm from cloud to fog. The increase in the volume of produced data makes the processing and the aggregation of information at a single remote data center unfeasible or too expensive, while latency-critical applications cannot cope with the high network delays of a remote data center. Fog computing is a preferred solution as latency-sensitive tasks can be moved closer to the sensors. Furthermore, the same fog nodes can perform data aggregation and filtering to reduce the volume of data that is forwarded to the cloud data centers, reducing the risk of network overload. In this paper, we focus on the problem of designing a fog infrastructure, considering how many fog nodes are required, which nodes should be selected (from a list of potential candidates), and how to allocate data flows from sensors to fog nodes and from there to cloud data centers. To this aim, we propose and evaluate a formal model based on a multi-objective optimization problem. We thoroughly test our proposal over a wide range of parameters, exploiting a reference scenario setup taken from a realistic smart city application. We compare the performance of our proposal with other approaches to the problem available in the literature, taking into account two objective functions. Our experiments demonstrate that the proposed model is viable for the design of fog infrastructures and can outperform the alternative models, with results that in several cases are close to an ideal solution.

Year: 2020 | Pages: 253 - 260

DOI: 10.5220/0009324702530260

Sangiuliano, M.; Canali, C.; Gorbacheva, E.
2nd International Conference on Gender Research (ICGR)
Abstract

Gender Equality Plans (GEPs) represent a comprehensive tool to promote structural change for gender equality in research institutions. The Horizon 2020 EQUAL-IST project ("Gender Equality Plans for Information Sciences and Technology Research Institutions") supports six Informatics and Information Systems Departments at universities across Europe in initiating the design and implementation of GEPs. This paper focuses on the project outcomes of the first iteration of GEP implementation (October 2017 - May 2018). Based on the internal reports provided by the involved research institutions, we classified the implemented actions as 'structural change actions' or 'preparatory actions' (following the study by Sangiuliano, Canali & Madesi, 2018) and as 'internally-oriented actions' or 'externally-oriented actions'. The implemented actions were analyzed across such intervention areas as Institutional Communication, Human Resources and Management Practices, and Teaching and Services for (Potential) Students. The study addresses the need to investigate the peculiarities of GEP implementation in the Information Sciences and Technology (IST) and Information and Communications Technology (ICT) disciplines, where the gender leak in the recruitment pipeline often starts already at universities, with extremely low numbers of enrolled female students. We therefore aim to understand whether the notable number of actions to attract more female students, initiated within the EQUAL-IST project during the first iteration of GEP implementation, entails a risk of bending the process towards more externally-oriented actions, which are less likely to impact internal power structures, at least in the short run. The second purpose of the paper is to explore whether structural change actions, which have the potential to go beyond merely raising awareness of the topics at stake, tend to be concentrated in the Human Resources and Management Practices area.

Year: 2019 | Pages: 538 - 546

ISSN: 2516-2810 | DOI: n/a

Addabbo, T.; Canali, C.; Facchinetti, G.; Pirotti, T.
21st International Multi-Conference on Advanced Computer Systems, ACS 2018
Abstract

The paper proposes a fuzzy expert system for gender equality evaluation in tertiary education that has been tested in six European universities in Italy, Lithuania, Finland, Germany, Portugal, and Ukraine within the EQUAL-IST Horizon 2020 project, whose goal is to design and implement Gender Equality Plans (GEPs) for IST research institutions. We propose a Fuzzy Expert System (FES), a cognitive model that, by replicating the expert way of learning and thinking, makes it possible to formalize qualitative concepts and to reach a synthetic measure of the institution's gender equality (ranging from 0 to 1, increasing with gender equality achievements), which can then be disentangled into different dimensions. The dimensions included in the model relate to gender equality in the structure of employment (academic and non-academic) and in the governance of the universities, to the equal opportunity machinery, and to the work-life balance policies promoted by the institutions. The rules and weights in the system are the result of a mixed strategy combining input from gender equality experts with a participatory approach promoted within the EQUAL-IST project. The results show heterogeneity in the final index of gender equality and make it possible to detect the most critical areas where new policies should be implemented to improve gender equality. The value of the final gender equality index resulting from the application of the FES is then compared to the gender equality perceived by each institution involved in the project and will also be used to improve awareness of the gender gap along important dimensions of tertiary education.
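
As a minimal flavour of how such a fuzzy aggregation can work (the memberships and weights below are invented; the project's rule base was built with domain experts and a participatory process), each dimension is rated by a triangular membership function peaked at perfect balance, and the ratings are combined by weighted average:

    def tri(x, a, b, c):
        """Triangular membership of x in the fuzzy set (a, b, c)."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def gender_equality_index(indicators, weights):
        """indicators: dimension -> value in [0, 1]; returns index in [0, 1]."""
        score = sum(weights[d] * tri(x, 0.2, 0.5, 0.8)   # "balanced" fuzzy set
                    for d, x in indicators.items())
        return score / sum(weights.values())

    print(gender_equality_index(
        {"employment": 0.45, "governance": 0.30, "work_life": 0.60},
        {"employment": 2.0, "governance": 1.5, "work_life": 1.0}))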

Year: 2019 | Pages: 109 - 121

ISSN: 2194-5357 | DOI: 10.1007/978-3-030-03314-9_10

Canali, C.; Lancellotti, R.
9th International Conference on Cloud Computing and Services Science, CLOSER 2019
Abstract

The growing popularity of the Fog Computing paradigm is driven by the increasing availability of large numbers of sensors and smart devices over a geographically distributed area. The scenario of a smart city is a clear example of this trend. As we face an increasing presence of sensors producing a huge volume of data, the classical cloud paradigm, with few powerful data centers far away from the data sources, becomes inadequate. There is a need to deploy a highly distributed layer of data processors that filter, aggregate and pre-process the incoming data according to a fog computing paradigm. However, a fog computing architecture must distribute the incoming workload over the fog nodes to minimize communication latency while avoiding overload. In the present paper we tackle this problem in a twofold way. First, we propose a formal model for the problem of mapping the data sources over the fog nodes. The proposed optimization problem considers both the communication latency and the processing time on the fog nodes (which depends on the node load). Furthermore, we propose a heuristic based on genetic algorithms to solve the problem in a scalable way. We evaluate our proposal on a geographic testbed that represents a smart-city scenario. Our experiments demonstrate that the proposed heuristic can be used for the optimization in the considered scenario. Furthermore, we perform a sensitivity analysis on the main heuristic parameters.
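
A compressed sketch of such a genetic heuristic follows (random data, made-up latencies and service rates; the paper's chromosome encoding and cost model are richer): each chromosome maps every sensor to a fog node, and the fitness sums the network latency plus a load-dependent processing term.

    import random

    N_SENSORS, N_FOG = 30, 5
    latency = [[random.uniform(1, 20) for _ in range(N_FOG)]
               for _ in range(N_SENSORS)]        # ms, sensor -> fog node

    def fitness(mapping):
        load = [mapping.count(f) for f in range(N_FOG)]
        # processing time grows with the load on the chosen node
        return sum(latency[s][f] + 1.0 / max(0.01, 1.0 - 0.03 * load[f])
                   for s, f in enumerate(mapping))

    def evolve(pop_size=40, generations=200, mut_rate=0.05):
        pop = [[random.randrange(N_FOG) for _ in range(N_SENSORS)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness)
            survivors = pop[:pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = random.sample(survivors, 2)
                cut = random.randrange(N_SENSORS)        # one-point crossover
                child = a[:cut] + b[cut:]
                for s in range(N_SENSORS):               # random mutation
                    if random.random() < mut_rate:
                        child[s] = random.randrange(N_FOG)
                children.append(child)
            pop = survivors + children
        return min(pop, key=fitness)

    print("best cost:", round(fitness(evolve()), 1))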

Year: 2019 | Pages: 81 - 89

DOI: 10.5220/0007699400810089

Canali, C.; Lancellotti, R.
4th International Conference on Computing, Communications and Security, ICCCS 2019
Abstract

The growing popularity of applications involving the processing of huge amounts of data and requiring high scalability and low latency represents the main driver for the success of the fog computing paradigm. A set of fog nodes close to the network edge, hosting functions such as data aggregation, filtering or latency-sensitive applications, can avoid the risk of high latency due to geographic data transfer and network link congestion that hinders the viability of the traditional cloud computing paradigm for a class of applications including support for smart city services or autonomous driving. However, the design of fog infrastructures requires novel techniques for system modeling and performance evaluation able to capture a realistic scenario starting from the geographic location of the infrastructure elements. In this paper we propose PAFFI, a framework for the performance analysis of fog infrastructures in realistic scenarios. We describe the main features of the framework and its capability to automatically generate realistic fog topologies, with an optimized mapping between sensors, fog nodes and cloud data centers, whose performance can be evaluated by means of simulation.

Year: 2019 | Pages: 1 - 8

ISBN: 9781728108759 | DOI: 10.1109/CCCS.2019.8888117

De Michele, R.; Fabbri, T.; Canali, C.
4th EAI International Conference on Smart Objects and Technologies for Social Good, GOODTECHS 2018
Abstract

As Web 2.0 technologies are increasingly being adopted for business purposes, they offer a wide range of opportunities and potential benefits for the enterprises that internally adopt digital social platforms. However, enterprises are usually not able to correctly and effectively evaluate their investments in this direction. To fill this gap, in this paper we propose two metrics to assess the adoption and performance of enterprise social platforms, namely the users (Total and Active) Participation Rate and the Return on Effort, and apply them to data gathered from two companies that recently adopted digital platforms including social tools for collaboration and communication among employees.

Year: 2018 | Pages: 112 - 117

ISBN: 9781450365819 | DOI: 10.1145/3284869.3284882

Canali, Claudia; Corbelli, Andrea; Lancellotti, Riccardo
26th International Conference on Software, Telecommunications and Computer Networks (SoftCOM 2018)
Abstract

Content Delivery Networks for multimedia content are typically managed by a dedicated company. However, there are cases where an enterprise already investing in a dedicated network infrastructure wants to deploy its own private CDN. This scenario is quite different from traditional CDNs for a twofold reason: first, the workload characteristics; second, the impact on the available CDN design choices of having the management of the network infrastructure outsourced to a third party. The contribution of this paper is to introduce and discuss the optimization models used to design the private CDN and to validate our models using a case study.

Year: 2018 | Pages: n/a - n/a

DOI: n/a

Canali, Claudia; Lancellotti, Riccardo; Shojafar, Mohammad
Abstract

The increasing popularity of Software-Defined Network technologies is shaping the characteristics of present and future data centers. This trend, leading to the advent of Software-Defined Data Centers, will have a major impact on the solutions to address the issue of reducing energy consumption in cloud systems. As we move towards a scenario where the network is more flexible and supports virtualization and softwarization of its functions, energy management must take into account not just computation requirements but also network-related effects, and must explicitly consider migrations throughout the infrastructure of Virtual Elements (VEs), which can be both Virtual Machines and Virtual Routers. Failing to do so is likely to result in sub-optimal energy management in current cloud data centers, a shortcoming that will be even more evident in future SDDCs. In this chapter, we propose a joint computation-plus-communication model for VEs allocation that minimizes energy consumption in a cloud data center. The model contains a threefold contribution. First, we consider the data exchanged between VEs and we capture the different connections within the data center network. Second, we model the energy consumption due to VEs migrations considering both data transfer and computational overhead. Third, we propose a VEs allocation process that does not need to introduce and tune weight parameters to combine the two (often conflicting) goals of minimizing the number of powered-on servers and of avoiding too many VE migrations. A case study is presented to validate our proposal. We apply our model considering both computation and communication energy contributions even in the migration process, and we demonstrate that our proposal outperforms the existing alternatives for VEs allocation in terms of energy reduction.

Year: 2018 | Pages: 1 - 1

DOI: 10.1007/978-3-319-94959-8_8

Chiaraviglio, Luca; Blefari-Melazzi, Nicola; Canali, Claudia; Cuomo, Francesca; Lancellotti, Riccardo; Shojafar, Mohammad
19th International Conference on Transparent Optical Networks, ICTON 2017
Abstract

Commodity HardWare (CHW) is currently used in the Internet to deploy large data centers or small computing nodes. Moreover, CHW will also be used to deploy future telecommunication networks, thanks to the adoption of the forthcoming network softwarization paradigm. In this context, CHW machines can be put in Active Mode (AM) or in Sleep Mode (SM) several times per day, based on the traffic requirements from users. However, the transitions between the power states may introduce fatigue effects, which may increase the CHW maintenance costs. In this paper, we perform a measurement campaign on a CHW machine subject to power state changes introduced by SM. Our results show that the temperature change due to power state transitions is not negligible, and that the abrupt stopping of the fans on hot components (such as the CPU) tends to spread the heat over the other components of the CHW machine. In addition, we also show that the CHW failure rate is reduced by a factor of 5 when the number of transitions between AM and SM states is more than 20 per day and the SM duration is around 800 s.

Year: 2017 | Pages: 1 - 4

ISSN: 2162-7339 | DOI: 10.1109/ICTON.2017.8025001

Canali, Claudia; Lancellotti, Riccardo; Shojafar, Mohammad
CLOSER 2017 : 7th International Conference on Cloud Computing and Services Science
Abstract

Reducing energy consumption in cloud data centers is a complex task, where both computation and network related effects must be taken into account. While existing solutions aim to reduce energy consumption by considering computational and communication contributions separately, limited attention has been devoted to models integrating both parts. We claim that this gap leads to sub-optimal management in current cloud data centers, and that it will become even more evident in future architectures characterized by Software-Defined Network approaches. In this paper, we propose a joint computation-plus-communication model for Virtual Machines (VMs) allocation that minimizes energy consumption in a cloud data center. The contribution of the proposed model is threefold. First, we take into account data traffic exchanges between VMs, capturing the heterogeneous connections within the data center network. Second, the energy consumption due to VM migrations is modeled by considering both data transfer and computational overhead. Third, the proposed VMs allocation process does not rely on weight parameters to combine the two (often conflicting) goals of tightly packing VMs to minimize the number of powered-on servers and of avoiding an excessive number of VM migrations. An extensive set of experiments confirms that our proposal, which considers both computation and communication energy contributions even in the migration process, outperforms other approaches for VMs allocation in terms of energy reduction.
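
In the spirit of such a joint objective (the coefficients and functional forms below are illustrative placeholders, not the paper's calibrated model), the energy of a candidate allocation can be evaluated by summing a server term, a traffic term weighted by the path cost between hosts, and a migration term:

    P_IDLE, P_PEAK = 100.0, 250.0   # W per powered-on server (assumed)
    E_BIT = 1e-8                    # J per bit per unit of path cost
    E_MIG = 500.0                   # J per VM migration (transfer + CPU)

    def energy(placement, cpu, traffic, path_cost, migrations, horizon=3600):
        """placement: vm -> host; cpu: vm -> utilization in [0, 1];
        traffic[(u, v)]: bits/s between VMs u and v;
        path_cost[(h1, h2)]: cost of the path between two hosts."""
        hosts = set(placement.values())
        util = {h: sum(cpu[v] for v in placement if placement[v] == h)
                for h in hosts}
        compute = sum(P_IDLE + (P_PEAK - P_IDLE) * min(1.0, util[h])
                      for h in hosts) * horizon
        network = sum(bps * path_cost[(placement[u], placement[v])] * E_BIT
                      for (u, v), bps in traffic.items()) * horizon
        return compute + network + E_MIG * migrations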

Year: 2017 | Pages: n/a - n/a

DOI: 10.5220/0006231400710081

Canali, Claudia; Lancellotti, Riccardo
PERFORMANCE EVALUATION REVIEW
Abstract

Modern cloud data centers typically exploit management strategies to reduce the overall energy consumption. While most of the solutions focus on the energy consumption due to computational elements, the advent of the Software-Defined Network paradigm opens the possibility for more complex strategies taking into account the network traffic exchange within the data center. However, a network-aware Virtual Machine (VM) allocation requires the knowledge of data communication patterns, so that VMs exchanging significant amounts of data can be placed on the same physical host or on low-cost communication paths. In Infrastructure as a Service data centers, the information about VM traffic exchange is not easily available unless a specialized monitoring function is deployed over the data center infrastructure. The main contribution of this paper is a methodology to infer VM communication patterns starting from the input/output network traffic time series of each VM, without relying on special-purpose monitoring. Our reference scenario is a software-defined data center hosting a multi-tier application deployed using horizontal replication. The proposed methodology has two main goals to support a network-aware VM allocation: first, to identify pairs of intensively communicating VMs through correlation-based analysis of the time series; second, to identify VMs belonging to the same vertical stack of a multi-tier application. We evaluate the methodology by comparing different correlation indexes, clustering algorithms and time granularities for monitoring the network traffic. The experimental results demonstrate the capability of the proposed approach to identify interacting VMs, even in a challenging scenario where the traffic patterns are similar in every VM belonging to the same application tier.
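
The correlation-plus-clustering step can be sketched in a few lines (the traffic below is synthetic; the paper compares several correlation indexes, clustering algorithms and time granularities): build a VM-by-VM Pearson correlation matrix from the traffic time series, turn it into a distance, and cluster hierarchically.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    base = rng.normal(size=(2, 500))                 # two hidden app tiers
    series = np.vstack([base[i % 2] + 0.3 * rng.normal(size=500)
                        for i in range(8)])          # 8 VMs, 500 samples

    corr = np.corrcoef(series)                       # VM-by-VM correlation
    dist = 1.0 - np.abs(corr)                        # similarity -> distance
    np.fill_diagonal(dist, 0.0)
    labels = fcluster(linkage(squareform(dist, checks=False), "average"),
                      t=2, criterion="maxclust")
    print(labels)   # VMs driven by the same tier should share a label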

Year: 2017 | Pages: 49 - 56

ISSN: 0163-5999 | DOI: 10.1145/3092819.3092826

Canali, Claudia; Lancellotti, Riccardo
10th EAI International Conference on Performance Evaluation Methodologies and Tools, ValueTools 2016
Abstract

The allocation of VMs over the servers of a cloud data center is becoming a critical task to guarantee energy savings and high performance. Only recently have network-aware techniques for VM allocation been proposed. However, a network-aware placement requires the knowledge of data transfer patterns between VMs, so that VMs exchanging significant amounts of information can be placed on low-cost communication paths (e.g., on the same server). This information is not easy to obtain unless a specialized monitoring function is deployed over the data center infrastructure. In this paper, we propose a correlation-based methodology that aims to infer communication patterns starting from the network traffic time series of each VM, without relying on special-purpose monitoring. Our study focuses on the case where a data center hosts a multi-tier application deployed using horizontal replication. This typical case of application deployment makes the identification of VM communications particularly challenging, because the traffic patterns are similar in every VM belonging to the same application tier. In the evaluation of the proposed methodology, we compare different correlation indexes and we consider different time granularities for the monitoring of network traffic. Our study demonstrates the feasibility of the proposed approach, which can identify which VMs are interacting with each other even in the challenging scenario considered in our experiments.

Year: 2017 | Pages: 251 - 254

ISBN: 9781631901416 | DOI: 10.4108/eai.25-10-2016.2268731

Shojafar, Mohammad; Canali, Claudia; Lancellotti, Riccardo; Baccarelli, Enzo
2016 IEEE Symposium on Computers and Communication, ISCC 2016
Abstract

In this paper, we propose a dynamic resource provisioning scheduler to maximize the application throughput and minimize the computing-plus-communication energy consumption in virtualized networked data centers. The goal is to maximize the energy efficiency while meeting hard QoS requirements on processing delay. The resulting optimal resource scheduler is adaptive, and jointly performs: i) admission control of the input traffic offered by the cloud provider; ii) adaptive balanced control and dispatching of the admitted traffic; iii) dynamic reconfiguration and consolidation of the Dynamic Voltage and Frequency Scaling (DVFS)-enabled virtual machines instantiated onto the virtualized data center. The proposed scheduler can manage changes in the workload without requiring estimation or prediction of its future trend. Furthermore, it takes into account the most advanced mechanisms for power reduction in servers, such as DVFS and reduced power states. The performance of the proposed scheduler is numerically tested and compared against that of some state-of-the-art schedulers, under both synthetically generated and measured real-world workload traces. The results confirm the good delay-vs-energy performance of the proposed scheduler.

Year: 2016 | Pages: 1137 - 1144

ISSN: 1530-1346 | DOI: 10.1109/ISCC.2016.7543890

Shojafar, Mohammad; Canali, Claudia; Lancellotti, Riccardo; Abolfazli, Saeid
6th International Conference on Cloud Computing and Services Science, CLOSER 2016
Abstract

In this paper, we propose an adaptive online energy-aware scheduling algorithm that exploits the reconfiguration capability of Virtualized Networked Data Centers (VNetDCs) processing large amounts of data in parallel. To achieve energy efficiency in such intensive computing scenarios, a jointly balanced provisioning and scaling of the networking-plus-computing resources is required. We propose a scheduler that manages both the incoming workload and the VNetDC infrastructure to minimize the communication-plus-computing energy dissipated by processing incoming traffic under hard real-time constraints on the per-job computing-plus-communication delays. Specifically, our scheduler can distribute the workload among multiple virtual machines (VMs) and can tune the processor frequencies and the network bandwidth. The energy model used in our scheduler is rather sophisticated and also takes into account the internal/external frequency-switching energy costs. Our experiments demonstrate that the proposed scheduler guarantees high quality of service to the users while respecting the service level agreements. Furthermore, it attains minimum energy consumption under two real-world operating conditions: a discrete and finite number of CPU frequencies, and non-negligible VM reconfiguration costs. Our results confirm that the overall energy savings of the data center can be significantly higher with respect to existing solutions.

Year: 2016 | Pages: 387 - 397

ISBN: 9789897581823 | DOI: 10.5220/0005928903870397

Canali, Claudia; Lancellotti, Riccardo
IEEE 4th Symposium on Network Cloud Computing and Applications
Abstract

A major challenge of IaaS cloud data centers is the placement of a huge number of Virtual Machines (VMs) over a physical infrastructure with a high number of nodes. The VM placement process must strive to reduce as much as possible the number of physical nodes, to improve management efficiency, reduce energy consumption and guarantee economic savings. However, since each VM is considered as a black box with independent characteristics, the VM placement task presents scalability issues due to the amount of involved data and to the resulting number of constraints in the underlying optimization problem. For large data centers, this condition often makes it impossible to reach an optimal solution for VM placement. Existing solutions typically exploit heuristics or simplified formulations to solve the placement problem, at the price of possibly sub-optimal solutions. We propose an innovative VM placement technique, namely Class-Based, that takes advantage of existing solutions to automatically group VMs showing similar behavior. The Class-Based technique solves a placement problem that considers only some representatives for each class, and that can be replicated as a building block to solve the global VM placement problem. Our experiments demonstrate that the proposed technique is viable and can significantly improve the scalability of VM placement in IaaS cloud systems with respect to existing alternatives.
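
The building-block idea can be illustrated with a first-fit-decreasing stand-in for the per-block optimization (class names, demands and capacity are invented; the paper solves the small-scale block with an exact formulation): place one block of class representatives, then replicate the block-level solution across the data center.

    def place_block(class_demand, capacity):
        """class_demand: class -> (cpu per VM, VM count in one block)."""
        servers = []                                 # free CPU per server
        assignment = {}
        items = sorted(((cpu, cls) for cls, (cpu, n) in class_demand.items()
                        for _ in range(n)), reverse=True)
        for cpu, cls in items:                       # first-fit decreasing
            for i, free in enumerate(servers):
                if free >= cpu:
                    servers[i] -= cpu
                    assignment.setdefault(cls, []).append(i)
                    break
            else:
                servers.append(capacity - cpu)       # open a new server
                assignment.setdefault(cls, []).append(len(servers) - 1)
        return len(servers), assignment

    n, plan = place_block({"web": (0.2, 6), "db": (0.5, 3)}, capacity=1.0)
    print(n, plan)   # servers per block; replicate for the whole data center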

Year: 2015 | Pages: n/a - n/a

ISBN: 9781467377416 | DOI: 10.1109/NCCA.2015.13

Canali, Claudia; Lancellotti, Riccardo
4th International Conference on Green It Solutions (ICGREEN 2015)
Abstract

The management of IaaS cloud systems is a challenging task, where a huge number of Virtual Machines (VMs) must be placed over a physical infrastructure with multiple nodes. Economic reasons and the need to reduce the ever-growing carbon footprint of modern data centers require an efficient VM placement that minimizes the number of required physical nodes. As each VM is considered as a black box with independent characteristics, the placement process presents scalability issues due to the amount of involved data and to the resulting number of constraints in the underlying optimization problem. For large data centers, this excludes the possibility of reaching an optimal allocation. Existing solutions typically exploit heuristics or simplified formulations to solve the allocation problem, at the price of possibly sub-optimal solutions. We introduce a novel placement technique, namely Class-Based, that exploits available solutions to automatically group VMs showing similar behavior. The Class-Based technique solves a placement problem that considers only some representatives for each class, and that can be replicated as a building block to solve the global VM placement problem. Our experiments demonstrate that the proposed technique is a viable solution that can significantly improve the scalability of VM placement in IaaS cloud systems with respect to existing alternatives.

Year: 2015 | Pages: 43 - 48

ISBN: 9789897581533 | DOI: n/a

Canali, Claudia; Lancellotti, Riccardo
23rd International Conference on Software, Telecommunications and Computer Networks, SoftCOM 2015
Abstract

A critical task in the management of Infrastructure as a Service cloud data centers is the placement of Virtual Machines (VMs) over the infrastructure of physical nodes. However, as the size of data centers grows, finding optimal VM placement solutions becomes challenging. The typical approach is to rely on heuristics that improve VM placement scalability by (partially) discarding information about the VM behavior. An alternative approach providing encouraging results, namely Class-Based Placement (CBP), has been proposed recently. CBP considers VMs divided into classes with similar behavior in terms of resource usage. This technique can obtain high-quality placement because it considers a detailed model of VM behavior on a per-class basis. At the same time, scalability is achieved by considering a small-scale VM placement problem that is replicated as a building block for the whole data center. However, a critical parameter of the CBP technique is the number (and size) of the building blocks to consider. Many small building blocks may reduce the overall quality of the VM placement solution due to fragmentation of the physical node resources over blocks. On the other hand, a few large building blocks may become computationally expensive to handle and may be unsolvable due to the problem complexity. This paper addresses this issue by analyzing the impact of block size on the performance of class-based VM placement. Furthermore, we propose an algorithm to estimate the best number of blocks. Our proposal is validated through experimental results based on a real cloud computing data center.

Year: 2015 | Pages: 290 - 294

ISBN: 9789532900569 | DOI: 10.1109/SOFTCOM.2015.7314075

Canali, Claudia; Lancellotti, Riccardo
23rd IEEE International WETICE Conference, WETICE 2014
Abstract

Identification of VMs exhibiting similar behavior can improve the scalability of monitoring and management in cloud data centers. Existing solutions for automatic VM clustering may be either very accurate, at the price of a high computational cost, or able to provide fast results with limited accuracy. Furthermore, the performance of most solutions may change significantly depending on the specific values of technique parameters. In this paper, we propose a novel approach to model VM behavior using Mixtures of Gaussians (MoGs) to approximate the probability density function of resource utilization. Moreover, we exploit the Kullback-Leibler divergence to measure the similarity between MoGs. The proposed technique is compared against the state of the art through a set of experiments with data coming from a private cloud data center. Our experiments show that the proposed technique can provide high accuracy with limited computational requirements. Furthermore, we show that the performance of our proposal, unlike the existing alternatives, does not depend on any parameter.
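
A minimal sketch of the similarity measure (scikit-learn's GaussianMixture is assumed to be available; the data is synthetic): fit a Mixture of Gaussians to each VM's resource-usage samples, then approximate the Kullback-Leibler divergence between the two mixtures by Monte Carlo sampling, since no closed form exists for MoGs.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_mog(samples, k=2):
        return GaussianMixture(n_components=k).fit(samples.reshape(-1, 1))

    def kl_mc(p, q, n=5000):
        """Monte Carlo estimate of KL(p || q) between two fitted mixtures."""
        x, _ = p.sample(n)
        return float(np.mean(p.score_samples(x) - q.score_samples(x)))

    rng = np.random.default_rng(1)
    vm_a = rng.normal(30, 5, 1000)           # CPU% samples of one VM
    vm_b = np.concatenate([rng.normal(20, 4, 500), rng.normal(70, 6, 500)])
    p, q = fit_mog(vm_a), fit_mog(vm_b)
    print(kl_mc(p, q))                       # larger -> less similar VMs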

Year: 2014 | Pages: 137 - 142

ISSN: 1524-4547 | DOI: 10.1109/WETICE.2014.57

Canali, Claudia; Lancellotti, Riccardo
19th IEEE Symposium on Computers and Communications, ISCC 2014
Abstract

Supporting the emerging digital society is creating new challenges for cloud computing infrastructures, exacerbating scalability issues in the processes of resource monitoring and management in large cloud data centers. Recent research studies show that automatically clustering similar virtual machines running the same software component may improve the scalability of the monitoring process in IaaS cloud systems. However, to avoid misclassifications, the clustering process must take into account long time series (up to weeks) of resource measurements, resulting in a mechanism that is slow and not suitable for a cloud computing model where virtual machines may be frequently added to or removed from the data center. In this paper, we propose a novel methodology that dynamically adapts the length of the time series necessary to correctly cluster each VM depending on its behavior. This approach supports a clustering process that does not have to wait a long time before making decisions about the VM behavior. The proposed methodology exploits elements of fuzzy logic for the dynamic determination of the time series length. To evaluate the viability of our solution, we apply the methodology to a case study considering different algorithms for VM clustering. Our results confirm that after just one day of monitoring we can cluster up to 80% of the VMs without misclassifications, while for the remaining 20% of the VMs longer observations are needed.

Year: 2014 | Pages: n/a - n/a

ISSN: 1530-1346 | DOI: 10.1109/ISCC.2014.6912613

Canali, Claudia; Lancellotti, Riccardo
2013 International Workshop on Multi-Cloud Applications and Federated Clouds, MultiCloud 2013
Abstract

Size and complexity of modern data centers pose scalability issues for the resource monitoring system supporting management operations, such as server consolidation. When we pass from cloud to multi-cloud systems, scalability issues are exacerbated by the need to manage geographically distributed data centers and exchange monitored data across them. While existing solutions typically consider every Virtual Machine (VM) as a black box with independent characteristics, we claim that scalability issues in multi-cloud systems could be addressed by clustering together VMs that show similar behaviors in terms of resource usage. In this paper, we propose an automated methodology to cluster VMs starting from the usage of multiple resources, assuming no knowledge of the services executed on them. This innovative methodology exploits the Bhattacharyya distance to measure the similarity of the probability distributions of VM resources usage, and automatically selects the most relevant resources to consider for the clustering process. The methodology is evaluated through a set of experiments with data from a cloud provider. We show that our proposal achieves high and stable performance in terms of automatic VM clustering. Moreover, we estimate the reduction in the amount of data collected to support system management in the considered scenario, thus showing how the proposed methodology may reduce the monitoring requirements in multi-cloud systems.
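
The distance at the core of the methodology is easy to sketch (the bins, ranges and data below are illustrative): estimate each VM's resource-usage distribution with a normalized histogram and compute the Bhattacharyya distance between the two.

    import numpy as np

    def bhattacharyya(x, y, bins=20, value_range=(0, 100)):
        p, _ = np.histogram(x, bins=bins, range=value_range, density=True)
        q, _ = np.histogram(y, bins=bins, range=value_range, density=True)
        width = (value_range[1] - value_range[0]) / bins
        bc = np.sum(np.sqrt(p * q)) * width   # Bhattacharyya coefficient
        return -np.log(max(bc, 1e-12))        # distance; 0 means identical

    rng = np.random.default_rng(2)
    print(bhattacharyya(rng.normal(40, 5, 1000), rng.normal(42, 5, 1000)))
    print(bhattacharyya(rng.normal(40, 5, 1000), rng.normal(80, 5, 1000)))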

Year: 2013 | Pages: 45 - 52

ISBN: 9781450320504 | DOI: 10.1145/2462326.2462337

Canali, Claudia; Lancellotti, Riccardo
2012 20th International Conference on Software, Telecommunications and Computer Networks, SoftCOM 2012
Abstract

The size of modern data centers supporting cloud computing represents a major challenge in terms of monitoring and management of system resources. Available solutions typically consider every Virtual Machine (VM) as a black box, each with independent characteristics, and face scalability issues by reducing the number of monitored resource samples, considering in most cases only the average CPU utilization of VMs sampled at a very coarse time granularity. We claim that better management without compromising scalability could be achieved by clustering together VMs that show similar behavior in terms of resource utilization. In this paper we propose an automated methodology to cluster VMs depending on the utilization of their resources, assuming no knowledge of the services executed on them. The methodology considers several VM resources, both system- and network-related, and exploits the correlation between resource demands to cluster together similar VMs. We apply the proposed methodology to a case study with data coming from an enterprise data center to evaluate the accuracy of VM clustering and to estimate the reduction in the amount of data collected. The automatic clustering achieved through our approach may simplify the monitoring requirements and help administrators to take decisions on the management of resources in a cloud computing data center.

Year: 2012 | Pages: n/a - n/a

ISBN: 9789532900354 | DOI: n/a

Lancellotti, Riccardo; Andreolini, Mauro; Canali, Claudia; Colajanni, Michele
35th Annual IEEE International Computer Software and Applications Conference, COMPSAC 2011
Abstract

Service providers of Web-based services can take advantage of many convenient features of cloud computing infrastructures, but they still have to implement request management algorithms that are able to face sudden peaks of requests. We consider distributed algorithms implemented by front-end servers to dispatch and redirect requests among application servers. Current solutions based on load-blind algorithms, or considering just server load and thresholds, are inadequate to cope with the demand patterns reaching modern Internet application servers. In this paper, we propose and evaluate a request management algorithm, namely Performance Gain Prediction, that combines several pieces of information (server load, computational cost of a request, user session migration and redirection delay) to predict whether the redirection of a request to another server may result in a shorter response time. To the best of our knowledge, no other study combines information about infrastructure status, user request characteristics and redirection overhead for dynamic request management in cloud computing. Our results show that the proposed algorithm is able to reduce the response time with respect to existing request management algorithms operating on the basis of thresholds.
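
A hedged sketch of the gain-prediction rule (the cost model below is invented, not the paper's predictor): redirect a request only when the predicted remote response time, once the redirection delay and a possible session migration are added, beats the predicted local one.

    def predicted_time(load, request_cost):
        return request_cost * (1.0 + load)    # toy load penalty

    def should_redirect(local_load, remote_load, request_cost,
                        redirect_delay, session_migration_cost=0.0):
        local = predicted_time(local_load, request_cost)
        remote = (predicted_time(remote_load, request_cost)
                  + redirect_delay + session_migration_cost)
        return remote < local                 # redirect only on predicted gain

    print(should_redirect(local_load=0.9, remote_load=0.2,
                          request_cost=100.0, redirect_delay=15.0))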

Year: 2011 | Pages: 401 - 406

ISSN: 0730-3157 | DOI: 10.1109/COMPSAC.2011.59

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
Services and Open Source - SOS’2011 - co-located with EGC 2011
Abstract

The amount of information that can be gathered from social networks may be useful in different contexts, ranging from marketing to intelligence. In this paper, we describe the three main techniques for data acquisition in social networks, the conditions under which they can be applied, and the open problems. We then focus on the main issues that crawlers have to address to get data from social networks, and we propose a novel solution that exploits the cloud computing paradigm for crawling. The proposed crawler is modular by design and relies on a large number of distributed nodes and on the MapReduce framework to speed up the data collection process from large social networks.

Year: 2011 | Pages: n/a - n/a

ISBN: 9782705681128 | DOI: n/a

Canali, Claudia; Casolari, Sara; Lancellotti, Riccardo
2010 IEEE International Workshop on Business Applications of Social Network Analysis, BASNA 2010
Abstract

Social networks are gaining increasing popularity on the Internet, with tens of millions of registered users and an amount of exchanged content accounting for a large fraction of Internet traffic. Due to this popularity, social networks are becoming a critical medium for business and marketing, as testified by viral advertisement campaigns based on such networks. To exploit the potential of social networks, it is necessary to classify the users in order to identify the most relevant ones. For example, in the context of marketing on social networks, it is necessary to identify which users should be involved in an advertisement campaign. However, the complexity of social networks, where each user is described by a large number of attributes, turns the problem of identifying relevant users into a needle-in-a-haystack problem. Starting from a set of user attributes that may be redundant or may not provide significant information for our analysis, we need to extract a limited number of meaningful characteristics that can be used to identify relevant users. We propose a quantitative methodology based on Principal Component Analysis (PCA) to analyze attributes and extract characteristics of social network users from the initial attribute set. The proposed methodology can be applied to identify relevant users in social networks for different types of analysis. As an application, we present two case studies that show how the proposed methodology can be used to identify relevant users for marketing on the popular YouTube network. Specifically, we identify which users may play a key role in content dissemination and how users may be affected by different dissemination strategies.
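
The PCA step can be sketched as follows (the attribute counts and the 90% variance threshold are invented for illustration): center the user-by-attribute matrix and project it onto the leading principal components to obtain a few synthetic characteristics per user.

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 12))           # 500 users, 12 raw attributes
    Xc = X - X.mean(axis=0)                  # center each attribute
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)          # variance per component
    k = int(np.searchsorted(np.cumsum(explained), 0.9)) + 1
    scores = Xc @ Vt[:k].T                   # k characteristics per user
    print(k, scores.shape)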

Year: 2010 | Pages: n/a - n/a

ISBN: 9781424479313 | DOI: 10.1109/BASNA.2010.5730307

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
10th Int'l conference on computer and information technology (CIT-2010)
Abstract

Identifying the set of resources that are expected to receive the majority of requests in the near future, namely the hot set, is at the basis of most content management strategies of any Web-based service. Here we consider social network services, which open interesting novel challenges for hot set identification. Indeed, social connections among the users and variable user access patterns, with continuous operations of resource upload/download, determine a highly variable and dynamic context for the stored resources. We propose adaptive algorithms that combine predictive and social information, and dynamically adjust their parameters according to continuously changing workload characteristics. A large set of experimental results shows that adaptive algorithms can achieve performance close to that of theoretical ideal algorithms and, even more important, that they guarantee stable results for a wide range of workload scenarios.
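
A toy version of such a combination (the EWMA smoothing and the 0.7/0.3 mix are invented, not the paper's adaptive algorithms): score each resource by blending a predictive component, an exponentially weighted average of recent requests, with a social one, here the uploader's follower count.

    ALPHA = 0.5   # EWMA smoothing factor (illustrative)

    def update_score(resource, requests_now, followers, ewma):
        ewma[resource] = (ALPHA * requests_now
                          + (1 - ALPHA) * ewma.get(resource, 0.0))
        return 0.7 * ewma[resource] + 0.3 * followers

    def hot_set(candidates, ewma, k=10):
        """candidates: resource -> (recent requests, uploader followers)."""
        ranked = sorted(candidates,
                        key=lambda r: update_score(r, *candidates[r], ewma),
                        reverse=True)
        return ranked[:k]

    print(hot_set({"clip1": (120, 5000), "clip2": (300, 10),
                   "clip3": (40, 90000)}, ewma={}, k=2))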

Year: 2010 | Pages: N/A - N/A

ISBN: 9780769541082 | DOI: 10.1109/CIT.2010.55

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
15th IEEE Symposium on Computers and Communications, ISCC 2010
Abstract

Social networks have changed the characteristics of the traditional Web and these changes are still ongoing. Nowadays, it is impossible to design valid strategies for content management, information dissemination and marketing in the context of a social network system without considering the popularity of its content and the characteristics of the relations among its users. By analyzing two popular social networks and comparing current results with studies dating back to 2007, we confirm some previous results and we identify novel trends that can be utilized as a basis for designing appropriate content and system management strategies. Our analyses confirm the growth of the two social networks in terms of quantity of content and number of social links among the users. Social navigation has an increasing influence on content popularity because social links represent a primary method through which users search and find content. An interesting novel trend emerging from our study is that subsets of users have a greater impact on content popularity than in previous analyses, with evident consequences on the possibility of implementing content dissemination strategies, such as viral marketing.

Year: 2010 | Pages: 750 - 756

ISSN: 1530-1346 | DOI: 10.1109/ISCC.2010.5546710

D., Blough; Canali, Claudia; G., Resta; P., Santi
12th ACM International Conference on Modeling, Analysis, and Simulation of Wireless and Mobile Systems, MSWiM'09
Abstract

It is common practice in wireless multihop network evaluations to ignore interfering signals below a certain signal strength threshold. This paper investigates the thesis that this produces highly inaccurate evaluations in many cases. We start by defining a bounded version of the physical interference model, in which interference generated by transmitters located beyond a certain distance from a receiver is ignored. We then derive a lower bound on neglected interference and show that it is approximately two orders of magnitude greater than the noise floor for typical parameter values and a surprisingly small number of nodes. We next evaluate the effect of neglected interference through extensive simulations done with a widely-used packet-level simulator (GTNetS), considering 802.11 MAC with both CBR and TCP traffic in networks of varying size and topology. The results of these simulations show very large evaluation errors when neglecting far-away interference: errors in evaluating aggregate throughput when using the default interference model reached up to 210% with 100 nodes, and errors in individual flow throughputs were far greater.

Year: 2009 | Pages: 90 - 95

ISBN: 9781605586168 | DOI: 10.1145/1641804.1641821

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
2009 33rd Annual IEEE International Computer Software and Applications Conference, COMPSAC 2009
Abstract

Several operations of Web-based applications are optimized with respect to the set of resources that will receive the majority of requests in the near future, namely the hot set. Unfortunately, the existing algorithms for the hot set identification do not work well for the emerging social network applications, that are characterized by quite novel features with respect to the traditional Web: highly interactive user accesses, upload and download operations, short lifespan of the resources, social interactions among the members of the online communities. We propose and evaluate innovative combinations of predictive models and social-aware solutions for the identification of the hot set. Experimental results demonstrate that some of the considered algorithms improve the accuracy of the hot set identification up to 30% if compared to existing models, and they guarantee stable and robust results even in the context of social network applications characterized by high variability.

Year: 2009 | Pages: 280 - 285

ISSN: 0730-3157 | DOI: 10.1109/COMPSAC.2009.44

Burresi, S; Canali, Claudia; RENDA M., E; Santi, P.
6th Annual IEEE International Conference on Pervasive Computing and Communications, PerCom 2008
Abstract

Wireless mesh networks are a promising area for the deployment of new wireless communication and networking technologies. In this paper, we address the problem of enabling effective peer-to-peer resource sharing in this type of networks. Starting from the well-known Chord protocol for resource sharing in wired networks, we propose a specialization (called MESHCHORD) that accounts for peculiar features of wireless mesh networks: namely, the availability of a wireless infrastructure, and the 1-hop broadcast nature of wireless communication. Through extensive packet-level simulations, we show that MESHCHORD reduces message overhead by as much as 40% with respect to the basic Chord design, while at the same time improving the information retrieval performance.
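
MeshChord inherits Chord's greedy key lookup over an m-bit identifier ring; the self-contained sketch below shows that shared lookup logic only (with global knowledge for brevity, and none of MeshChord's mesh-specific cross-layer features):

    import hashlib

    M = 8                                        # identifier bits
    RING = 2 ** M
    nodes = sorted(int(hashlib.sha1(n.encode()).hexdigest(), 16) % RING
                   for n in ("n1", "n2", "n3", "n4", "n5"))

    def successor(k):
        """First node at or after identifier k on the ring."""
        return next((n for n in nodes if n >= k), nodes[0])

    def between(x, a, b):                        # x in (a, b) on the ring
        return (a < x < b) if a < b else (x > a or x < b)

    def lookup(start, key, hops=0):
        succ = successor((start + 1) % RING)
        if between(key, start, succ) or key == succ:
            return succ, hops + 1                # successor owns the key
        fingers = [successor((start + 2 ** i) % RING) for i in range(M)]
        nxt = max((f for f in fingers if between(f, start, key)),
                  key=lambda f: (f - start) % RING, default=succ)
        return lookup(nxt, key, hops + 1)        # closest preceding finger

    key = int(hashlib.sha1(b"resource").hexdigest(), 16) % RING
    print(lookup(nodes[0], key))                 # (owner id, hop count)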

Year: 2008 | Pages: 206 - 212

ISBN: 9780769531137 | DOI: 10.1109/PERCOM.2008.25

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
4th IEEE International Conference on Wireless and Mobile Computing, Networking and Communication, WiMob 2008
Abstract

The great diffusion of Mobile Web-enabled devices allows the implementation of novel personalization, location and adaptation services that will place unprecedented strain on the server infrastructure of the content provider. This paper has a twofold contribution. First, we analyze the five-year trend of Mobile Web-based applications in terms of the workload characteristics of the most popular services and their impact on the server infrastructures. As the technological improvements at the server level over the same period are insufficient to meet the computational requirements of future Mobile Web-based services, we propose and evaluate adequate resource management strategies. We demonstrate that pre-adapting a small fraction of the most popular resources can reduce the response time by up to one third, thus coping with the increased computational impact of future Mobile Web services.

Year: 2008 | Pages: 172 - 177

ISBN: 9780769533933 | DOI: 10.1109/WiMob.2008.94

Canali, Claudia; RENDA M., E; Santi, P.
2008 5th IEEE International Conference on Mobile Ad-Hoc and Sensor Systems, MASS 2008
Abstract

Wireless mesh networks are a promising area for the deployment of new wireless communication and networking technologies. In this paper, we address the problem of enabling effective peer-to-peer resource sharing in this type of networks. In particular, we consider the well-known Chord protocol for resource sharing in wired networks and the recently proposed MeshChord specialization for wireless mesh networks, and compare their performance under various network settings for what concerns total generated traffic and load balancing. Both iterative and recursive key lookup implementations in Chord/MeshChord are considered in our extensive performance evaluation. The results confirm the superiority of MeshChord with respect to Chord, and show that recursive key lookup is to be preferred when considering communication overhead, while a similar degree of load unbalancing is observed. However, the recursive lookup implementation reduces the efficacy of MeshChord's cross-layer design with respect to the original Chord algorithm. MeshChord also has the advantage of reducing load unbalancing with respect to Chord, although a moderate degree of load unbalancing is still observed, leaving room for further improvement of the MeshChord design.

Year: 2008 | Pages: 603 - 609

ISBN: 9781424425747 | DOI: 10.1109/MAHSS.2008.4660096

Canali, C.; Garcia, J. D.; Lancellotti, R.
7th IEEE International Symposium on Networking Computing and Applications, NCA 2008
Abstract

The latest generation of the Web is characterized by social networking services where users exchange a growing amount of multimedia content. The impact of these novel services on the underlying Web infrastructures is significantly different from that of traditional Web-based services and has not yet been widely studied. This paper presents a scalability and bottleneck analysis of a Web system supporting social networking services for different scenarios of user interaction patterns, amount of multimedia content and network characteristics. Our study demonstrates that for some social networking services the user interaction patterns may play a fundamental role in the definition of the bottleneck resource and must be considered in the design of systems supporting novel services.

Year: 2008 | Pages: 160 - 167

ISBN: 9780769531922 | DOI: 10.1109/NCA.2008.34

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo; Yu, P. S.
3rd IEEE International Conference on Wireless and Mobile Computing, Networking and Comm. (WIMOB 2007)
Abstract

Personalized services are a key feature for the success of the next generation Web, which is accessed by heterogeneous and mobile client devices. The need to provide high performance and to preserve user data privacy opens a novel dimension in the design of infrastructures and request dispatching algorithms to support personalized services for the Mobile Web. Performance issues are typically addressed by distributed architectures consisting of multiple nodes. Personalized services that are often based on sensitive user information may introduce constraints on the service location when the nodes of the distributed architecture do not provide the same level of security. In this paper, we propose an infrastructure and related dispatching algorithms that aim to combine performance and privacy requirements. The proposed scheme can efficiently support personalized services for the Mobile Web, especially if compared with existing solutions that address performance and privacy issues separately. Our proposal guarantees that up to 97% of the requests accessing sensitive user information are assigned to the most secure nodes, with limited penalties on the response time.

Year: 2007 | Pages: 66 - N/A

ISBN: 9780769528892 | DOI: 10.1109/WIMOB.2007.4390860

Andreolini, Mauro; Canali, Claudia; Lancellotti, Riccardo
6th IEEE International Symposium on Network Computing and Applications, NCA 2007
Abstract

The advent of the mobile Web and the increasing demand for personalized content raise the need for computationally expensive services, such as dynamic generation and on-the-fly adaptation of content. Providing these services exacerbates the performance issues that have to be addressed by the underlying Web architecture. When performance issues are addressed through geographically distributed Web systems with a large number of nodes located at the network edge, the dispatching mechanism that distributes requests among the system nodes becomes a critical element. In this paper, we investigate how the granularity of request dispatching may affect the performance of a distributed Web system for personalized content. Through a real prototype, we compare dispatching mechanisms operating at various levels of granularity for different workload and network scenarios. We demonstrate that the choice of the best granularity for request dispatching strongly depends on the characteristics of the workload in terms of heterogeneity and computational requirements. Coarse-grain dispatching is preferable only when the requests have similar computational requirements. In all other instances of skewed workloads, which we can consider more realistic, fine-grain dispatching increases the control over the node load and allows the system to achieve better performance.

Year: 2007 | Pages: 45 - 52

ISBN: 9780769529226 | DOI: 10.1109/NCA.2007.28

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
2nd International Conference on Automated Production of Cross Media Content for Multi-Channel Distribution, AXMEDIS 2006
Abstract

The increasing heterogeneity of mobile client devices used to access the Web requires run-time adaptation of Web content. A significant trend in these content adaptation services is the growing amount of personalization required by users. Personalized services are and will be a key feature for the success of the next generation Web, but they open two critical issues: performance and profile management. Issues related to the performance of adaptation services are typically addressed by highly distributed architectures with a large number of nodes located closer to the users. On the other hand, the management of user profiles must take into account the nature of these data, which may contain sensitive information, such as geographic position, navigation history and personal preferences, that should be kept private. In this paper, we propose a distributed architecture for ubiquitous Web access that provides high performance while addressing the privacy issues related to the management of sensitive user information. The proposed distributed-core architecture splits the adaptation services over multiple nodes distributed over a two-level topology, thus exploiting parallel adaptations to improve the user-perceived performance.

Year: 2006 | Pages: 11 - 18

ISBN: 9788884535269 | DOI: 10.1109/AXMEDIS.2006.24

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
11th IEEE Symposium on Computers and Communications, ISCC 2006
Abstract

The popularity of ubiquitous Web access requires runtime adaptation of Web content. A significant trend in these content adaptation services is the growing amount of personalization required by users. Personalized services are and will be a key feature for the success of the ubiquitous Web, but they open two critical issues: performance and profile management. Issues related to the performance of adaptation services are typically addressed by highly distributed architectures with a large number of nodes located closer to the users. On the other hand, the management of user profiles must take into account the nature of these data, which may contain sensitive information, such as geographic position, navigation history and personal preferences, that should be kept private. In this paper, we investigate the impact that correct profile management has on distributed infrastructures that provide content adaptation services for ubiquitous Web access. In particular, we propose and compare two scalable solutions for adaptation services deployed on the nodes of a two-level topology. We study, through real prototypes, the performance and the constraints that characterize the proposed architectures.

Year: 2006 | Pages: 774 - 780

ISSN: 1530-1346 | DOI: 10.1109/ISCC.2006.62

Canali, C.; Lancellotti, R.
2nd International Workshop on Advanced Architectures and Algorithms for Internet Delivery and Applications
Abstract

The growing popularity of mobile and location-aware devices allows the deployment of infomobility systems that provide access to information and services for the support of user mobility. Current systems for infomobility services assume that most information is already available on the mobile device and that device connectivity is used for receiving critical messages from a central server. However, we argue that the next generation of infomobility services will be characterized by collaboration and interaction among the users, provided through real-time bidirectional communication between the client devices and the infomobility system. In this paper we propose an innovative architecture to support next generation infomobility services providing interaction and collaboration among mobile users that can travel by several different means of transportation, ranging from cars to trains to foot. We discuss the design issues of the architecture, with particular emphasis on scalability, availability and user data privacy, which are critical in a collaborative infomobility scenario.

Year: 2006 | Pages: 5 - n/a

ISBN: 1595935053 | DOI: 10.1145/1190183.1190189

Canali, Claudia; Cardellini, V; Colajanni, Michele; Lancellotti, Riccardo; Yu, P. S.
5th Symposium on Applications and the Internet, SAINT'2005
Abstract

The complexity of services provided through the Web is continuously increasing, as well as the variety of new devices that are gaining access to the Internet. Tailoring Web and multimedia resources to meet the user and client requirements opens two main novel issues in the research area of content delivery. The working set tends to increase substantially because multiple versions may be generated from the same original resource. Moreover, the content adaptation operations may be computationally expensive. In this paper, we consider third-party infrastructures composed of a geographically distributed system of intermediary and cooperative nodes that provide fast content adaptation and delivery of Web resources. We propose a novel distributed architecture of intermediary nodes organized in two levels. The front-end nodes in the first tier are thin edge servers that locate the resources and forward the client requests to the nodes in the second tier. These interior nodes are fat servers that run the most expensive functions, such as content adaptation, resource caching and fetching. Through real prototypes we compare the performance of the proposed two-level architecture to that of alternative one-level infrastructures where all nodes are fat peers providing the entire set of functions.

Year: 2005 | Pages: 132 - 139

ISBN: 9780769522623 | DOI: 10.1109/SAINT.2005.10
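As an aside for readers unfamiliar with this kind of two-tier design, the request flow can be pictured with a minimal sketch (the hash-based routing, node names, and request format below are illustrative assumptions, not the paper's prototype code):

```python
# Sketch of a thin first-tier edge server routing requests to fat interior nodes.
import hashlib

INTERIOR_NODES = ["fat-node-a", "fat-node-b", "fat-node-c"]  # hypothetical second tier

def locate_interior_node(resource_url: str) -> str:
    """Map a resource to an interior node with a stable hash, so repeated
    requests for the same resource always reach the same cache."""
    digest = hashlib.sha1(resource_url.encode()).hexdigest()
    return INTERIOR_NODES[int(digest, 16) % len(INTERIOR_NODES)]

def handle_request(resource_url: str, device_profile: str) -> str:
    """Thin front-end: locate the resource, then forward the request to the
    fat node that performs adaptation, caching and fetching."""
    node = locate_interior_node(resource_url)
    return f"forward GET {resource_url} [profile={device_profile}] -> {node}"

print(handle_request("http://example.com/img/logo.png", "pda-low-res"))
```

A stable hash keeps requests for the same resource on the same interior node, which is what makes caching at the second tier effective.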

Canali, Claudia; Casolari, Sara; Lancellotti, Riccardo
1st International Workshop on Advanced Architectures and Algorithms for Internet Delivery and Applications, AAA-IDEA 2005
Abstract

The complexity of services provided through the Web is continuously increasing, and issues introduced by both heterogeneous client devices and Web content personalization are becoming a major challenge for the Web. Tailoring Web and multimedia resources to meet the user and client requirements opens two main novel issues in the research area of content delivery. The content adaptation operations may be computationally expensive, requiring high efficiency and scalability in the Web architectures. Moreover, personalization services introduce security and consistency issues for the management of user profile information. In this paper, we propose a novel distributed architecture, with four variants, for the efficient delivery of personalized services, where the nodes are organized in two levels. We discuss how the architectural choices are affected by security and consistency constraints, as well as by the access to privileged information of the content provider. Moreover, we discuss the performance trade-offs of the various choices.

Year: 2005 | Pages: 50 - 57

ISBN: 9780769525259 | DOI: 10.1109/AAA-IDEA.2005.2

Canali, Claudia; Cardellini, V; Colajanni, Michele; Lancellotti, Riccardo
25th IEEE International Conference on Distributed Computing Systems Workshops, ICDCS 2005
Abstract

The increasing popularity of heterogeneous Web-enabled devices and wired/wireless connections motivates the diffusion of content adaptation services that enrich the traditional Web. Different solutions have been proposed for the deployment of efficient adaptation and delivery services: in this paper we focus on intermediate infrastructures that consist of multiple server nodes. We investigate when it is really convenient to place this distributed infrastructure closer to the clients or to the origin servers, and what is the real gain that can be obtained from node cooperation. We evaluate the system performance through three prototypes that are placed in a WAN-emulated environment and are subject to two types of workload.

Year: 2005 | Pages: 331 - 337

ISBN: 9780769523286 | DOI: 10.1109/ICDCSW.2005.109

Canali, Claudia; Casolari, Sara; Lancellotti, Riccardo
1st International Conference on High Performance Computing and Communications, HPCC 2005
Abstract

The ubiquitous Web will require many adaptation and personalization services, which will be consumed by an impressive number of different devices and classes of users. These novel advanced services will stress the content provider platforms in an unprecedented way with respect to the content delivery seen in the last decade. Most services, such as multimedia content manipulation (images, audio and video clips), are computationally expensive, and no single server will be able to provide all of them; hence scalable distributed architectures will be the common basis for the delivery platform. Moreover, these platforms will also have to address novel content management issues related to the replication and to the consistency and privacy requirements of user/client information. In this paper we propose two scalable distributed architectures that are based on a two-level topology. We investigate the pros and cons of such architectures from the security, consistency, and performance points of view.

Year: 2005 | Pages: 1070 - 1076

ISSN: 0302-9743 | DOI: 10.1007/11557654_119

Canali, Claudia; Cardellini, V.; Colajanni, Michele; Lancellotti, Riccardo
International Symposium on Performance Evaluation of Computer and Telecommunication Systems (SPECTS 2004)
Abstract

Content Distribution Networks (CDNs) are a class of successful content delivery architectures used by the most popular Web sites to enhance their performance. The basic idea is to address Internet bottleneck issues by replicating and caching the content of the customer Web sites and serving it from the edge of the network. In this paper we evaluate to what extent the use of a CDN can improve the user-perceived response time. We consider a large set of scenarios with different network conditions and client connections that have not been examined in previous studies. We found that CDNs can offer significant performance gains in normal network conditions, but the advantage of using CDNs can be reduced by heavy network traffic. Moreover, if CDN usage is not carefully designed, the achieved speedup can be suboptimal.

Year: 2004 | Pages: N/A - N/A

ISBN: 9781565552487 | DOI: n/a

Canali, Claudia; Cardellini, V; Colajanni, Michele; Lancellotti, Riccardo; Yu, P. S.
WWW'03
Abstract

The Web is rapidly evolving towards a highly heterogeneous access environment, due to the variety of new devices with diverse capabilities and network interfaces. Hence, there is an increasing demand for solutions that enable the transformation of Web content for adapting and delivering it to diverse destination devices. We investigate different schemes for cooperative proxy caching and transcoding that can be implemented in the existing Web infrastructure, and we compare their performance through prototypes that extend Squid operations to a heterogeneous client environment.

Year: 2003 | Pages: n/a - n/a

ISBN: 9781581136807 | DOI: n/a

Canali, Claudia; Cardellini, V; Colajanni, Michele; Lancellotti, Riccardo; Yu, P. S.
8th International Workshop on Web Content Caching and Distribution
Abstract

A clear trend of the Web is that a variety of new consumer devices with diverse computing powers, display capabilities, and wired/wireless network connections is gaining access to the Internet. Tailoring Web content to match the device characteristics requires functionalities for content transformation, namely transcoding, that are typically carried out by the content Web server or by an edge proxy server. In this paper, we explore how to improve the user response time by considering systems of cooperative edge servers which collaborate in discovering, transcoding, and delivering multiple versions of Web objects. The transcoding functionality opens an entirely new space of investigation in the research area of distributed cache cooperation, because it transforms the proxy servers from content repositories along the client-server path into pro-active network elements providing computation and adaptive delivery. We propose and investigate different algorithms for cooperative discovery, delivery, and transcoding in the context of edge servers organized in hierarchical and flat peer-to-peer topologies. We compare the performance of the proposed schemes through ColTrES (Collaborative Transcoder Edge Services), a flexible prototype testbed that implements all the considered mechanisms.

Year: 2003 | Pages: 205 - 222

ISBN: 9781402022579 | DOI: 10.1007/1-4020-2258-1_14
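The cooperative discovery-and-transcoding flow the abstract refers to can be sketched roughly as follows (the cache model, integer version encoding, and outcome labels are assumptions for illustration, not ColTrES code):

```python
# Illustrative decision chain for serving a requested object version.
def serve(obj_id, version, local_cache, peers):
    # 1. Exact version already cached locally: cheapest path.
    if (obj_id, version) in local_cache:
        return "local hit"
    # 2. A richer local version can be transcoded down to the target.
    if any(o == obj_id and v >= version for (o, v) in local_cache):
        return "local transcode"
    # 3. Cooperative discovery: ask peer edge servers for the exact
    #    version first, then for any transcodable version.
    for peer in peers:
        if (obj_id, version) in peer:
            return "remote hit"
    for peer in peers:
        if any(o == obj_id and v >= version for (o, v) in peer):
            return "remote fetch + transcode"
    # 4. Fall back to the origin server and transcode from the original.
    return "origin fetch + transcode"

# Versions modelled as integers: higher = richer content (an assumption).
cache = {("logo", 3)}
peer_caches = [{("logo", 1)}, {("news", 2)}]
print(serve("logo", 2, cache, peer_caches))   # -> local transcode
print(serve("news", 2, cache, peer_caches))   # -> remote hit
```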


Ph.D. Thesis
Faenza, Francesco
Abstract

STEM (Science, Technology, Engineering, and Mathematics) education has acquired significant recognition in recent years, encouraging institutions, schools, and organizations to improve student involvement and readiness in these subjects. STEM education camps have evolved as a popular way to develop young minds and cultivate interest and abilities in STEM disciplines. Despite their popularity, there is a crucial gap in the sector: to date, no study has provided an evaluation instrument that can be immediately reused or readily adapted for extracurricular activities, and no research provides design or development instructions for such a tool. This research project addresses the challenges of designing, evaluating, and analyzing extracurricular STEM activities, specifically in the context of ICT (Information and Communications Technology) education, with a strong emphasis on promoting gender equity and countering stereotypes. Given the 2022 Digital Economy and Society Index (DESI), which predicted a scarcity of ICT professionals in Europe and stressed the critical need to expand ICT skills, it is essential to improve the efficacy of efforts targeted at addressing this skills gap. While programs like EU Code Week work to narrow the digital skills gap, the vast panorama of STEM education activities requires a systematic methodology to evaluate their effectiveness. This research project embarks on a multidisciplinary journey with three major contributions. First, a comprehensive review of existing extracurricular STEM activities is employed to categorize them based on age groups, duration, expenses, and other factors, laying the groundwork for a deeper investigation into the benefits of these projects and the development of assessment tools. Second, the research delves into data analysis, focusing on the relationship between camp characteristics and intended outcomes: regression analysis is used to identify relationships and trends, providing valuable insights for program improvement. Finally, "NextPyter," a novel framework for facilitating collaborative research endeavours, is introduced. This platform incorporates Jupyter Notebook for data analysis into the collaborative file-sharing platform NextCloud, boosting interdisciplinary cooperation and project management and, as a result, improving the research experience. To summarize, this research project addresses the pressing need for comprehensive tools, insights, and collaborative platforms in STEM/ICT education evaluation and analysis. It provides a structured approach to assessing extracurricular initiatives, uncovers valuable insights for program design, and offers a user-friendly platform for data analysis and collaborative research. By bridging these critical gaps, it contributes to the advancement of STEM education and equitable participation in these fields.

Year: 2024 | Pages: n/a - n/a

DOI: n/a
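The regression step mentioned in the abstract can be illustrated with a minimal, self-contained example (all variables and data are invented; this is not the thesis dataset or code):

```python
# Toy ordinary-least-squares fit relating camp characteristics to an outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 40
duration_days = rng.integers(3, 20, n)        # camp characteristic
mentor_ratio = rng.uniform(0.05, 0.3, n)      # mentors per participant
# Synthetic outcome with known structure plus noise, for illustration only.
outcome = 15.0 * mentor_ratio + 0.1 * duration_days + rng.normal(0, 0.5, n)

# OLS via least squares on a design matrix with an intercept column.
X = np.column_stack([np.ones(n), duration_days, mentor_ratio])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("intercept, duration, mentor_ratio coefficients:", np.round(coef, 3))
```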

Chapters
de Queiroz, Thiago Alves; Canali, Claudia; Iori, Manuel; Lancellotti, Riccardo
Abstract

Internet of Things (IoT) based applications have recently experienced a remarkable diffusion in many different contexts, such as automotive, e-health, public security, industrial applications, energy, and waste management. These kinds of applications are characterized by geographically distributed sensors that collect data to be processed through algorithms of Artificial Intelligence (AI). Due to the vast amount of data to be processed by AI algorithms and the severe latency requirements of some applications, the emerging Edge Computing paradigm may represent the preferable choice for the supporting infrastructure. However, the design of edge computing infrastructures opens several new issues concerning the allocation of data flows coming from sensors over the edge nodes, and the choice of the number and the location of the edge nodes to be activated. The service placement issue can be modeled through a multi-objective optimization aiming at minimizing two aspects: the response time for data transmission and processing in the sensors-edge-cloud path, and the (energy or monetary) cost related to the number of powered-on edge nodes. Two heuristics, based on Variable Neighborhood Search (VNS) and on Genetic Algorithms (GA), are proposed and evaluated over a wide range of scenarios, considering a realistic smart city application with 100 sensors and up to 10 edge nodes. Both heuristics can return practical solutions for the given application. The results indicate that a suitable topology for a network-bound scenario requires fewer active edge nodes than a CPU-bound scenario. In terms of performance, the VNS approach outperformed the GA approach in almost every condition, reaching a gain of up to almost 40% when the network delay plays a significant role and the load is higher. Hence, the experimental tests demonstrate that the proposed heuristics are useful to support the design of edge computing infrastructures for modern AI-based applications relying on data collected by geographically distributed IoT sensors.

Year: 2022 | Pages: 1 - 30

ISSN: 2199-1073 | DOI: 10.1007/978-3-030-80821-1_1
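To make the two objectives concrete, here is a minimal sketch of how a candidate placement could be scored (the M/M/1 processing-time model and all numbers are illustrative assumptions, not the chapter's exact model):

```python
# Score a sensor-to-edge assignment on the two objectives from the abstract:
# average response time along the sensor-edge path, and number of active nodes.
def response_time(assignment, net_delay, service_rate, arrival):
    """assignment: sensor -> edge node; returns average response time (s)."""
    load = {}
    for s, e in assignment.items():
        load[e] = load.get(e, 0.0) + arrival[s]
    total = 0.0
    for s, e in assignment.items():
        mu, lam = service_rate[e], load[e]
        assert lam < mu, "edge node overloaded"
        total += net_delay[(s, e)] + 1.0 / (mu - lam)  # M/M/1 sojourn time
    return total / len(assignment)

def num_active_nodes(assignment):
    return len(set(assignment.values()))

# Toy instance: two sensors mapped to one of the candidate edge nodes.
assign = {"s1": "e1", "s2": "e1"}
delay = {("s1", "e1"): 0.01, ("s2", "e1"): 0.02}
mu = {"e1": 10.0}
lam = {"s1": 2.0, "s2": 3.0}
print(response_time(assign, delay, mu, lam), num_active_nodes(assign))
```

A heuristic such as VNS or GA would search over assignments, trading the first objective against the second.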

Canali, C.; Lancellotti, R.
Abstract

The need for scalable and low-latency architectures that can process large amounts of data from geographically distributed sensors and smart devices is a main driver for the popularity of the fog computing paradigm. A typical scenario that explains the success of fog computing is a smart city where monitoring applications collect and process a huge amount of data from a plethora of sensing devices located in streets and buildings. The classical cloud paradigm may provide poor scalability, as the amount of transferred data risks congesting the data center links, while the high latency due to the distance of the data center from the sensors may create problems for latency-critical applications (such as the support for autonomous driving). A fog node can act as an intermediary in the sensor-to-cloud communications, where pre-processing may be used to reduce the amount of data transferred to the cloud data center and to perform latency-sensitive operations. In this book chapter we address the problem of mapping sensors over the fog nodes with a twofold contribution. First, we introduce a formal model for the mapping problem that aims to minimize response time, considering both network latency and processing time. Second, we present an evolutionary-inspired heuristic (using Genetic Algorithms) for a fast and accurate resolution of this problem. A thorough experimental evaluation, based on a realistic scenario, provides insight into the nature of the problem, confirms the viability of GAs to solve the problem, and evaluates the sensitivity of such a heuristic with respect to its main parameters.

Year: 2020 | Pages: 177 - 198

ISSN: 1865-0929 | DOI: 10.1007/978-3-030-49432-2_9
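A hedged sketch of the GA encoding suggested by the abstract: a chromosome maps each sensor to a fog node, and the fitness combines network latency with processing time (the fitness form, the M/M/1 term, and all parameters are assumptions, not the chapter's model):

```python
# Minimal GA over sensor-to-fog mappings.
import random

SENSORS, NODES = 6, 3
random.seed(1)
latency = [[random.uniform(0.005, 0.05) for _ in range(NODES)]
           for _ in range(SENSORS)]     # sensor-to-node network latency (s)
mu = [20.0, 15.0, 10.0]                 # node service rates (jobs/s)
lam = 2.0                               # per-sensor arrival rate (jobs/s)

def fitness(chrom):
    """Lower is better: mean network latency plus M/M/1 processing time."""
    load = [0.0] * NODES
    for node in chrom:
        load[node] += lam
    total = 0.0
    for s, node in enumerate(chrom):
        if load[node] >= mu[node]:
            return float("inf")         # infeasible: node overloaded
        total += latency[s][node] + 1.0 / (mu[node] - load[node])
    return total / SENSORS

def mutate(chrom, p=0.2):
    return [random.randrange(NODES) if random.random() < p else g for g in chrom]

def evolve(pop_size=30, generations=50):
    pop = [[random.randrange(NODES) for _ in range(SENSORS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]    # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, SENSORS)
            children.append(mutate(a[:cut] + b[cut:]))  # one-point crossover
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 4))
```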

Bicocchi, N.; Canali, C.; Lancellotti, R.
Abstract

Modern cloud data centers typically exploit management strategies to reduce the overall energy consumption. While most of the solutions focus on the energy consumption due to computational elements, the optimization of network-related aspects of a data center is becoming more and more important, considering also the advent of the Software-Defined Network paradigm. However, an enabling step to implement network-aware Virtual Machine (VM) allocation is the knowledge of data exchange patterns. With this knowledge, the pairs of VMs that exchange large amounts of information can be placed on well-connected hosts (or on the same physical host). Unfortunately, in Infrastructure as a Service data centers, detailed knowledge of VM data exchanges is seldom available without the deployment of a specialized (and costly) monitoring infrastructure. In this paper, we propose a technique to infer VM communication patterns starting from the input/output network traffic time series of each VM. We discuss both the theoretical aspects of this technique and the design challenges for its implementation. A case study is used to demonstrate the viability of our idea.

Year: 2019 | Pages: 201 - 219

ISSN: 2522-8595 | DOI: 10.1007/978-3-319-92378-9_13
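The core intuition can be sketched in a few lines: if one VM's outgoing-traffic time series correlates strongly with another VM's incoming-traffic series, the two likely communicate (the synthetic series and the correlation threshold below are illustrative assumptions, not the paper's estimator):

```python
# Infer a likely VM-to-VM flow from the correlation of traffic time series.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(300)
burst = (np.sin(t / 10.0) > 0.5).astype(float)    # shared on/off traffic pattern

out_a = burst * 5.0 + rng.normal(0, 0.3, t.size)  # VM A output (MB/s)
in_b = burst * 5.0 + rng.normal(0, 0.3, t.size)   # VM B input: follows A's output
in_c = rng.normal(2.0, 0.5, t.size)               # VM C input: unrelated

def linked(out_series, in_series, threshold=0.8):
    """Declare a likely flow when Pearson correlation exceeds the threshold."""
    r = np.corrcoef(out_series, in_series)[0, 1]
    return r, r > threshold

print(linked(out_a, in_b))   # high correlation -> likely communication
print(linked(out_a, in_c))   # low correlation  -> likely no direct exchange
```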

Fabbri, T; Scapolan, A; Bertolotti, F; Canali, C
Abstract

The increasing use of digital technologies in organizational contexts, like collaborative social platforms, has not only changed the way people work but also provided organizations with new and wide ranges of data sources that could be analyzed to enhance organizational- and individual-level outcomes, especially when integrated with more traditional tools. In this study, we explore the relationship between data flows generated by employees on companies’ digital environments and employees’ attitudes measured through surveys. In a sample of 107 employees, we collected data on the number and types of actions performed on the company’s digital collaborative platform over a two-year period and the level of organizational embeddedness (fit, sacrifice, and links dimensions) through two rounds of surveys over the same period. The correlation of the quantity and quality of digital actions with the variation of organizational embeddedness over the same period shows that workers who engaged in more activities on the digital platform also experienced an increase in their level of organizational embeddedness mainly in the fit dimension. In addition, the higher the positive variation of fit, the more employees performed both active and passive digital actions. Finally, the higher the variation of organizational embeddedness, the more employees performed networking digital behaviors.

Year: 2019 | Pages: 161 - 175

ISSN: 1877-6361 | DOI: 10.1108/S1877-636120190000023012

Canali, C.; Addabbo, T.; Moumtzi, V.
Abstract

The extremely low enrollment rates of women compared to men in Computer Science (CS) and Information Systems university programs result not only in a massive loss of talent for companies and economies but also perpetuate gender inequality in the ICT field. To address this, universities and research organizations are gradually taking initiatives against such gender imbalance, trying to intervene and raise awareness of a complex set of deeply rooted cultural and societal gender stereotypes, including gender bias and the association of ICT with masculinity, that permeate early school education, STEM teaching practices and parents' attitudes. This approach is based on several studies of current students highlighting that female bachelor students in CS have lower levels of self-confidence than their male counterparts, which can negatively impact their plans to continue their studies. Towards this direction, the Horizon 2020 EQUAL-IST (Gender Equality Plans for Information Sciences and Technology Research Institutions) project supports six universities across Europe (Italy, Lithuania, Germany, Ukraine, Finland, Portugal) in designing and implementing actions towards gender equality, with a specific focus on the ICT/IST area. The universities have set up several concrete initiatives to attract female students towards ICT studies. Specifically, this paper presents the best practice implemented at the University of Modena and Reggio Emilia (UniMORE): the Ragazze Digitali (Digital Girls) Summer Camp. The summer camp offers female students in the third and fourth year of high school a first-hand experience based on a learn-by-doing approach to coding applied to creative and innovative fields, as well as inspiring female role models from academia and industry. Due to its scope, nature (participation is free for the girls) and duration (four full weeks), the Ragazze Digitali Summer Camp represents a unique experience not only in Italy but also in Europe and, to the best of our knowledge, in the world. The paper describes the Summer Camp experience, highlighting the impacts of this experience on the female students, with particular attention to changed attitudes and plans for their future studies and careers.

Year: 2019 | Pages: 121 - 128

ISBN: 9781912764167 | DOI: n/a

Addabbo, T.; Canali, C.; Facchinetti, G.; Grandi, Alessandro; Pirotti, T.
Abstract

Gender inequality in research and innovation is well documented (European Commission, 2016), and tools to measure and monitor it have been proposed and tested within EU funded projects such as GenderTime (Badaloni & Perini, 2016) or Effective Gender Equality in Research and Academia (EGERA) (http://www.egera.eu/). The evaluation proposal at the heart of this contribution has been developed within the EQUAL-IST project (Gender Equality Plans for Information Sciences and Technology Research Institutions), funded under the European Union's Horizon 2020 research and innovation programme, which aims at introducing structural changes in research organizations to enhance gender equality within Information System and Technology Institutions. The dimensions and indicators used to measure gender equality are consistent with those that the literature on gender equality in research and academic institutions has shown to be significant. Our contribution innovates in how gender equality is measured by using Fuzzy Multi-Criteria Decision Analysis (FMCDA). We propose a Fuzzy Expert System, a cognitive model that, by replicating the expert way of learning and thinking, allows qualitative concepts to be formalized and yields a synthetic measure of the institution's gender equality (ranging from 0 to 1, increasing with gender equality achievements) that can then be disentangled into its different dimensions. This latter characteristic of the proposed model can be fruitfully used by policy makers and equal opportunity officers in order to detect and address the critical elements in the organization and carry out changes to improve gender equality. A first application of the model has been tested within the EQUAL-IST project and is available for other universities and research institutions wishing to obtain an assessment of their organization in terms of gender equality. Further developments of the model, together with its wider implementation, include the fuzzy-logic assessment of gender equality policies and of institutional factors affecting gender equality within the institution.

Year: 2018 | Pages: 1 - 9

ISBN: 9781911218777 | DOI: n/a
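As a rough illustration of the kind of fuzzy aggregation described here, the sketch below fuzzifies crisp indicators and combines them into a single score in [0, 1] (the indicators, membership functions, and weights are invented; this is not the EQUAL-IST model):

```python
# Toy fuzzy aggregation of gender-equality indicators into one 0-1 score.
def ramp(x, low, high):
    """Degree to which x counts as 'high' (0 below low, 1 above high)."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def equality_score(indicators, weights):
    """Weighted mean of fuzzy 'high equality' degrees; the per-dimension
    degrees can be inspected separately to locate critical areas."""
    total = sum(weights.values())
    return sum(w * ramp(indicators[k], 0.3, 0.7)
               for k, w in weights.items()) / total

ind = {"share_female_staff": 0.35, "pay_gap_closed": 0.8, "leadership": 0.25}
wts = {"share_female_staff": 2.0, "pay_gap_closed": 1.0, "leadership": 1.0}
print(round(equality_score(ind, wts), 3))
```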

Canali, Claudia; Colajanni, Michele; Lancellotti, Riccardo
Abstract

The widespread diffusion and technological improvements of wireless networks and portable devices are facilitating mobile access to the Web and to Web 2.0 services. The emerging Mobile Web 2.0 scenario still requires appropriate solutions to guarantee user interactions that are comparable with the present levels of service. In this chapter we classify the most important services for Mobile Web 2.0, and we identify the key functions that are required to support each category of Mobile Web 2.0 services. We discuss some possible technological solutions to implement these functions at the client and at the server level, and we identify some research issues that are still open.

Year: 2011 | Pages: n/a - n/a

ISBN: 9781439800829 | DOI: n/a

Canali, Claudia; Cardellini, V.; Colajanni, Michele; Lancellotti, Riccardo
Abstract

This chapter explores the issues of content delivery through CDNs, with a special focus on the delivery of dynamically generated and personalized content. We describe the main functions of a modern Web system and we discuss how the delivery performance and scalability can be improved by replicating the functions of a typical multi-tier Web system over the nodes of a CDN. For each solution, we present the state of the art in the research literature, as well as the available industry-standard products adopting the solution. Furthermore, we discuss the pros and cons of each CDN-based replication solution, pointing out the scenarios that provide the best benefits and the cases where it is detrimental to performance.

Year: 2008 | Pages: 105 - 126

ISSN: 1876-1100 | DOI: 10.1007/978-3-540-77887-5_4

Canali, Claudia; Rabinovich, M; Xiao, Z.
Abstract

With the growing demand for computing resources and network capacity, providing scalable and reliable computing services on the Internet becomes a challenging problem. Recently, much attention has been paid to the “utility computing” concept, which aims to provide computing as a utility service similar to water and electricity. While the concept is very challenging in general, in this chapter we focus our attention on a restricted environment: Web applications. Given the ubiquitous use of Web applications on the Internet, this environment is rich and important enough to warrant careful research. This chapter describes the approaches and challenges related to the architecture and algorithm design in building such a computing platform.

Year: 2005 | Pages: 131 - 152

ISBN: 9780387243566 | DOI: n/a