Intelligent Transportation Systems (ITS) consist of a complex set of technologies applied to road agents, aiming to make road usage more efficient and safer. Safety is particularly important for Vulnerable Road Users (VRUs), entities for which implementing automatic safety solutions is challenging because of their agility and hard-to-anticipate behavior. However, applying Machine Learning (ML) techniques to Vehicle-to-Anything (V2X) data has the potential to enable such systems. This paper proposes an accident prediction system for VRUs (motorcycles) that applies Long Short-Term Memory networks (LSTMs) to communication data generated with the VEINS simulation framework (pairing SUMO and ns-3). Results show that the proposed system predicts 96% of the accidents in Scenario A (with a 4.53 s Average Prediction Time, a 41% Correct Decision Percentage (CDP) and 78 False Positives (FP)) and 95% in Scenario B (with a 4.44 s Average Prediction Time, a 43% CDP and 68 FP).
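As a rough illustration of the pipeline described above, the sketch below turns a per-vehicle stream of V2X CAM-style messages into fixed-length sequences of the kind an LSTM classifier consumes. The message fields, window size, and step are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: windowing V2X messages (as VEINS might produce) into
# fixed-length LSTM input sequences. Field names are invented for illustration.

def make_windows(messages, window=10, step=1):
    """Slide a fixed-size window over one vehicle's message stream.

    Each message is a dict with position, speed and heading; each window
    becomes one LSTM input sequence of shape (window, 4).
    """
    sequences = []
    for start in range(0, len(messages) - window + 1, step):
        chunk = messages[start:start + window]
        sequences.append([[m["x"], m["y"], m["speed"], m["heading"]]
                          for m in chunk])
    return sequences

msgs = [{"x": float(t), "y": 0.0, "speed": 13.9, "heading": 90.0}
        for t in range(12)]
seqs = make_windows(msgs, window=10, step=1)
# 12 messages, window 10, step 1 -> 3 windows of 10 timesteps each
```

Each sequence would then be labeled (accident within the prediction horizon or not) and fed to the LSTM for training.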
{"title":"Machine Learning for VRUs accidents prediction using V2X data","authors":"B. Ribeiro, M. J. Nicolau, Alexandre J. T. Santos","doi":"10.1145/3555776.3578263","DOIUrl":"https://doi.org/10.1145/3555776.3578263","url":null,"abstract":"Intelligent Transportation Systems (ITS) are systems that consist on an complex set of technologies that are applied to road agents, aiming to provide a more efficient and safe usage of the roads. The aspect of safety is particularly important for Vulnerable Road Users (VRUs), which are entities for whose implementation of automatic safety solutions is challenging for their agility and hard to anticipate behavior. However, the usage of ML techniques on Vehicle to Anything (V2X) data has the potential to implement such systems. This paper proposes a VRUs (motorcycles) accident prediction system by using Long Short-Term Memorys (LSTMs) on top of communication data that is generated using the VEINS simulation framework (pairing SUMO and ns-3). Results show that the proposed system is able to predict 96% of the accidents on Scenario A (with a 4.53s Average Prediction Time and a 41% Correct Decision Percentage (CDP) - 78 False Positives (FP)) and 95% on Scenario B (with a 4.44s Average Prediction Time and a 43% CDP - 68 FP).","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80725191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hypervisor vulnerabilities cause severe security issues in multi-tenant cloud environments because hypervisors guarantee isolation among virtual machines (VMs). Unfortunately, hypervisor vulnerabilities are continuously reported, and device emulation in hypervisors is one of the hotbeds because of its complexity. Although applying patches to fix the vulnerabilities is a common way to protect hypervisors, developing those patches takes time because internal knowledge of hypervisors is required; until the patches are released, hypervisors remain exposed to exploitation. This paper proposes Nioh-PT, a framework for filtering illegal I/O requests that reduces the vulnerability windows of device emulation. The key insight behind Nioh-PT is that malicious I/O requests contain illegal I/O sequences: series of I/O requests that are not issued during normal I/O operations. Nioh-PT filters out those illegal I/O sequences and protects device emulators against exploitation. Because Nioh-PT is decoupled from hypervisors and device emulators, the filtering rules, which define illegal I/O sequences for virtual device exploits, can be specified without knowledge of the internal implementation of hypervisors and virtual devices. We develop 11 filtering rules against four real-world vulnerabilities in device emulation, including CVE-2015-3456 (VENOM) and CVE-2016-7909. We demonstrate that Nioh-PT with these filtering rules protects against the virtual device exploits and introduces negligible overhead of up to 8% on filesystem and storage benchmarks.
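The sequence-filtering idea can be sketched as follows. The rule format and the I/O request representation are invented for illustration; they are not Nioh-PT's actual interfaces.

```python
# Illustrative sketch of sequence-based I/O filtering in the spirit of
# Nioh-PT: a rule names an I/O request sequence that never occurs during
# normal device operation, and a request completing such a sequence is
# rejected before it reaches the device emulator.

from collections import deque

class SequenceFilter:
    def __init__(self, illegal_sequences):
        self.rules = [tuple(r) for r in illegal_sequences]
        depth = max(len(r) for r in self.rules)
        self.history = deque(maxlen=depth)   # sliding window of recent requests

    def allow(self, request):
        """Return False if appending `request` completes an illegal sequence."""
        self.history.append(request)
        recent = tuple(self.history)
        for rule in self.rules:
            if recent[-len(rule):] == rule:
                return False
        return True

# Invented rule: two consecutive writes to a data port without an
# intervening command are never issued during normal operation.
f = SequenceFilter([[("write", "DATA"), ("write", "DATA")]])
ok1 = f.allow(("write", "DATA"))   # True: no illegal sequence yet
ok2 = f.allow(("write", "DATA"))   # False: completes the illegal pair
```

A real deployment would track per-device state; the sketch only shows the matching step.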
{"title":"Nioh-PT: Virtual I/O Filtering for Agile Protection against Vulnerability Windows","authors":"Mana Senuki, Ken-Ichi Ishiguro, K. Kono","doi":"10.1145/3555776.3577687","DOIUrl":"https://doi.org/10.1145/3555776.3577687","url":null,"abstract":"Hypervisor vulnerabilities cause severe security issues in multi-tenant cloud environments because hypervisors guarantee isolation among virtual machines (VMs). Unfortunately, hypervisor vulnerabilities are continuously reported, and device emulation in hypervisors is one of the hotbeds because of its complexity. Although applying patches to fix the vulnerabilities is a common way to protect hypervisors, it takes time to develop the patches because the internal knowledge on hypervisors is mandatory. The hypervisors are exposed to the threat of the vulnerabilities exploitation until the patches are released. This paper proposes Nioh-PT, a framework for filtering illegal I/O requests, which reduces the vulnerability windows of the device emulation. The key insight of Nioh-PT is that malicious I/O requests contain illegal I/O sequences, a series of I/O requests that are not issued during normal I/O operations. Nioh-PT filters out those illegal I/O sequences and protects device emulators against the exploitation. The filtering rules, which define illegal I/O sequences for virtual device exploits, can be specified without the knowledge on the internal implementation of hypervisors and virtual devices, because Nioh-PT is decoupled from hypervisors and the device emulators. We develop 11 filtering rules against four real-world vulnerabilities in device emulation, including CVE-2015-3456 (VENOM) and CVE-2016-7909. We demonstrate that Nioh-PT with these filtering rules protects against the virtual device exploits and introduces negligible overhead by up to 8% for filesystem and storage benchmarks.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76072650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christian Banse, Immanuel Kunz, Nico Haas, Angelika Schneider
Continuous certification of cloud services requires a high degree of automation in collecting and evaluating evidence. Prior approaches are often specific to one cloud provider or one certification catalog, which makes it costly and complex to achieve conformance to multiple certification schemes and to cover multi-cloud solutions. In this paper, we present a novel approach to continuous certification that is scheme- and vendor-independent. Leveraging an ontology of cloud resources and their security features, we generalize vendor- and scheme-specific terminology into a new model of so-called semantic evidence. In combination with generalized metrics elicited from the requirements of the EUCS and the CCMv4, we present a framework for collecting and assessing such semantic evidence across multiple cloud providers. This makes it possible to conduct continuous cloud certification while reusing metrics and evidence across multiple certification schemes. A performance benchmark of the framework's prototype implementation shows that up to 200,000 pieces of evidence can be processed in less than a minute, making it suitable for the short time intervals used in continuous certification.
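The normalise-then-evaluate idea might look like the sketch below: provider-specific resource descriptions are mapped into one ontology-style record, and a generic metric is evaluated against that record regardless of vendor. The field names, providers, and the metric are illustrative assumptions, not the paper's ontology or the EUCS/CCMv4 metrics.

```python
# Hypothetical sketch of "semantic evidence": vendor-specific resource
# descriptions are normalised into a common form, then a vendor-independent
# metric runs against that form. All keys and the metric are invented.

def to_semantic_evidence(provider, raw):
    """Map a vendor-specific resource description to a common record."""
    if provider == "aws":
        return {"type": "ObjectStorage",
                "at_rest_encryption": raw.get("SSEAlgorithm") is not None}
    if provider == "azure":
        return {"type": "ObjectStorage",
                "at_rest_encryption": raw.get("encryption", {}).get("enabled", False)}
    raise ValueError("unknown provider")

def metric_at_rest_encryption(evidence):
    """Generic metric: object storage must be encrypted at rest."""
    return evidence["type"] == "ObjectStorage" and evidence["at_rest_encryption"]

aws_ok = metric_at_rest_encryption(
    to_semantic_evidence("aws", {"SSEAlgorithm": "AES256"}))
azure_bad = metric_at_rest_encryption(
    to_semantic_evidence("azure", {"encryption": {"enabled": False}}))
```

The same metric runs unchanged for both providers, which is what makes metrics reusable across certification schemes.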
{"title":"A Semantic Evidence-based Approach to Continuous Cloud Service Certification","authors":"Christian Banse, Immanuel Kunz, Nico Haas, Angelika Schneider","doi":"10.1145/3555776.3577600","DOIUrl":"https://doi.org/10.1145/3555776.3577600","url":null,"abstract":"Continuous certification of cloud services requires a high degree of automation in collecting and evaluating evidences. Prior approaches to this topic are often specific to a cloud provider or a certain certification catalog. This makes it costly and complex to achieve conformance to multiple certification schemes and covering multi-cloud solutions. In this paper, we present a novel approach to continuous certification which is scheme- and vendor-independent. Leveraging an ontology of cloud resources and their security features, we generalize vendor- and scheme-specific terminology into a new model of so-called semantic evidence. In combination with generalized metrics that we elicited out of requirements from the EUCS and the CCMv4, we present a framework for the collection and assessment of such semantic evidence across multiple cloud providers. This allows to conduct continuous cloud certification while achieving re-usability of metrics and evidences in multiple certification schemes. The performance benchmark of the framework's prototype implementation shows that up to 200,000 evidences can be processed in less than a minute, making it suitable for short time intervals used in continuous certification.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79252397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ben Crulis, Barthélémy Serres, Cyril de Runz, G. Venturini
Current artificial neural networks are trained with parameters encoded as floating-point numbers that occupy a lot of memory at inference time. As deep learning models grow in size, it is becoming very difficult to train and use artificial neural networks on edge devices such as smartphones. Binary neural networks promise to reduce model size, increase inference speed and decrease energy consumption, and so allow the deployment of more powerful models on edge devices. However, binary neural networks remain difficult to train with backpropagation-based gradient descent. We adapt to binary neural networks two training algorithms that are considered promising alternatives to backpropagation but were designed for continuous neural networks. We provide comparative experimental results for image classification, including a backpropagation baseline, on the MNIST, Fashion MNIST and CIFAR-10 datasets in both continuous and binary settings. The results demonstrate that binary neural networks can not only be trained with alternatives to backpropagation, but can also achieve better performance and a higher tolerance to the presence or absence of batch normalization layers.
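The binarization step at the heart of binary networks can be sketched as follows: real-valued latent weights are mapped to {-1, +1} with a sign function for the forward pass, while a training algorithm (backpropagation or one of its alternatives) updates the latent weights. The tiny one-neuron example is an illustration, not the paper's architecture.

```python
# Minimal sketch of weight binarization in a binary neural network.
# Only the forward pass of a single neuron is shown; the compared
# training algorithms would update the latent (real-valued) weights.

def binarize(ws):
    """Map latent weights to {-1, +1} with the sign function."""
    return [1.0 if w >= 0 else -1.0 for w in ws]

def forward(x, latent_w, bias=0.0):
    wb = binarize(latent_w)              # 1-bit weights used at inference
    s = sum(xi * wi for xi, wi in zip(x, wb)) + bias
    return 1 if s >= 0 else 0            # binary activation

latent = [0.3, -0.7, 0.05]
y = forward([1.0, 1.0, 1.0], latent)     # binarized weights: [1, -1, 1]
```

Storing only the signs is what shrinks the model: one bit per weight instead of 32.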
{"title":"Are alternatives to backpropagation useful for training Binary Neural Networks? An experimental study in image classification","authors":"Ben Crulis, Barthélémy Serres, Cyril de Runz, G. Venturini","doi":"10.1145/3555776.3577674","DOIUrl":"https://doi.org/10.1145/3555776.3577674","url":null,"abstract":"Current artificial neural networks are trained with parameters encoded as floating point numbers that occupy lots of memory space at inference time. Due to the increase in size of deep learning models, it is becoming very difficult to consider training and using artificial neural networks on edge devices such as smartphones. Binary neural networks promise to reduce the size of deep neural network models as well as increasing inference speed while decreasing energy consumption and so allow the deployment of more powerful models on edge devices. However, binary neural networks are still proven to be difficult to train using the backpropagation based gradient descent scheme. We propose to adapt to binary neural networks two training algorithms considered as promising alternatives to backpropagation but for continuous neural networks. We provide experimental comparative results for image classification including the backpropagation baseline on the MNIST, Fashion MNIST and CIFAR-10 datasets in both continuous and binary settings. The results demonstrate that binary neural networks can not only be trained using alternative algorithms to backpropagation but can also be shown to lead better performance and a higher tolerance to the presence or absence of batch normalization layers.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77059636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A microservices-based architecture is a set of small components that communicate with each other using a programming-language-independent API [1]. It has been gaining popularity for more than a decade; one of its advantages is greater agility in software development and alignment with modern, agile development practices [2]. The article presents an experimental study. Two applications with the same business logic but different architectures were developed. Both applications were exercised with prepared test cases on the local computer of one of the authors and on the Microsoft Azure platform, and the results were collected and compared using the JMeter tool. In almost all cases, the monolithic architecture proved to be more efficient. The two architectures performed comparably only when requests were handled by the business logic layer for a relatively long time.
{"title":"Differences in performance, scalability, and cost of using microservice and monolithic architecture","authors":"Przemysław Jatkiewicz, Szymon Okrój","doi":"10.1145/3555776.3578725","DOIUrl":"https://doi.org/10.1145/3555776.3578725","url":null,"abstract":"A microservices-based architecture is a set of small components that communicate with each other using a programming language-independent API [1]. It has been gaining popularity for more than a decade. One of its advantages is greater agility in software development and following modern, agile software development practices [2]. The article presents an experimental study. Two applications with the same business logic and different architecture were developed. Both applications were tested using prepared test cases on the local computer of one of the authors and the Microsoft Azure platform. The results were collected and compared using the JMeter tool. In almost all cases, the monolithic architecture proved to be more efficient. The comparable performance of both architectures occurred when queries were handled by the business logic layer for a relatively long time.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74103822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Parra-Ullauri, Xunzheng Zhang, A. Bravalheri, R. Nejabati, D. Simeonidou
Federated learning (FL) is an emerging distributed machine learning technique in which multiple clients collaborate to learn a model under the management of a central server. An FL system depends on a set of initial conditions (i.e., hyperparameters) that affect its performance, but choosing good hyperparameters for the central server and the clients is a challenging problem. Hyperparameter tuning in FL often requires manual or automated searches for optimal values; a noticeable limitation is the high cost of evaluating server and client models, which makes tuning computationally expensive and time-consuming. We propose an implementation that integrates the FL framework Flower with the hyperparameter optimisation software Optuna for automated and efficient hyperparameter optimisation (HPO) in FL. This combination makes it possible to tune the hyperparameters of both the clients and the server online, aiming to find optimal values at runtime. We introduce the HPO factor, which describes the number of rounds during which HPO takes place, and the HPO rate, which defines how frequently the hyperparameters are updated and can be used for pruning. The HPO is managed by the FL server, which updates the clients' hyperparameters at the HPO rate using state-of-the-art optimisation algorithms provided by Optuna. We tested our approach by updating multiple client models simultaneously on popular image recognition datasets, with promising results compared to the baselines.
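The HPO-factor/HPO-rate mechanics can be illustrated with a toy loop: hyperparameters are re-suggested every `hpo_rate` rounds during the first `hpo_factor` rounds and frozen afterwards. A simple random search stands in for Optuna, and a quadratic stands in for the aggregated FL loss; all names and values are assumptions for illustration.

```python
# Toy stand-in for server-managed online HPO in FL. The server would call
# an Optuna study's suggest/report API where the comments indicate; here a
# seeded random search and a synthetic loss keep the sketch self-contained.

import random

def run_fl(total_rounds, hpo_factor, hpo_rate, seed=0):
    rng = random.Random(seed)
    best_lr, best_loss = None, float("inf")
    lr = 0.5                                  # initial hyperparameter
    history = []
    for rnd in range(total_rounds):
        if rnd < hpo_factor and rnd % hpo_rate == 0:
            lr = rng.uniform(0.0, 1.0)        # Optuna would suggest here
        loss = (lr - 0.3) ** 2                # stand-in for aggregated FL loss
        if loss < best_loss:
            best_lr, best_loss = lr, loss     # Optuna would record the trial
        history.append((rnd, lr))
    return best_lr, history

best_lr, history = run_fl(total_rounds=10, hpo_factor=6, hpo_rate=2)
```

After round `hpo_factor` the hyperparameter stays fixed, so the remaining rounds train with the last suggested value.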
{"title":"Federated Hyperparameter Optimisation with Flower and Optuna","authors":"J. Parra-Ullauri, Xunzheng Zhang, A. Bravalheri, R. Nejabati, D. Simeonidou","doi":"10.1145/3555776.3577847","DOIUrl":"https://doi.org/10.1145/3555776.3577847","url":null,"abstract":"Federated learning (FL) is an emerging distributed machine learning technique in which multiple clients collaborate to learn a model under the management of a central server. An FL system depends on a set of initial conditions (i.e., hyperparameters) that affect the system's performance. However, defining a good choice of hyperparameters for the central server and clients is a challenging problem. Hyperparameter tuning in FL often requires manual or automated searches to find optimal values. Nonetheless, a noticeable limitation is the high cost of algorithm evaluation for server and client models, making the tuning process computationally expensive and time-consuming. We propose an implementation based on integrating the FL framework Flower, and the prime optimisation software Optuna for automated and efficient hyperparameter optimisation (HPO) in FL. Through this combination, it is possible to tune hyperparameters in both clients and server online, aiming to find the optimal values at runtime. We introduce the HPO factor to describe the number of rounds that the HPO will take place, and the HPO rate that defines the frequency for updating the hyperparameters and can be used for pruning. The HPO is managed by the FL server which updates clients' hyperparameters, with an HPO rate, using state-of-the-art optimisation algorithms enabled by Optuna. We tested our approach by updating multiple client models simultaneously in popular image recognition datasets which produced promising results compared to baselines.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72646118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Generic Integral Tunnel Design (GITO) contains generic models for the tunnel control systems of Rijkswaterstaat, part of the Dutch Ministry of Infrastructure and Water Management. Formal verification of these models advances the safety and reliability of GITO-derived tunnel control systems. This paper presents the first known large-scale formalisation of tunnel control systems, which transforms GITO models into the formal specification language mCRL2. The transformation is applied to two sub-systems of the GITO to analyse the correctness of the supplied models. This formal analysis reveals several deficiencies in the specifications and faults in the existing models, and verified solutions are proposed. Some of the presented faults even originate in the legally required standards.
{"title":"A formal analysis of Dutch Generic Integral Tunnel Design models","authors":"Kevin H. J. Jilissen, P. Dieleman, J. F. Groote","doi":"10.1145/3555776.3577786","DOIUrl":"https://doi.org/10.1145/3555776.3577786","url":null,"abstract":"The Generic Integral Tunnel Design (GITO) contains generic models for the tunnel control systems of Rijkswaterstaat, part of the Dutch Ministry of Infrastructure and Water Management. A formal verification of these models advances the safety and reliability of GITO derived tunnel control systems. In this paper, the first known large-scale formalisation of tunnel control systems is presented which transforms GITO models to the formal specification language mCRL2. This transformation is applied to two sub-systems of the GITO to analyse the correctness of the supplied models. In this formal analysis, several deficiencies in the specifications and faults in the existing models are revealed and verified solutions are proposed. Some of the presented faults even find their origin in the legally required standards.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74734020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hosting popular Meetup events is one of the primary objectives of Meetup organizers. This paper explores the possibility of inviting a few key influential members to attend Meetup events, who may in turn influence their followers to attend and thus boost the events' popularity. Importantly, our pilot study reveals that the topics of Meetup events play a key role in the effectiveness of influential members. Leveraging this observation, we develop the Topic Aware Influencer Detection (TAID) heuristic, which recommends (i) the top-k influential members Ik and (ii) the top-b influence badges Rb based on the topical interest of a Meetup group, indicating that Ik will be most effective in influencing Meetup members to attend events hosted on the topics in Rb. The TAID heuristic comprises two major blocks: (a) influence propagation graph construction and (b) recommendation generation. Rigorous evaluation of TAID on 1447 Meetup groups with three different topics reveals that TAID comfortably outperforms the baselines, influencing on average 15% more Meetup members.
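A toy version of the two TAID stages could look like the sketch below: build per-topic influence scores from attendance records, then recommend the top-k members for a target topic. The data model and scoring are invented for illustration and are much simpler than TAID's influence propagation graph.

```python
# Hypothetical sketch of TAID's two blocks: (a) aggregate influence
# evidence per (member, topic), (b) rank and recommend top-k for a topic.

from collections import defaultdict

def build_influence(events):
    """events: list of (member, topic, followers_attended) records."""
    score = defaultdict(int)
    for member, topic, followers in events:
        score[(member, topic)] += followers
    return score

def recommend(score, topic, k):
    """Top-k members for `topic`, highest influence first, ties by name."""
    ranked = sorted(((m, s) for (m, t), s in score.items() if t == topic),
                    key=lambda p: (-p[1], p[0]))
    return [m for m, _ in ranked[:k]]

score = build_influence([("alice", "tech", 12), ("bob", "tech", 7),
                         ("alice", "tech", 3), ("carol", "music", 20)])
top = recommend(score, "tech", k=2)   # ["alice", "bob"]
```

The topic filter is what makes the recommendation topic-aware: carol's influence in "music" does not help a "tech" event.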
{"title":"Topic Aware Influential Member Detection in Meetup","authors":"Arpan Dam, Surya Kumar, Debjyoti Bhattacharjee, Sayan D. Pathak, Bivas Mitra","doi":"10.1145/3555776.3577684","DOIUrl":"https://doi.org/10.1145/3555776.3577684","url":null,"abstract":"Hosting popular Meetup events is one of the primary objectives of the Meetup organizers. This paper explores the possibility of inviting a few key influential members to attend Meetup events, who may further influence their followers to attend and boost the popularity of those Meetup events. Importantly, our pilot study reveals that topics of the Meetup events play a key role behind the effectiveness of the influential members. Leveraging this observation, in this paper, we develop Topic Aware Influencer Detection (TAID) heuristics, which recommends (i) top-k influential members Ik, and (ii) top-b influence badges Rb based on the topical interest of a Meetup group. This indicates that Ik. will be most effective in influencing the Meetup members to attend the events hosted on topic Rb. TAID heuristics contains two major blocks (a) influence propagation graph construction, and (b) recommendation generation. Rigorous evaluation of TAID on 1447 Meetup groups with three different topics reveals that TAID comfortably outperforms the baselines by influencing (on average) 15% more Meetup members.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84482779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ademar França de Sousa Neto, F. Ramos, D. Albuquerque, Emanuel Dantas, M. Perkusich, H. Almeida, A. Perkusich
Agile Software Development (ASD) manages risks implicitly through, for example, its short development cycles (i.e., iterations). The absence of explicit risk management activities in ASD can be problematic, since this approach cannot handle all types of risks, may itself cause risks (e.g., technical debt), and does not promote knowledge reuse throughout an organization. Thus, there is a need to bring discipline to agile risk management. This study focuses on bringing such discipline to organizations that run multiple projects developing software products with ASD, specifically the Scrum framework, the most popular way of adopting ASD. For this purpose, we developed a novel solution articulated in partnership with an industry partner: a process that complements the Scrum framework with a recommender system that recommends risks and response plans for a target project, given the risks registered for similar projects in an organization's risk memory (i.e., database). We evaluated the feasibility of the proposed recommender system using pre-collected datasets from 17 projects from our industry partner. Since we used the KNN algorithm, we focused on finding the best configuration of k (i.e., the number of neighbors) and the similarity measure. The best configuration had k = 6 (i.e., six neighbors) and used the Manhattan similarity measure, achieving precision = 45%, recall = 90%, and F1-score = 58%. The results show that the proposed recommender system can assist Scrum Teams in identifying risks and response plans and is promising for aiding decision-making in Scrum-based projects. Thus, we conclude that the proposed recommender-system-based risk management process is promising for helping Scrum Teams address risks more efficiently.
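The KNN step with the reported configuration (k neighbors under Manhattan distance) can be sketched as follows: find the most similar past projects and pool their registered risks. The project feature vectors and risk labels are invented for illustration; the paper's actual features and risk memory are richer.

```python
# Sketch of the KNN-based risk recommendation: rank past projects by
# Manhattan distance to the target project and return the union of risks
# registered for the k nearest ones. Data is invented for illustration.

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def recommend_risks(target, projects, k):
    """projects: list of (feature_vector, set_of_registered_risks)."""
    ranked = sorted(projects, key=lambda p: manhattan(target, p[0]))
    risks = set()
    for _, project_risks in ranked[:k]:
        risks |= project_risks
    return risks

history = [([1, 0, 3], {"scope creep"}),        # distance 1 from target
           ([1, 1, 3], {"technical debt"}),     # distance 2
           ([9, 9, 9], {"vendor lock-in"})]     # distance 24
risks = recommend_risks([1, 0, 2], history, k=2)
# the two nearest projects contribute: scope creep, technical debt
```

With the paper's configuration one would set k=6 over the 17-project risk memory; the distant project's risks are correctly excluded here.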
{"title":"Towards a Recommender System-based Process for Managing Risks in Scrum Projects","authors":"Ademar França de Sousa Neto, F. Ramos, D. Albuquerque, Emanuel Dantas, M. Perkusich, H. Almeida, A. Perkusich","doi":"10.1145/3555776.3577748","DOIUrl":"https://doi.org/10.1145/3555776.3577748","url":null,"abstract":"Agile Software Development (ASD) implicitly manages risks through, for example, its short development cycles (i.e., iterations). The absence of explicit risk management activities in ASD might be problematic since this approach cannot handle all types of risks, might cause risks (e.g., technical debt), and does not promote knowledge reuse throughout an organization. Thus, there is a need to bring discipline to agile risk management. This study focuses on bringing such discipline to organizations that conduct multiple projects to develop software products using ASD, specifically, the Scrum framework, which is the most popular way of adopting ASD. For this purpose, we developed a novel solution that was articulated in partnership with an industry partner. It is a process to complement the Scrum framework to use a recommender system that recommends risks and response plans for a target project, given the risks registered for similar projects in an organization's risk memory (i.e., database). We evaluated the feasibility of the proposed recommender system solution using pre-collected datasets from 17 projects from our industry partner. Since we used the KNN algorithm, we focused on finding the best configuration of k (i.e., the number of neighbors) and the similarity measure. As a result, the configuration with the best results had k = 6 (i.e., six neighbors) and used the Manhattan similarity measure, achieving precision = 45%; recall = 90%; and F1-score = 58%. The results show that the proposed recommender system can assist Scrum Teams in identifying risks and response plans, and it is promising to aid decision-making in Scrum-based projects. Thus, we concluded that our proposed recommender system-based risk management process is promising for helping Scrum Teams address risks more efficiently.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73470263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-power and Lossy Networks (LLNs) are used in numerous Internet of Things (IoT) applications. IEEE has specified the Time-Slotted Channel Hopping (TSCH) Medium Access Control (MAC) to target the needs of the Industrial IoT: TSCH supports deterministic communication over unreliable wireless environments and balances energy, bandwidth and latency. Furthermore, the Minimal 6TiSCH configuration defines the Routing Protocol for Low-power and Lossy Networks (RPL) with Objective Function 0 (OF0). Factors inherent to RPL operation, such as the joining procedure, parent switching, and trickle timer fluctuations, may introduce overhead and overload the network with control messages. Together, application data and RPL control traffic may create an unpredicted networking bottleneck, potentially causing network instability; hence, stable RPL operation contributes to healthy TSCH operation. In this paper, we explore TSCH MAC and RPL metrics to identify factors that lead to performance degradation, and we specify indicators that anticipate network disorders, towards increasing Industrial IoT reliability. A TSCH Scheduling Function might employ the identified aspects to foresee disturbances, proactively allocate the appropriate number of cells, and avoid network congestion.
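One simple indicator of the kind described above might track the share of RPL control traffic in total transmissions per observation window and flag windows where it exceeds a threshold, so a scheduling function could allocate extra cells before congestion builds. The counters, threshold, and window structure below are illustrative assumptions, not the paper's indicators.

```python
# Illustrative sketch of a control-overhead indicator: per observation
# window, compute the fraction of RPL control messages (DIO/DAO/DIS) among
# all transmissions and flag windows above an invented threshold.

def control_overhead(window):
    """window: per-type packet counts for one observation period."""
    control = window["dio"] + window["dao"] + window["dis"]
    total = control + window["app"]
    return control / total if total else 0.0

def flag_windows(windows, threshold=0.3):
    """Indices of windows whose control share exceeds the threshold."""
    return [i for i, w in enumerate(windows)
            if control_overhead(w) > threshold]

traffic = [{"dio": 2, "dao": 1, "dis": 0, "app": 27},   # 10% control
           {"dio": 9, "dao": 6, "dis": 3, "app": 12}]   # 60% control
alerts = flag_windows(traffic)   # only the second window is flagged
```

A flagged window would be the cue for the scheduling function to provision additional TSCH cells proactively.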
{"title":"Towards the support of Industrial IoT applications with TSCH","authors":"Ivanilson F. Vieira Júnior, M. Curado, J. Granjal","doi":"10.1145/3555776.3577752","DOIUrl":"https://doi.org/10.1145/3555776.3577752","url":null,"abstract":"Low-power and Lossy Networks (LLN) are utilised for numerous Internet of Things (IoT) applications. IEEE has specified the Time-slotted Channel Hopping (TSCH) Media Access Control (MAC) to target the needs of Industrial IoT. TSCH supports deterministic communications over unreliable wireless environments and balances energy, bandwidth and latency. Furthermore, the Minimal 6TiSCH configuration defined Routing Protocol for Low power and Lossy networks (RPL) with the Objective Function 0 (OF0). Inherent factors from RPL operation, such as joining procedure, parent switching, and trickle timer fluctuations, may introduce overhead and overload the network with control messages. The application and RPL control data may lead to an unpredicted networking bottleneck, potentially causing network instability. Hence, a stable RPL operation contributes to a healthy TSCH operation. In this paper, we explore TSCH MAC and RPL metrics to identify factors that lead to performance degradation and specify indicators to anticipate network disorders towards increasing Industrial IoT reliability. A TSCH Schedule Function might employ the identified aspects to foresee disturbances, proactively allocate the proper amount of cells, and avoid networking congestion.","PeriodicalId":42971,"journal":{"name":"Applied Computing Review","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87524582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}