Distributed AgriFood Supply Chains
Pub Date: 2024-06-27 | DOI: 10.1007/s10922-024-09839-3
Hélio Pesanhane, Wesley R. Bezerra, Fernando Koch, Carlos Westphall
In AgriFood scenarios, where farmers must ensure that their produce is safely produced, transported, and stored, they rely on a network of IoT devices to monitor conditions such as temperature and humidity throughout the supply chain. Blockchain is portrayed as a technology capable of solving the key issues of this supply chain: transparency, traceability, data tampering, and accountability. Nonetheless, managing such a large-scale IoT environment with current security, authentication, and access control solutions remains challenging. To address these issues, we introduce an architecture in which IoT devices record data and store them in the participant’s cloud after validation by endorsing peers following an attribute-based access control (ABAC) policy. This policy allows IoT device owners to specify the physical quantities, value ranges, time periods, and types of data that each device is permitted to measure and transmit. Authorized users can access these data under the ABAC policy contract. Our solution demonstrates efficiency: 50% of IoT data write requests complete in less than 0.14 s with the Solo ordering service and in 2.5 s with the Raft ordering service. Data retrieval shows an average latency between 0.34 and 0.57 s and a throughput ranging from 124.8 down to 9.9 Transactions Per Second (TPS) for data sizes between 8 and 512 kilobytes. This architecture not only enhances the management of IoT environments in the AgriFood supply chain but also ensures data privacy and security.
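To illustrate the ABAC validation step the abstract describes, here is a minimal sketch of how an endorsing peer might check a device write against an owner-defined policy. The device name, value bounds, and time window are hypothetical, not taken from the paper.

```python
from datetime import datetime, time

# Hypothetical ABAC policy entry: the quantity a device may report,
# the accepted value range, and the time window in which writes are allowed.
POLICY = {
    "sensor-42": {
        "quantity": "temperature",      # physical quantity the device may measure
        "value_range": (-10.0, 45.0),   # illustrative cold-chain bounds, in Celsius
        "window": (time(0, 0), time(23, 59)),
    }
}

def endorse_write(device_id: str, quantity: str, value: float, ts: datetime) -> bool:
    """Mimics an endorsing peer validating an IoT write against the policy."""
    rule = POLICY.get(device_id)
    if rule is None or rule["quantity"] != quantity:
        return False                    # unknown device or unauthorized quantity
    lo, hi = rule["value_range"]
    start, end = rule["window"]
    return lo <= value <= hi and start <= ts.time() <= end

print(endorse_write("sensor-42", "temperature", 4.2, datetime(2024, 6, 27, 10, 30)))  # True
print(endorse_write("sensor-42", "humidity", 55.0, datetime(2024, 6, 27, 10, 30)))    # False
```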
{"title":"Distributed AgriFood Supply Chains","authors":"Hélio Pesanhane, Wesley R. Bezerra, Fernando Koch, Carlos Westphall","doi":"10.1007/s10922-024-09839-3","DOIUrl":"https://doi.org/10.1007/s10922-024-09839-3","url":null,"abstract":"<p>In Agrifood scenarios, where farmers need to ensure that their produce is safely produced, transported, and stored, they rely on a network of IoT devices to monitor conditions such as temperature and humidity throughout the supply chain. However, managing this large-scale IoT environment poses significant challenges, including transparency, traceability, data tampering, and accountability. Blockchain is portrayed as a technology capable of solving the problems of transparency, traceability, data tampering, and accountability, which are key issues in the AgriFood supply chain. Nonetheless, there are challenges related to managing a large-scale IoT environment using the current security, authentication, and access control solutions. To address these issues, we introduce an architecture in which IoT devices record data and store them in the participant’s cloud after validation by endorsing peers following an attribute-based access control (ABAC) policy. This policy allows IoT device owners to specify the physical quantities, value ranges, time periods, and types of data that each device is permitted to measure and transmit. Authorized users can access this data under the ABAC policy contract. Our solution demonstrates efficiency, with 50% of IoT data write requests completed in less than 0.14 s using solo ordering service and 2.5 s with raft ordering service. Data retrieval shows an average latency between 0.34 and 0.57 s and a throughput ranging from 124.8 to 9.9 Transactions Per Second (TPS) for data sizes between 8 and 512 kilobytes. This architecture not only enhances the management of IoT environments in the AgriFood supply chain but also ensures data privacy and security.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing BERT-Based Language Model for Multi-label Vulnerability Detection of Smart Contract in Blockchain
Pub Date: 2024-06-24 | DOI: 10.1007/s10922-024-09832-w
Van Tong, Cuong Dao, Hai-Anh Tran, Truong X. Tran, Sami Souihi
Smart contracts are decentralized applications that play a pivotal role in blockchain-based systems. Because they are written in error-prone programming languages, smart contracts are affected by many vulnerabilities (e.g., time dependence, outdated versions), which can result in substantial economic loss within the blockchain ecosystem. Many detection tools, such as Slither and Mythril, have therefore been designed to find vulnerabilities in smart contracts. However, these tools require long processing times and cannot achieve good accuracy on today's complex smart contracts. Consequently, many studies have shifted towards Deep Learning (DL) techniques that analyze bytecode to determine vulnerabilities. These mechanisms reveal three main limitations. First, they address multi-class problems, assuming that a given smart contract contains only a single vulnerability, whereas a contract can contain more than one. Second, they suffer from ineffective word embedding on large input sequences. Third, their learning models are forced to choose one of the pre-defined labels even when they cannot decide accurately, leading to misclassifications. In this paper, we therefore propose a multi-label vulnerability classification mechanism using a language model. To deal with ineffective word embedding, the proposed mechanism takes into account not only the implicit features derived from language models (e.g., SecBERT) but also auxiliary features extracted from other word-embedding techniques (e.g., TF-IDF). Besides, a trustworthy neural network model is proposed to reduce the misclassification rate: an additional neuron is added to the output of the model to indicate whether the model is able to make an accurate decision. The experimental results illustrate that the trustworthy model outperforms benchmarks (e.g., binary relevance, label powerset, classifier chain), achieving up to approximately 98% F1-score while requiring a low execution time of 26 ms.
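The extra-neuron idea lends itself to a compact sketch. The following assumes a generic encoder output of BERT-like dimension 768 and an illustrative 0.5 threshold; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class TrustworthyMultiLabel(nn.Module):
    """Multi-label head with one extra neuron signalling decision confidence.

    A minimal sketch: `num_labels` sigmoid outputs mark vulnerabilities, and
    the extra output, when high, flags that the model cannot decide reliably.
    """
    def __init__(self, embed_dim: int, num_labels: int):
        super().__init__()
        self.head = nn.Linear(embed_dim, num_labels + 1)  # +1 "cannot decide" neuron

    def forward(self, features: torch.Tensor):
        logits = self.head(features)
        label_probs = torch.sigmoid(logits[:, :-1])  # per-vulnerability probabilities
        abstain = torch.sigmoid(logits[:, -1])       # probability that no decision is safe
        return label_probs, abstain

model = TrustworthyMultiLabel(embed_dim=768, num_labels=10)  # 768 matches BERT-size encoders
probs, abstain = model(torch.randn(4, 768))
# Suppress label predictions for contracts on which the model abstains.
decisions = (probs > 0.5) & (abstain < 0.5).unsqueeze(1)
```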
{"title":"Enhancing BERT-Based Language Model for Multi-label Vulnerability Detection of Smart Contract in Blockchain","authors":"Van Tong, Cuong Dao, Hai-Anh Tran, Truong X. Tran, Sami Souihi","doi":"10.1007/s10922-024-09832-w","DOIUrl":"https://doi.org/10.1007/s10922-024-09832-w","url":null,"abstract":"<p>Smart contracts are decentralized applications that hold a pivotal role in blockchain-based systems. Smart contracts are composed of error-prone programming languages, so it is affected by many vulnerabilities (e.g., time dependence, outdated version, etc.), which can result in a substantial economic loss within the blockchain ecosystem. Therefore, many vulnerability detection tools are designed to detect the vulnerabilities in smart contracts such as Slither, Mythrill and so forth. However, these tools require high processing time and cannot achieve good accuracy with complex smart contracts nowadays. Consequently, many studies have shifted towards using Deep Learning (DL) techniques, which consider bytecode to determine vulnerabilities in smart contracts. However, these mechanisms reveal three main limitations. First, these mechanisms focus on multi-class problems, assuming that a given smart contract contains only a single vulnerability while the smart contract can contain more than one vulnerability. Second, these approaches encounter ineffective word embedding with large input sequences. Third, the learning model in these mechanisms is forced to classify into one of pre-defined labels even when it cannot make decisions accurately, leading to misclassifications. Therefore, in this paper, we propose a multi-label vulnerability classification mechanism using a language model. To deal with the ineffective word embedding, the proposed mechanism not only takes into account the implicit features derived from the language models (e.g., SecBERT, etc.) but also auxiliary features extracted from other word embedding techniques (e.g., TF-IDF, etc.). Besides, a trustworthy neural network model is proposed to reduce the misclassification rate of vulnerability classification. In detail, an additional neuron is added to the output of the model to indicate whether the model is able to make decisions accurately or not. The experimental results illustrate that the trustworthy model outperforms benchmarks (e.g., binary relevance, label powerset, classifier chain, etc.), achieving up to approximately 98% f1-score while requiring low execution time with 26 ms.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RIS-aided Cooperative FD-SWIPT-NOMA Performance Over Nakagami-m Channels
Pub Date: 2024-06-19 | DOI: 10.1007/s10922-024-09838-4
Wilson de Souza, Taufik Abrão
In this work, we investigate a Reconfigurable Intelligent Surface (RIS)-aided Full-Duplex (FD) Simultaneous Wireless Information and Power Transfer (SWIPT) Cooperative Non-Orthogonal Multiple Access (C-NOMA) system consisting of two paired devices. The device with better channel conditions ($D_1$) is designated to act as an FD relay to assist the device with poor channel conditions ($D_2$). We assume that $D_1$ does not use its own battery energy to cooperate but harvests energy by utilizing SWIPT. A practical non-linear Energy Harvesting (EH) model is considered. We first approximate the harvested power as a Gamma Random Variable (RV) via the Moment Matching (MM) technique. This allows us to derive analytical expressions for Outage Probability (OP) and Ergodic Rate (ER) that are simple to compute yet accurate for a wide range of system parameters, such as EH coefficients and residual Self-Interference (SI) levels, as extensively validated by numerical simulations. The OP and ER expressions reveal how important it is to mitigate the SI in the FD relay mode, since for reasonable values of the residual SI coefficient its detrimental effect on system performance is extremely noticeable. Also, numerical results reveal that increasing the number of RIS elements benefits the cooperative system much more than the non-cooperative one.
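For reference, the moment-matching step of fitting a Gamma RV amounts to equating first and second moments; this is the textbook form, while the paper's full derivation over Nakagami-m channels is more involved.

```latex
% Match the mean and variance of the harvested power X
% to those of a Gamma(k, \theta) random variable:
\mathbb{E}[X] = k\theta, \qquad \operatorname{Var}(X) = k\theta^{2}
\;\Longrightarrow\;
k = \frac{\mathbb{E}[X]^{2}}{\operatorname{Var}(X)}, \qquad
\theta = \frac{\operatorname{Var}(X)}{\mathbb{E}[X]}
```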
Efficient Flow Table Caching Architecture and Replacement Policy for SDN Switches
Pub Date: 2024-06-18 | DOI: 10.1007/s10922-024-09824-w
Xianfeng Li, Haoran Sun, Yan Huang
Software-defined networks (SDN) rely on flow tables to forward packets from different flows with different policies. To speed up packet forwarding, the rules in the flow table should reside in the forwarding plane as much as possible, reducing the chances of consulting the SDN controller, which is a slow process. The rules are usually cached in the forwarding plane in a Ternary Content Addressable Memory (TCAM) device. However, a TCAM has limited capacity because it is expensive and power-hungry. As a result, judicious caching of a subset of flow rules in TCAM is needed. In this paper, we address two related issues that affect caching efficiency: which rules to cache and which rules to replace. For the first issue, caching an active rule hit by a flow may also require caching inactive rules due to rule dependency. We propose a two-stage caching architecture called CRAFT, which reduces inactive rules in the cache by cutting down long dependency chains and by partitioning rules with many dependent rules into non-overlapping sub-rules. For the second issue, unawareness of flow traffic characteristics may evict heavy hitters instead of mice flows. We propose RRTC to address this issue: a rule replacement policy that takes real-time network traffic characteristics into consideration. By recognizing heavy hitters and protecting their matching rules in TCAM, RRTC achieves a better cache hit ratio than the least recently used (LRU) policy. Simulation results show that our combined rule caching and replacement framework outperforms previous work considerably.
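A minimal sketch of the replacement idea: plain LRU, except that rules whose observed hit counts mark them as heavy hitters are skipped at eviction time. The threshold and bookkeeping are assumptions for illustration, not RRTC's actual mechanism.

```python
from collections import OrderedDict

class HeavyHitterAwareCache:
    """Traffic-aware rule cache: LRU eviction that protects heavy-hitter rules."""

    def __init__(self, capacity: int, heavy_threshold: int = 100):
        self.capacity = capacity
        self.heavy_threshold = heavy_threshold   # hits marking a rule as a heavy hitter
        self.cache = OrderedDict()               # rule_id -> hit count, LRU order at front

    def lookup(self, rule_id: str) -> bool:
        if rule_id in self.cache:
            self.cache[rule_id] += 1
            self.cache.move_to_end(rule_id)      # refresh recency on a hit
            return True
        self._insert(rule_id)                    # miss: rule fetched from the controller
        return False

    def _insert(self, rule_id: str) -> None:
        if len(self.cache) >= self.capacity:
            # Evict the least recently used rule that is NOT a heavy hitter;
            # fall back to plain LRU if every cached rule is a heavy hitter.
            victim = next((r for r, hits in self.cache.items()
                           if hits < self.heavy_threshold), None)
            self.cache.pop(victim if victim is not None else next(iter(self.cache)))
        self.cache[rule_id] = 1
```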
{"title":"Efficient Flow Table Caching Architecture and Replacement Policy for SDN Switches","authors":"Xianfeng Li, Haoran Sun, Yan Huang","doi":"10.1007/s10922-024-09824-w","DOIUrl":"https://doi.org/10.1007/s10922-024-09824-w","url":null,"abstract":"<p>Software-defined networks (SDN) rely on flow tables to forward packets from different flows with different policies. To speed up packet forwarding, the rules in the flow table should reside in the forwarding plane as much as possible to reduce the chances of consulting the SDN controller, which is a slow process. The rules are usually cached in the forwarding plane with a Ternary Content Addressable Memory (TCAM) device. However, a TCAM has limited capacity, because it is expensive and power-hungry. As a result, wise caching of a subset of flow rules in TCAM is needed. In this paper, we address two related issues that affect caching efficiency: <i>rules to be cached</i> and <i>rules to be replaced</i>. For the first issue, caching an active rule hit by a flow may need to cache inactive rules due to rule dependency. We propose a two-stage caching architecture called CRAFT, which reduces inactive rules in cache by cutting down long dependent chains and by partitioning rules with massive dependent rules into non-overlapping sub-rules. For the second issue, unawareness of the flow traffic characteristics may evict heavy hitters instead of mice flows. We propose RRTC to address this issue, which is a rule replacement policy taking the real-time network traffic characteristics into consideration. By recognizing the heavy hitters and protecting their matching rules in TCAM, RRTC performs better than least recently used(LRU) policy in terms of cache hit ratio. Simulation results show that our combined rule caching and replacement framework outperforms previous work considerably.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multiobjective Metaheuristic-Based Container Consolidation Model for Cloud Application Performance Improvement
Pub Date: 2024-06-18 | DOI: 10.1007/s10922-024-09835-7
Vincent Bracke, José Santos, Tim Wauters, Filip De Turck, Bruno Volckaert
This work describes an approach to enhance container orchestration platforms with an autonomous and dynamic rescheduling system that aims to improve application service time by co-locating highly interdependent containers to reduce network delay. However, unreasonable container consolidation may lead to host CPU saturation, in turn impairing service time. The multiobjective approach proposed in this work therefore improves application service time by minimizing both inter-server network traffic and CPU throttling on overloaded servers. To this end, the Simulated Annealing combinatorial optimization heuristic is used, and its performance is compared against the optimal solution obtained by Mathematical Programming. Additionally, the impact of the proposed system is validated on a Kubernetes cluster hosting three concurrent applications under varying load scenarios. The proposed rescheduling system systematically (i) improves application service time (by up to 27.2% in our experiments) and (ii) surpasses the improvement reached by the Kubernetes descheduler.
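The simulated-annealing loop at the heart of such a rescheduler can be sketched generically as below. The cost function is assumed to be a weighted sum of inter-server traffic and a CPU-overload penalty, which may differ from the paper's exact objective; `neighbor` is assumed to return a new assignment (e.g., one container moved to another node).

```python
import math
import random

def anneal(assign, cost, neighbor, t0=1.0, cooling=0.995, steps=5000):
    """Generic simulated annealing over container-to-node assignments.

    `cost(assign)` should combine inter-server traffic with a CPU-overload
    penalty; lower is better. `neighbor(assign)` returns a perturbed copy.
    """
    current, best = assign, assign
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Accept downhill moves always, uphill moves with probability e^(-delta/t).
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling                     # geometric cooling schedule
    return best
```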
Benchmarking Large Language Models for Log Analysis, Security, and Interpretation
Pub Date: 2024-06-13 | DOI: 10.1007/s10922-024-09831-x
Egil Karlsen, Xiao Luo, Nur Zincir-Heywood, Malcolm Heywood
Large Language Models (LLMs) continue to demonstrate their utility through a variety of emergent capabilities in different fields. One area of cybersecurity that could benefit from effective language understanding is the analysis of log files. This work explores LLMs with different architectures (BERT, RoBERTa, DistilRoBERTa, GPT-2, and GPT-Neo), benchmarked for their capacity to analyze application and system log files for security. Specifically, 60 fine-tuned language models for log analysis are deployed and benchmarked. The resulting models demonstrate that they can perform log analysis effectively, with fine-tuning being particularly important for appropriate domain adaptation to specific log types. The best-performing fine-tuned sequence classification model (DistilRoBERTa) outperforms the current state of the art, with an average F1-score of 0.998 across six datasets from both web application and system log sources. To achieve this, we propose and implement a new experimentation pipeline (LLM4Sec) that leverages LLMs for log analysis experimentation, evaluation, and analysis.
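As a sketch of the model family being benchmarked, the snippet below loads distilroberta-base with a sequence-classification head and scores two sample log lines. The binary label setup and sample logs are illustrative, not the LLM4Sec pipeline itself.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed setup: binary normal/anomalous log-line classification; the head is
# randomly initialized here and would be fine-tuned on labeled log data.
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base", num_labels=2)

logs = [
    "Accepted password for admin from 10.0.0.5 port 22 ssh2",
    "Failed password for invalid user root from 203.0.113.7 port 22 ssh2",
]
batch = tokenizer(logs, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)  # per-line class probabilities
print(probs)
```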
{"title":"Benchmarking Large Language Models for Log Analysis, Security, and Interpretation","authors":"Egil Karlsen, Xiao Luo, Nur Zincir-Heywood, Malcolm Heywood","doi":"10.1007/s10922-024-09831-x","DOIUrl":"https://doi.org/10.1007/s10922-024-09831-x","url":null,"abstract":"<p>Large Language Models (LLM) continue to demonstrate their utility in a variety of emergent capabilities in different fields. An area that could benefit from effective language understanding in cybersecurity is the analysis of log files. This work explores LLMs with different architectures (BERT, RoBERTa, DistilRoBERTa, GPT-2, and GPT-Neo) that are benchmarked for their capacity to better analyze application and system log files for security. Specifically, 60 fine-tuned language models for log analysis are deployed and benchmarked. The resulting models demonstrate that they can be used to perform log analysis effectively with fine-tuning being particularly important for appropriate domain adaptation to specific log types. The best-performing fine-tuned sequence classification model (DistilRoBERTa) outperforms the current state-of-the-art; with an average F1-Score of 0.998 across six datasets from both web application and system log sources. To achieve this, we propose and implement a new experimentation pipeline (LLM4Sec) which leverages LLMs for log analysis experimentation, evaluation, and analysis.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Offline and Real-Time Policy-based Management for Virtualized Services: Conflict and Redundancy Detection, and Automated Resolution
Pub Date: 2024-06-12 | DOI: 10.1007/s10922-024-09830-y
Hanan Suwi, Nadjia Kara, Omar Abdel Wahab, Claes Edstrom, Yves Lemieux
Network Function Virtualization (NFV) is a technology that allows service providers to improve the cost efficiency of network service provisioning. This is accomplished by decoupling network functions from the physical environment within which they are deployed and converting them into software components that run on top of commodity hardware. Despite its importance, NFV encounters many challenges at the placement, resource management, and adaptation levels. For example, any placement strategy must minimize several factors, including hardware resource utilization, network bandwidth, and latency. Moreover, Virtual Network Functions (VNFs) should be continuously adjusted to keep up with changes occurring at both the data center and user levels. Over the past few years, several efforts have produced innovative placement, resource management, and readjustment policies. However, a problem arises when these policies exhibit conflicts and/or redundancies with one another, since policies are proposed by multiple sources (e.g., service providers, network administrators, NFV orchestrators, and customers). This constitutes a serious problem for the network service as a whole and has several negative impacts, such as Service-Level Agreement (SLA) violations and performance degradation. Besides, as conflicts may occur among a set of policies, pairwise detection is not adequate. In this paper, we tackle this problem by defining conflict and redundancy detection mechanisms and an automated resolution mechanism to identify and solve issues within and between NFV policies. Finally, we integrate a real-time detection component into our solution to provide continuous and comprehensive conflict and redundancy resolution as new policies are introduced. The experimental results show that the proposed policy detection and resolution tools rapidly identify, detect, and solve conflicts and redundancies among NFV policies, far faster than other frameworks. Furthermore, the results show that our solution is efficient even in scenarios with more than 2000 policies. Moreover, our detection mechanisms can detect and solve conflicts and redundancies for various types of policies, such as placement, scaling, and migration.
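A toy pairwise detector conveys the basic distinction between redundancy (same effect) and conflict (contradictory effects) on a shared scope and attribute. As the abstract notes, pairwise checks alone are not adequate for set-level conflicts, so treat this only as a baseline illustration with a hypothetical policy representation.

```python
from itertools import combinations

# Hypothetical policy tuples: (scope, attribute, operator, value).
policies = [
    ("vnf-A", "cpu_scale", "max", 8),
    ("vnf-A", "cpu_scale", "max", 4),      # same attribute, different limit -> conflict
    ("vnf-A", "placement", "host", "h1"),
    ("vnf-A", "placement", "host", "h1"),  # identical effect -> redundant
]

def detect(policies):
    """Flag pairs acting on the same (scope, attribute) as conflicts or redundancies."""
    issues = []
    for p, q in combinations(policies, 2):
        if p[:2] != q[:2]:
            continue                        # different scope or attribute: independent
        kind = "redundant" if p[2:] == q[2:] else "conflict"
        issues.append((kind, p, q))
    return issues

for kind, p, q in detect(policies):
    print(kind, p, q)
```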
{"title":"Offline and Real-Time Policy-based Management for Virtualized Services: Conflict and Redundancy Detection, and Automated Resolution","authors":"Hanan Suwi, Nadjia Kara, Omar Abdel Wahab, Claes Edstrom, Yves Lemieux","doi":"10.1007/s10922-024-09830-y","DOIUrl":"https://doi.org/10.1007/s10922-024-09830-y","url":null,"abstract":"<p>Network Function Virtualization (NFV) is a new technology that allows service providers to improve the cost efficiency of network service provisioning. This is accomplished by decoupling the network functions from the physical environment within which they are deployed and converting them into software components that run on top of commodity hardware. Despite its importance, NFV encounters many challenges at the placement, resource management, and adaptation levels. For example, any placement strategy must take into account the minimization of several factors, including those of hardware resource utilization, network bandwidth and latency. Moreover, Virtual Network Functions (VNFs) should continuously be adjusted to keep up with the changes that occur at both the data center and user levels. Over the past few years several efforts have been made to come up with innovative placement, resource management, and readjustment policies. However, a problem arises when these policies exhibit some conflicts and/or redundancies with one another, since the policies are proposed by multiple sources (e.g., service providers, network administrators, NFV-orchestrators and customers). This constitutes a serious problem for the network service as a whole and has several negative impacts such as Service-Level Agreement (SLA) violations and performance degradation. Besides, as conflicts may occur among a set of policies, pairwise detection will not adequate. In this paper, we tackle this problem by defining a conflict and redundancy detection and an automated resolution mechanisms to identify and solve the issues within and between NFV policies. Finally, we integrate a real-time detection component into our solution to provide continuous and comprehensive conflict and redundancy resolution, as new policies are introduced. The experimental results show that the proposed policy detection and resolution tools could rapidly identify, detect and solve conflicts and redundancies among NFV policies and extremely fast than other frameworks. Furthermore, the results show that our solution is efficient even in scenarios that consist of more than 2000 policies. Moreover, our proposed detection mechanisms can detect and solve the conflicts and redundancies for various types of policies such as placement, scaling and migration.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Cloud Gaming QoE Estimation by Stacking Learning
Pub Date: 2024-06-12 | DOI: 10.1007/s10922-024-09836-6
Daniel Soares, Marcos Carvalho, Daniel F. Macedo
The cloud gaming sector is burgeoning, with estimated annual growth of more than 50% and a market value poised to reach $22 billion by 2030; notably, GeForce Now, launched in 2020, reached 20 million users by August 2022. Cloud gaming presents cost-effective advantages for users and developers by eliminating hardware investments and game purchases, reducing development costs, and optimizing distribution efforts. However, it introduces challenges for network operators and providers, demanding low latency and substantial computational power. User satisfaction in cloud gaming depends on various factors, including game content, network type, and context, all of which shape Quality of Experience (QoE). This study extends prior research, merging datasets from wired and mobile cloud gaming services to create an Expanded stacking model. All data gathering involves actual users engaging in gameplay within a realistic test environment, employing protocols akin to those used by the GeForce Now cloud gaming platform. Results indicate significant improvements in QoE estimation across different gaming contexts, highlighting the feasibility of a versatile predictive model for cloud gaming experiences that builds upon previous stacking learning approaches.
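A generic stacking setup of the kind the abstract refers to can be assembled with scikit-learn. The base learners, meta-learner, and synthetic data below are placeholders, not the study's feature set or model selection.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Synthetic stand-in for QoE data: features such as latency, jitter, loss,
# and bitrate; the target is a MOS-like QoE score (all illustrative).
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0))],
    final_estimator=Ridge())            # meta-learner combines base predictions
stack.fit(X_tr, y_tr)
print(stack.score(X_te, y_te))          # R^2 of the stacked QoE estimator
```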
{"title":"Enhancing Cloud Gaming QoE Estimation by Stacking Learning","authors":"Daniel Soares, Marcos Carvalho, Daniel F. Macedo","doi":"10.1007/s10922-024-09836-6","DOIUrl":"https://doi.org/10.1007/s10922-024-09836-6","url":null,"abstract":"<p>The Cloud Gaming sector is burgeoning with an estimated annual growth of more than 50%, poised to reach a market value of $22 billion by 2030, and notably, GeForce Now, launched in 2020, reached 20 million users by August 2022. Cloud gaming presents cost-effective advantages for users and developers by eliminating hardware investments and game purchases, reducing development costs, and optimizing distribution efforts. However, it introduces challenges for network operators and providers, demanding low latency and substantial computational power. User satisfaction in cloud gaming depends on various factors, including game content, network type, and context, all shaping Quality of Experience. This study extends prior research, merging datasets from wired and mobile cloud gaming services to create an Expanded stacking model. All data gathering involves actual users engaging in gameplay within a realistic test environment, employing protocols akin to those utilized by the Geforce Now cloud gaming platform. Results indicate significant improvements in QoE estimation across different gaming contexts, highlighting the feasibility of a versatile predictive model for cloud gaming experiences, building upon previous stacking learning approaches.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141526890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI-Based Intrusion Detection for a Secure Internet of Things (IoT)
Pub Date: 2024-06-09 | DOI: 10.1007/s10922-024-09829-5
Reham Aljohani, Anas Bushnag, Ali Alessa
The increasing use of intelligent devices connected to the internet has contributed to the introduction of a new paradigm: the Internet of Things (IoT). The IoT is a set of devices connected via the internet that cooperate to achieve a specific goal. Smart cities, smart airports, smart transportation, smart homes, and many applications in the medical and educational fields all use the IoT. However, one major challenge is detecting malicious intrusions in IoT networks, the task of Intrusion Detection Systems (IDSs). This work proposes an effective model for detecting malicious IoT activities using machine learning techniques. The ToN-IoT dataset, which consists of seven connected devices (subdatasets), is used to construct an IoT network. The proposed model is a multilevel classification model: the first level distinguishes between attack and normal network activities, and the second level classifies the types of detected attacks. The experimental results prove the effectiveness of the proposed model in terms of time and classification performance metrics. The proposed model is compared with seven baseline techniques from the literature and outperforms them on all subdatasets except the Garage Door dataset.
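The multilevel scheme can be sketched as two chained classifiers. Random forests are used here only as stand-in models, and the class encoding is assumed.

```python
from sklearn.ensemble import RandomForestClassifier

class TwoLevelIDS:
    """Sketch of the two-level scheme: detect attack vs. normal, then type it."""

    def __init__(self):
        self.detector = RandomForestClassifier(random_state=0)  # level 1: binary
        self.typer = RandomForestClassifier(random_state=0)     # level 2: attack type

    def fit(self, X, y_binary, X_attacks, y_type):
        self.detector.fit(X, y_binary)       # 0 = normal, 1 = attack
        self.typer.fit(X_attacks, y_type)    # trained on attack records only

    def predict(self, X):
        is_attack = self.detector.predict(X)
        types = self.typer.predict(X)
        # Report an attack type only where level 1 flagged an attack.
        return [t if a == 1 else "normal" for a, t in zip(is_attack, types)]
```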
{"title":"AI-Based Intrusion Detection for a Secure Internet of Things (IoT)","authors":"Reham Aljohani, Anas Bushnag, Ali Alessa","doi":"10.1007/s10922-024-09829-5","DOIUrl":"https://doi.org/10.1007/s10922-024-09829-5","url":null,"abstract":"<p>The increasing use of intelligent devices connected to the internet has contributed to the introduction of a new paradigm: the Internet of Things (IoT). The IoT is a set of devices connected via the internet that cooperate to achieve a specific goal. Smart cities, smart airports, smart transportation, smart homes, and many applications in the medical and educational fields all use the IoT. However, one major challenge is detecting malicious intrusions on IoT networks. Intrusion Detection Systems (IDSs) should detect these types of intrusions. This work proposes an effective model for detecting malicious IoT activities using machine learning techniques. The ToN-IoT dataset, which consists of seven connected devices (subdatasets), is used to construct an IoT network. The proposed model is a multilevel classification model. The first level distinguishes between attack and normal network activities. The second level is to classify the types of detected attacks. The experimental results prove the effectiveness of the proposed model in terms of time and classification performance metrics. The proposed model and seven baseline techniques in the literature are compared. The proposed model outperformed the baseline techniques in all subdatasets except for the Garage Door dataset.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2024-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Scheduling of Charger-UAV in Wireless Rechargeable Sensor Networks: Social Group Optimization Based Approach
Pub Date: 2024-06-05 | DOI: 10.1007/s10922-024-09833-9
Sk Md Abidar Rahaman, Md Azharuddin, Pratyay Kuila
Wireless power transfer (WPT) technology enables the sensor nodes (SNs) in wireless rechargeable sensor networks (WRSNs) to replenish their rechargeable batteries. Deploying unmanned aerial vehicles (UAVs) as flying chargers to replenish battery energy is an emerging technique, especially in harsh environments. The UAV itself operates on limited battery power and is hence also power-constrained; it therefore has to return to the depot in time to be fully recharged for the next cycle. The SNs should likewise be recharged before they completely deplete their energy. Designing an efficient charging schedule for the charger-UAV in WRSNs is challenging due to these constraints; moreover, the problem is non-deterministic polynomial hard (NP-hard). This paper addresses the problem of scheduling the charger-UAV to replenish the energy of SNs in WRSNs. A population-based, nature-inspired algorithm, social group optimization (SGO), is employed to design an efficient charging schedule. The flying energy of the UAV is taken into account to ensure that the UAV can safely and timely return to the depot. The fitness function is designed with a novel reward-based approach. The proposed work is extensively simulated, and performance comparisons along with statistical analysis are presented.
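A reward-based fitness of the sort the abstract mentions might look like the following: tours on which the UAV could not safely return to the depot score zero, and reachable SNs contribute more reward the lower their residual energy. All coordinates, energy figures, and the reward weighting are illustrative assumptions, not the paper's values.

```python
def fitness(order, pos, depot, residual, fly_cost=1.0, battery=1000.0):
    """Score a candidate charging order (a permutation of SN ids)."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    energy, here, reward = battery, depot, 0.0
    for sn in order:
        leg = fly_cost * dist(here, pos[sn])          # energy to reach the next SN
        back = fly_cost * dist(pos[sn], depot)        # energy to get back home
        if energy - leg - back < 0:
            return 0.0          # infeasible: UAV could not safely return to depot
        energy -= leg
        here = pos[sn]
        reward += 1.0 / (1.0 + residual[sn])          # urgent (low-energy) SNs pay more
    return reward

pos = {0: (2, 3), 1: (5, 1), 2: (4, 4)}               # SN coordinates
residual = {0: 0.2, 1: 0.8, 2: 0.5}                   # normalized remaining battery per SN
print(fitness([0, 2, 1], pos, depot=(0, 0), residual=residual))
```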