Towards Resource-Efficient DDoS Detection in IoT: Leveraging Feature Engineering of System and Network Usage Metrics
Pub Date: 2024-08-02 | DOI: 10.1007/s10922-024-09848-2
Nikola Gavric, Guru Prasad Bhandari, Andrii Shalaginov
The Internet of Things (IoT) is omnipresent, exposing a large number of devices that often lack security controls to the public Internet. In the modern world, many everyday processes depend on these devices, and their service outage could lead to catastrophic consequences. Many intrusion detection systems (IDSs) are based on Deep Packet Inspection (DPI); however, the linear computational complexity induced by their event-driven nature makes them power-demanding in resource-constrained IoT environments. In this paper, we shift away from the traditional IDS and introduce a novel, lightweight framework that relies on a time-driven algorithm to detect Distributed Denial of Service (DDoS) attacks, employing Machine Learning (ML) algorithms on newly engineered features that capture system and network utilization. These features are generated periodically, and there are only ten of them, resulting in a low and constant algorithmic complexity. Moreover, we leverage IoT-specific patterns to detect malicious traffic, as we argue that each Denial of Service (DoS) attack leaves a unique fingerprint in the proposed set of features. We construct a dataset by launching some of the most prevalent DoS attacks against an IoT device and demonstrate the effectiveness of our approach with high accuracy. The results show that standalone IoT devices can detect and classify DoS, and therefore arguably DDoS, attacks against them at a low computational cost and with a deterministic delay.
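The flavor of such a time-driven detector can be illustrated with a minimal sketch: utilization counters are sampled at a fixed period and fed to a pre-trained classifier, so the per-period work is constant regardless of packet rate. This sketch assumes psutil counters and a scikit-learn RandomForest; the ten features shown are illustrative, not the paper's exact set.

```python
# Hypothetical sketch: time-driven sampling of ten system/network utilization
# features, classified periodically by a pre-trained model (illustrative only).
import time
import psutil
from sklearn.ensemble import RandomForestClassifier

def sample_features(prev_net, interval_s):
    """Return a fixed-size feature vector for one sampling period."""
    net = psutil.net_io_counters()
    feats = [
        psutil.cpu_percent(),                      # CPU utilization (%)
        psutil.virtual_memory().percent,           # memory utilization (%)
        (net.bytes_sent - prev_net.bytes_sent) / interval_s,
        (net.bytes_recv - prev_net.bytes_recv) / interval_s,
        (net.packets_sent - prev_net.packets_sent) / interval_s,
        (net.packets_recv - prev_net.packets_recv) / interval_s,
        net.errin - prev_net.errin,
        net.errout - prev_net.errout,
        net.dropin - prev_net.dropin,
        net.dropout - prev_net.dropout,
    ]
    return feats, net

def monitor(model: RandomForestClassifier, interval_s: float = 1.0):
    """Classify each period as benign or a specific DoS class; O(1) work per period."""
    prev_net = psutil.net_io_counters()
    while True:
        time.sleep(interval_s)
        feats, prev_net = sample_features(prev_net, interval_s)
        label = model.predict([feats])[0]   # e.g. 'benign', 'syn_flood', ...
        if label != "benign":
            print(f"possible DoS attack detected: {label}")
```

Because the feature vector size and sampling period are fixed, the detection cost and delay stay constant even under flooding, which is the property the abstract emphasizes.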
{"title":"Towards Resource-Efficient DDoS Detection in IoT: Leveraging Feature Engineering of System and Network Usage Metrics","authors":"Nikola Gavric, Guru Prasad Bhandari, Andrii Shalaginov","doi":"10.1007/s10922-024-09848-2","DOIUrl":"https://doi.org/10.1007/s10922-024-09848-2","url":null,"abstract":"<p>The Internet of Things (IoT) is omnipresent, exposing a large number of devices that often lack security controls to the public Internet. In the modern world, many everyday processes depend on these devices, and their service outage could lead to catastrophic consequences. There are many Deep Packet Inspection (DPI) based intrusion detection systems (IDS). However, their linear computational complexity induced by the event-driven nature poses a power-demanding obstacle in resource-constrained IoT environments. In this paper, we shift away from the traditional IDS as we introduce a novel and lightweight framework, relying on a time-driven algorithm to detect Distributed Denial of Service (DDoS) attacks by employing Machine Learning (ML) algorithms leveraging the newly engineered features containing system and network utilization information. These features are periodically generated, and there are only ten of them, resulting in a low and constant algorithmic complexity. Moreover, we leverage IoT-specific patterns to detect malicious traffic as we argue that each Denial of Service (DoS) attack leaves a unique fingerprint in the proposed set of features. We construct a dataset by launching some of the most prevalent DoS attacks against an IoT device, and we demonstrate the effectiveness of our approach with high accuracy. The results show that standalone IoT devices can detect and classify DoS and, therefore, arguably, DDoS attacks against them at a low computational cost with a deterministic delay.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"45 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141882802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keeping Verticals’ Sovereignty During Application Migration in Continuum
Pub Date: 2024-07-31 | DOI: 10.1007/s10922-024-09843-7
Zbigniew Kotulski, Tomasz Nowak, Mariusz Sepczuk, Krzysztof Bocianiak, Tomasz Pawlikowski, Aleksandra Podlasek, Jean-Philippe Wary
Competing service providers in the cloud environment must ensure that services are delivered under the promised security requirements. This is crucial for mobile services, where a user’s movement results in the service migrating between edge servers or clouds in the Continuum. Maintaining service sovereignty before, during, and after the migration is a real challenge, especially when the service provider has committed to ensuring its quality under the Service Level Agreement. In this paper, we present the main challenges mobile service providers face in a cloud environment in guaranteeing the required level of security and digital sovereignty as described in the Security Service Level Agreement, with emphasis on challenges resulting from service migration between the old and new locations. We present the security and sovereignty context intended for migration and the steps of the migration algorithm. We also analyze three specific service migration cases for three vertical industries with different service quality requirements.
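One concrete flavor of such a sovereignty-aware migration step is a pre-migration filter over candidate sites. The sketch below is purely illustrative and is not the paper's algorithm; the SecSLA fields and site attributes are assumptions.

```python
# Illustrative sketch (not the paper's algorithm): filter candidate edge/cloud
# sites by the sovereignty and security requirements recorded in a SecSLA-like
# context before migrating a mobile service. All field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class SecSLAContext:
    allowed_jurisdictions: set      # e.g. {"EU"}
    required_controls: set          # e.g. {"encryption-at-rest", "TPM-attestation"}
    max_migration_downtime_ms: int

@dataclass
class Site:
    name: str
    jurisdiction: str
    controls: set = field(default_factory=set)
    est_downtime_ms: int = 0

def eligible_targets(sla: SecSLAContext, candidates: list[Site]) -> list[Site]:
    """Keep only sites that preserve sovereignty and meet the SecSLA."""
    return [
        s for s in candidates
        if s.jurisdiction in sla.allowed_jurisdictions
        and sla.required_controls <= s.controls
        and s.est_downtime_ms <= sla.max_migration_downtime_ms
    ]

sla = SecSLAContext({"EU"}, {"encryption-at-rest"}, 200)
sites = [Site("edge-a", "EU", {"encryption-at-rest"}, 120), Site("edge-b", "US", set(), 80)]
print([s.name for s in eligible_targets(sla, sites)])  # ['edge-a']
```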
{"title":"Keeping Verticals’ Sovereignty During Application Migration in Continuum","authors":"Zbigniew Kotulski, Tomasz Nowak, Mariusz Sepczuk, Krzysztof Bocianiak, Tomasz Pawlikowski, Aleksandra Podlasek, Jean-Philippe Wary","doi":"10.1007/s10922-024-09843-7","DOIUrl":"https://doi.org/10.1007/s10922-024-09843-7","url":null,"abstract":"<p>Competing service providers in the cloud environment ensure services are delivered under the promised security requirements. It is crucial for mobile services where user’s movement results in the service’s migration between edge servers or clouds in the Continuum. Maintaining service sovereignty before, during, and after the migration is a real challenge, especially when the service provider has committed to ensuring its quality following the Service Level Agreement. In this paper, we present the main challenges mobile service providers face in a cloud environment to guarantee the required level of security and digital sovereignty as described in the Security Service Level Agreement, with emphasis on challenges resulting from the service migration between the old and new locations. We present the security and sovereignty context intended for migration and the steps of the migration algorithm. We also analyze three specific service migration cases for three vertical industries with different service quality requirements.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"44 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141863798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SDBlock-IoT: A Blockchain-Enabled Software-Defined Multicontroller Architecture to Safeguard OpenFlow Tables
Pub Date: 2024-07-30 | DOI: 10.1007/s10922-024-09844-6
Birglang Bargayary, Nabajyoti Medhi
Integrating Software-Defined Networking (SDN) with the Internet of Things (IoT) simplifies the management of IoT devices; however, it introduces security challenges. Adversaries may manipulate forwarding rules to redirect communication, compromising user security. Additionally, the centralized nature of SDN-enabled IoT networks creates a single point of failure when the master controller fails. To address these issues, we present SDBlock-IoT, a distributed SDN architecture based on blockchain technology that increases resiliency in the event of master controller failure. Our proposed model considers the response time and resource utilization of equal controllers, ensuring that the most suitable controller assumes the role of master controller. We enhance the integrity of OpenFlow forwarding rules through the Smart Agent and SC, which validate whether a flow is registered on the blockchain; the Smart Agent verifies forwarding rules for every new flow request. We conducted experiments on hardware SDN switches using a Ryu OpenFlow controller and a private blockchain, demonstrating the effectiveness of our approach. Evaluation results indicate that SDBlock-IoT outperforms existing solutions in terms of flow verification time, controller recovery time, CPU utilization, and transaction costs.
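The core idea, verifying a forwarding rule against an on-chain registry before installation, can be sketched as follows. This is a toy stand-in, not SDBlock-IoT's implementation: the ledger class and its register_rule/lookup_rule methods are assumptions, and a real deployment would call an actual smart contract.

```python
# Minimal sketch (assumptions throughout): a "Smart Agent"-style check that a
# forwarding rule for a new flow matches what was registered on the chain
# before it is installed on the switch.
import hashlib, json

class FlowRuleLedger:
    """Toy append-only registry standing in for a blockchain smart contract."""
    def __init__(self):
        self._rules = {}

    def register_rule(self, match: dict, actions: list) -> str:
        digest = hashlib.sha256(json.dumps([match, actions], sort_keys=True).encode()).hexdigest()
        self._rules[digest] = (match, actions)
        return digest

    def lookup_rule(self, digest: str):
        return self._rules.get(digest)

def verify_and_install(ledger: FlowRuleLedger, match: dict, actions: list, install) -> bool:
    """Install the rule only if an identical rule is registered on the ledger."""
    digest = hashlib.sha256(json.dumps([match, actions], sort_keys=True).encode()).hexdigest()
    if ledger.lookup_rule(digest) is None:
        return False          # unregistered (possibly tampered) rule: reject
    install(match, actions)   # e.g. push a FlowMod via the controller API
    return True
```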
{"title":"SDBlock-IoT: A Blockchain-Enabled Software-Defined Multicontroller Architecture to Safeguard OpenFlow Tables","authors":"Birglang Bargayary, Nabajyoti Medhi","doi":"10.1007/s10922-024-09844-6","DOIUrl":"https://doi.org/10.1007/s10922-024-09844-6","url":null,"abstract":"<p>Integrating Software-Defined Networking (SDN) with the Internet of Things (IoT) simplifies the management of IoT devices; however, it introduces security challenges. Adversaries may manipulate forwarding rules to redirect communication, compromising user security. Additionally, the centralized nature of SDN-enabled IoT networks poses a single point of failure during master controller failure. To address these issues, we present SDBlock-IoT, a distributed SDN architecture based on blockchain technology. This ensures increased resiliency in the event of master controller failure. Our proposed model considers response time and resource utilization of equal controllers, ensuring the most suitable controller assumes the role of master controller. We enhance the integrity of OpenFlow forwarding rules through the Smart Agent and SC, which validate whether a flow is registered on the blockchain or not. The Smart Agent verifies forwarding rules for every new flow request. We conducted experiments on hardware SDN switches using a Ryu OpenFlow controller and a private blockchain, demonstrating the effectiveness of our approach. Evaluation results indicate that SDBlock-IoT outperforms existing solutions in terms of flow verification time, controller recovery time, CPU utilization, and transaction costs.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"217 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141863797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
End-to-End No-wait Scheduling for Time-Triggered Streams in Mixed Wired-Wireless Networks
Pub Date: 2024-07-08 | DOI: 10.1007/s10922-024-09837-5
Gourav Prateek Sharma, Wouter Tavernier, Didier Colle, Mario Pickavet, Jetmir Haxhibeqiri, Jeroen Hoebeke, Ingrid Moerman
Proprietary technologies for time-critical communication in industrial environments are being gradually replaced by Time-Sensitive Networking (TSN)-enabled Ethernet. Furthermore, attempts have been made to bring TSN features into wireless networks so that the flexibility of wireless networks can be utilized and the end-to-end timing of Time-Triggered (TT) streams can be guaranteed. Given a mixed wired-wireless network, the scheduling problem must be solved for a set of TT stream requests. In this paper, we formulate the no-wait scheduling problem for mixed wired-wireless networks as a Mixed Integer Linear Programming (MILP) model with the objective of minimizing the flowspan. We also propose a relaxation of the original MILP in the form of a 2-stage MILP formulation. Next, a scalable approach based on a greedy heuristic is proposed to solve the problem for realistic-size networks. Evaluation results show that the greedy heuristic is suitable for realistic problem sizes, where the MILP-based approach is found to be practically infeasible. Furthermore, the impact of wireless requests on the performance of the greedy heuristic is reported.
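To make the no-wait constraint concrete: once a stream starts, it must occupy consecutive slots on consecutive links of its path with no buffering in between. The following greedy sketch, assuming unit-slot hops and longest-path-first ordering, is an illustration of that style of heuristic, not the paper's exact algorithm.

```python
# Illustrative greedy sketch of no-wait scheduling: each stream occupies
# back-to-back slots on the links of its path; streams are placed at the
# earliest start offset that avoids conflicts on every link, keeping the
# overall flowspan low.
def greedy_no_wait_schedule(streams):
    """streams: list of (stream_id, path) where path is a list of link ids;
    each hop is assumed to take one time slot. Returns {stream_id: start_slot}."""
    busy = {}        # link id -> set of occupied slots
    schedule = {}
    # Longest paths first: a common greedy ordering to reduce flowspan.
    for sid, path in sorted(streams, key=lambda s: -len(s[1])):
        start = 0
        while True:
            # No-wait: hop h of the stream uses slot start + h on link path[h].
            if all((start + h) not in busy.get(link, set())
                   for h, link in enumerate(path)):
                break
            start += 1
        for h, link in enumerate(path):
            busy.setdefault(link, set()).add(start + h)
        schedule[sid] = start
    return schedule

demo = [("tt1", ["A-B", "B-C"]), ("tt2", ["A-B", "B-D"]), ("tt3", ["B-C"])]
print(greedy_no_wait_schedule(demo))   # {'tt1': 0, 'tt2': 1, 'tt3': 0}
```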
{"title":"End-to-End No-wait Scheduling for Time-Triggered Streams in Mixed Wired-Wireless Networks","authors":"Gourav Prateek Sharma, Wouter Tavernier, Didier Colle, Mario Pickavet, Jetmir Haxhibeqiri, Jeroen Hoebeke, Ingrid Moerman","doi":"10.1007/s10922-024-09837-5","DOIUrl":"https://doi.org/10.1007/s10922-024-09837-5","url":null,"abstract":"<p>Proprietary communication technologies for time-critical communication in industrial environments are being gradually replaced by Time-sensitive Networking (TSN)-enabled Ethernet. Furthermore, attempts have been made to bring TSN features into wireless networks so that the flexibility of wireless networks can be utilized, and the end-to-end timings for Time-Triggered (TT) streams can be guaranteed. Given a mixed wired-wireless network, the scheduling problem should be solved for a set of TT stream requests. In this paper, we formulate the no-wait scheduling problem for mixed wired-wireless networks as a Mixed Integer Linear Programming (MILP) model with the objective of minimizing the flowspan. We also propose a relaxation of the original MILP in the form of a 2-stage MILP formulation. Next, a scalable approach based on the greedy heuristic is proposed to solve the problem for realistic-size networks. Evaluation results show that the greedy heuristic is suitable for realistic problem sizes where the MILP-based approach is found to be practically infeasible. Furthermore, the impact of wireless requests on the performance of the greedy heuristic is reported.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"40 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141566756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed AgriFood Supply Chains
Pub Date: 2024-06-27 | DOI: 10.1007/s10922-024-09839-3
Hélio Pesanhane, Wesley R. Bezerra, Fernando Koch, Carlos Westphall
In AgriFood scenarios, where farmers need to ensure that their produce is safely produced, transported, and stored, they rely on a network of IoT devices to monitor conditions such as temperature and humidity throughout the supply chain. However, managing this large-scale IoT environment poses significant challenges, including transparency, traceability, data tampering, and accountability. Blockchain is portrayed as a technology capable of addressing these key issues in the AgriFood supply chain. Nonetheless, there are challenges related to managing a large-scale IoT environment using current security, authentication, and access control solutions. To address these issues, we introduce an architecture in which IoT devices record data and store them in the participant’s cloud after validation by endorsing peers following an attribute-based access control (ABAC) policy. This policy allows IoT device owners to specify the physical quantities, value ranges, time periods, and types of data that each device is permitted to measure and transmit. Authorized users can access these data under the ABAC policy contract. Our solution demonstrates efficiency, with 50% of IoT data write requests completed in less than 0.14 s using the solo ordering service and 2.5 s with the Raft ordering service. Data retrieval shows an average latency between 0.34 and 0.57 s and a throughput ranging from 124.8 down to 9.9 Transactions Per Second (TPS) for data sizes between 8 and 512 kilobytes. This architecture not only enhances the management of IoT environments in the AgriFood supply chain but also ensures data privacy and security.
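The kind of per-device ABAC policy the abstract describes can be pictured as a small validation step run by endorsing peers. The sketch below is a hedged illustration only; the policy schema and field names are assumptions, not the paper's smart-contract interface.

```python
# Hedged sketch of an attribute-based access control (ABAC) check: device
# owners declare what each device may measure and report (quantity, value
# range, time window), and each reading is validated against that policy
# before being accepted. Field names are illustrative.
from datetime import time

POLICY = {
    "sensor-42": {
        "temperature_c": {"range": (-10.0, 50.0), "window": (time(0, 0), time(23, 59))},
        "humidity_pct":  {"range": (0.0, 100.0),  "window": (time(6, 0), time(20, 0))},
    }
}

def endorse_reading(device_id: str, quantity: str, value: float, at: time) -> bool:
    """Accept a reading only if the device may report this quantity, the value
    lies in the permitted range, and the timestamp falls in the allowed window."""
    rules = POLICY.get(device_id, {}).get(quantity)
    if rules is None:
        return False
    lo, hi = rules["range"]
    start, end = rules["window"]
    return lo <= value <= hi and start <= at <= end

print(endorse_reading("sensor-42", "temperature_c", 4.2, time(13, 30)))  # True
print(endorse_reading("sensor-42", "humidity_pct", 55.0, time(23, 0)))   # False (outside window)
```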
{"title":"Distributed AgriFood Supply Chains","authors":"Hélio Pesanhane, Wesley R. Bezerra, Fernando Koch, Carlos Westphall","doi":"10.1007/s10922-024-09839-3","DOIUrl":"https://doi.org/10.1007/s10922-024-09839-3","url":null,"abstract":"<p>In Agrifood scenarios, where farmers need to ensure that their produce is safely produced, transported, and stored, they rely on a network of IoT devices to monitor conditions such as temperature and humidity throughout the supply chain. However, managing this large-scale IoT environment poses significant challenges, including transparency, traceability, data tampering, and accountability. Blockchain is portrayed as a technology capable of solving the problems of transparency, traceability, data tampering, and accountability, which are key issues in the AgriFood supply chain. Nonetheless, there are challenges related to managing a large-scale IoT environment using the current security, authentication, and access control solutions. To address these issues, we introduce an architecture in which IoT devices record data and store them in the participant’s cloud after validation by endorsing peers following an attribute-based access control (ABAC) policy. This policy allows IoT device owners to specify the physical quantities, value ranges, time periods, and types of data that each device is permitted to measure and transmit. Authorized users can access this data under the ABAC policy contract. Our solution demonstrates efficiency, with 50% of IoT data write requests completed in less than 0.14 s using solo ordering service and 2.5 s with raft ordering service. Data retrieval shows an average latency between 0.34 and 0.57 s and a throughput ranging from 124.8 to 9.9 Transactions Per Second (TPS) for data sizes between 8 and 512 kilobytes. This architecture not only enhances the management of IoT environments in the AgriFood supply chain but also ensures data privacy and security.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"15 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing BERT-Based Language Model for Multi-label Vulnerability Detection of Smart Contract in Blockchain
Pub Date: 2024-06-24 | DOI: 10.1007/s10922-024-09832-w
Van Tong, Cuong Dao, Hai-Anh Tran, Truong X. Tran, Sami Souihi
Smart contracts are decentralized applications that hold a pivotal role in blockchain-based systems. Smart contracts are written in error-prone programming languages, so they are affected by many vulnerabilities (e.g., time dependence, outdated version, etc.), which can result in substantial economic loss within the blockchain ecosystem. Therefore, many vulnerability detection tools, such as Slither, Mythril, and so forth, are designed to detect vulnerabilities in smart contracts. However, these tools require high processing time and cannot achieve good accuracy on today's complex smart contracts. Consequently, many studies have shifted towards Deep Learning (DL) techniques, which consider bytecode to determine vulnerabilities in smart contracts. However, these mechanisms reveal three main limitations. First, they focus on multi-class problems, assuming that a given smart contract contains only a single vulnerability, while a smart contract can contain more than one vulnerability. Second, they encounter ineffective word embedding with large input sequences. Third, the learning model in these mechanisms is forced to classify into one of the pre-defined labels even when it cannot make decisions accurately, leading to misclassifications. Therefore, in this paper, we propose a multi-label vulnerability classification mechanism using a language model. To deal with the ineffective word embedding, the proposed mechanism takes into account not only the implicit features derived from language models (e.g., SecBERT, etc.) but also auxiliary features extracted from other word embedding techniques (e.g., TF-IDF, etc.). Besides, a trustworthy neural network model is proposed to reduce the misclassification rate of vulnerability classification. In detail, an additional neuron is added to the output of the model to indicate whether the model is able to make decisions accurately. The experimental results illustrate that the trustworthy model outperforms benchmarks (e.g., binary relevance, label powerset, classifier chain, etc.), achieving an F1-score of up to approximately 98% while requiring a low execution time of 26 ms.
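The two design points worth visualizing are the concatenation of encoder and auxiliary features and the extra "cannot decide" output neuron. The PyTorch sketch below is a rough approximation under stated assumptions (DistilRoBERTa backbone, sigmoid multi-label outputs), not the authors' implementation.

```python
# Rough sketch (assumptions, not the authors' code): a multi-label head over a
# BERT-style encoder whose output has one extra neuron signalling "cannot decide
# reliably"; auxiliary features (e.g. TF-IDF) are concatenated with the pooled
# embedding before classification.
import torch
import torch.nn as nn
from transformers import AutoModel

class TrustworthyMultiLabelHead(nn.Module):
    def __init__(self, encoder_name="distilroberta-base", n_aux=128, n_labels=6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # +1 output: the extra "abstain / not confident" neuron.
        self.classifier = nn.Linear(hidden + n_aux, n_labels + 1)

    def forward(self, input_ids, attention_mask, aux_features):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]          # first-token embedding
        logits = self.classifier(torch.cat([pooled, aux_features], dim=-1))
        vuln_probs = torch.sigmoid(logits[:, :-1])    # independent per-label probabilities
        abstain = torch.sigmoid(logits[:, -1])        # near 1.0 -> model declines to decide
        return vuln_probs, abstain
```

Sigmoid outputs per label (rather than a softmax over classes) are what make the head multi-label, matching the first limitation the abstract addresses.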
{"title":"Enhancing BERT-Based Language Model for Multi-label Vulnerability Detection of Smart Contract in Blockchain","authors":"Van Tong, Cuong Dao, Hai-Anh Tran, Truong X. Tran, Sami Souihi","doi":"10.1007/s10922-024-09832-w","DOIUrl":"https://doi.org/10.1007/s10922-024-09832-w","url":null,"abstract":"<p>Smart contracts are decentralized applications that hold a pivotal role in blockchain-based systems. Smart contracts are composed of error-prone programming languages, so it is affected by many vulnerabilities (e.g., time dependence, outdated version, etc.), which can result in a substantial economic loss within the blockchain ecosystem. Therefore, many vulnerability detection tools are designed to detect the vulnerabilities in smart contracts such as Slither, Mythrill and so forth. However, these tools require high processing time and cannot achieve good accuracy with complex smart contracts nowadays. Consequently, many studies have shifted towards using Deep Learning (DL) techniques, which consider bytecode to determine vulnerabilities in smart contracts. However, these mechanisms reveal three main limitations. First, these mechanisms focus on multi-class problems, assuming that a given smart contract contains only a single vulnerability while the smart contract can contain more than one vulnerability. Second, these approaches encounter ineffective word embedding with large input sequences. Third, the learning model in these mechanisms is forced to classify into one of pre-defined labels even when it cannot make decisions accurately, leading to misclassifications. Therefore, in this paper, we propose a multi-label vulnerability classification mechanism using a language model. To deal with the ineffective word embedding, the proposed mechanism not only takes into account the implicit features derived from the language models (e.g., SecBERT, etc.) but also auxiliary features extracted from other word embedding techniques (e.g., TF-IDF, etc.). Besides, a trustworthy neural network model is proposed to reduce the misclassification rate of vulnerability classification. In detail, an additional neuron is added to the output of the model to indicate whether the model is able to make decisions accurately or not. The experimental results illustrate that the trustworthy model outperforms benchmarks (e.g., binary relevance, label powerset, classifier chain, etc.), achieving up to approximately 98% f1-score while requiring low execution time with 26 ms.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"62 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RIS-aided Cooperative FD-SWIPT-NOMA Performance Over Nakagami-m Channels
Pub Date: 2024-06-19 | DOI: 10.1007/s10922-024-09838-4
Wilson de Souza, Taufik Abrão
In this work, we investigate Reconfigurable Intelligent Surface (RIS)-aided Full-Duplex (FD)-Simultaneous Wireless Information and Power Transfer (SWIPT)-Cooperative Non-Orthogonal Multiple Access (C-NOMA) consisting of two paired devices. The device with better channel conditions, \(D_1\), is designated to act as an FD relay to assist the device with poor channel conditions, \(D_2\). We assume that \(D_1\) does not use its own battery energy to cooperate but harvests energy by utilizing SWIPT. A practical non-linear Energy Harvesting (EH) model is considered. We first approximate the harvested power as a Gamma Random Variable (RV) via the Moment Matching (MM) technique. This allows us to derive analytical expressions for the Outage Probability (OP) and Ergodic Rate (ER) that are simple to compute yet accurate for a wide range of system parameters, such as EH coefficients and residual Self-Interference (SI) levels, and they are extensively validated by numerical simulations. The OP and ER expressions reveal how important it is to mitigate the SI in the FD relay mode since, for reasonable values of the residual SI coefficient, its detrimental effect on the system performance is extremely noticeable. Also, numerical results reveal that increasing the number of RIS elements can benefit the cooperative system much more than the non-cooperative one.
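The moment-matching step itself is simple to illustrate: a Gamma RV with shape \(k = \mu^2/\sigma^2\) and scale \(\theta = \sigma^2/\mu\) has the same mean and variance as the harvested power. The snippet below is a worked illustration of that fit on placeholder samples, not the paper's derivation or channel model.

```python
# Worked illustration of the Moment Matching step (a sketch, not the paper's
# derivation): approximate the distribution of harvested power by a Gamma RV
# with the same mean and variance: k = mean^2/var, theta = var/mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder samples of harvested power (in the paper these would come from
# the non-linear EH model driven by Nakagami-m fading); here: arbitrary data.
p_harv = rng.gamma(shape=2.3, scale=0.4, size=100_000) * (1 - np.exp(-rng.rayleigh(1.0, 100_000)))

mean, var = p_harv.mean(), p_harv.var()
k, theta = mean**2 / var, var / mean      # matched Gamma parameters

# Compare an outage-style tail probability under the fit vs. the raw samples.
threshold = 0.2
emp = np.mean(p_harv < threshold)
fit = stats.gamma.cdf(threshold, a=k, scale=theta)
print(f"k={k:.3f}, theta={theta:.3f}, P[P_harv<{threshold}]: empirical={emp:.4f}, gamma fit={fit:.4f}")
```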
{"title":"RIS-aided Cooperative FD-SWIPT-NOMA Performance Over Nakagami-m Channels","authors":"Wilson de Souza, Taufik Abrão","doi":"10.1007/s10922-024-09838-4","DOIUrl":"https://doi.org/10.1007/s10922-024-09838-4","url":null,"abstract":"<p>In this work, we investigate Reconfigurable Intelligent Surface (RIS)-aided Full-Duplex (FD)-Simultaneous Wireless Information Power Transfer (SWIPT)-Cooperative non-Orthogonal Multiple Access (C-NOMA) consisting of two paired devices. The device with better channel conditions (<span>(D_1)</span>) is designated to act as a FD relay to assist the device with poor channel conditions (<span>(D_2)</span>). We assume that <span>(D_1)</span> does not use its own battery energy to cooperate but harvests energy by utilizing SWIPT. A practical non-linear Energy Harvesting (EH) model is considered. We first approximate the harvested power as a Gamma Random Variable (RV) via the Moment Matching (MM) technique. This allows us to derive analytical expressions for Outage Probability (OP) and ergodic rate (ER) that are simple to compute yet accurate for a wide range of system parameters, such as EH coefficients and residual Self-Interference (SI) levels, being extensively validated by numerical simulations. The OP and ER expressions reveal how important it is to mitigate the SI in the FD relay mode since, for reasonable values of residual SI coefficient, its detrimental effect on the system performance, is extremely noticeable. Also, numerical results reveal that increasing the number of RIS elements can benefit the cooperative system much more than the non-cooperative one.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"21 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Flow Table Caching Architecture and Replacement Policy for SDN Switches
Pub Date: 2024-06-18 | DOI: 10.1007/s10922-024-09824-w
Xianfeng Li, Haoran Sun, Yan Huang
Software-defined networks (SDN) rely on flow tables to forward packets from different flows with different policies. To speed up packet forwarding, the rules in the flow table should reside in the forwarding plane as much as possible to reduce the chances of consulting the SDN controller, which is a slow process. The rules are usually cached in the forwarding plane with a Ternary Content Addressable Memory (TCAM) device. However, a TCAM has limited capacity, because it is expensive and power-hungry. As a result, wise caching of a subset of flow rules in the TCAM is needed. In this paper, we address two related issues that affect caching efficiency: rules to be cached and rules to be replaced. For the first issue, caching an active rule hit by a flow may require caching inactive rules due to rule dependency. We propose a two-stage caching architecture called CRAFT, which reduces inactive rules in the cache by cutting down long dependent chains and by partitioning rules with massive dependent rules into non-overlapping sub-rules. For the second issue, a replacement policy that is unaware of flow traffic characteristics may evict heavy hitters instead of mice flows. We propose RRTC to address this issue, a rule replacement policy that takes real-time network traffic characteristics into consideration. By recognizing heavy hitters and protecting their matching rules in the TCAM, RRTC performs better than the least recently used (LRU) policy in terms of cache hit ratio. Simulation results show that our combined rule caching and replacement framework outperforms previous work considerably.
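The difference from plain LRU can be seen in a small sketch: eviction skips rules whose flows are currently heavy hitters and falls back to LRU among the remaining "mice" entries. This is a simplified illustration in the spirit of a traffic-aware policy, with an assumed byte-rate threshold; it is not the authors' RRTC algorithm.

```python
# Simplified sketch of a traffic-aware replacement policy (not the authors'
# algorithm): rules whose flows exceed a byte-rate threshold are treated as
# heavy hitters and protected; eviction falls back to the least recently used
# rule among the remaining "mice" entries.
from collections import OrderedDict

class TrafficAwareRuleCache:
    def __init__(self, capacity: int, heavy_threshold_bps: float):
        self.capacity = capacity
        self.heavy_threshold_bps = heavy_threshold_bps
        self.rules = OrderedDict()          # rule_id -> measured byte rate

    def _evict(self):
        # Prefer evicting the LRU rule that is not a heavy hitter.
        for rule_id, rate in self.rules.items():          # iteration order = LRU first
            if rate < self.heavy_threshold_bps:
                del self.rules[rule_id]
                return
        self.rules.popitem(last=False)                     # all heavy: plain LRU

    def hit(self, rule_id: str, rate_bps: float):
        """Update recency and the measured byte rate for a matched rule."""
        self.rules[rule_id] = rate_bps
        self.rules.move_to_end(rule_id)

    def insert(self, rule_id: str, rate_bps: float = 0.0):
        if rule_id not in self.rules and len(self.rules) >= self.capacity:
            self._evict()
        self.hit(rule_id, rate_bps)
```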
{"title":"Efficient Flow Table Caching Architecture and Replacement Policy for SDN Switches","authors":"Xianfeng Li, Haoran Sun, Yan Huang","doi":"10.1007/s10922-024-09824-w","DOIUrl":"https://doi.org/10.1007/s10922-024-09824-w","url":null,"abstract":"<p>Software-defined networks (SDN) rely on flow tables to forward packets from different flows with different policies. To speed up packet forwarding, the rules in the flow table should reside in the forwarding plane as much as possible to reduce the chances of consulting the SDN controller, which is a slow process. The rules are usually cached in the forwarding plane with a Ternary Content Addressable Memory (TCAM) device. However, a TCAM has limited capacity, because it is expensive and power-hungry. As a result, wise caching of a subset of flow rules in TCAM is needed. In this paper, we address two related issues that affect caching efficiency: <i>rules to be cached</i> and <i>rules to be replaced</i>. For the first issue, caching an active rule hit by a flow may need to cache inactive rules due to rule dependency. We propose a two-stage caching architecture called CRAFT, which reduces inactive rules in cache by cutting down long dependent chains and by partitioning rules with massive dependent rules into non-overlapping sub-rules. For the second issue, unawareness of the flow traffic characteristics may evict heavy hitters instead of mice flows. We propose RRTC to address this issue, which is a rule replacement policy taking the real-time network traffic characteristics into consideration. By recognizing the heavy hitters and protecting their matching rules in TCAM, RRTC performs better than least recently used(LRU) policy in terms of cache hit ratio. Simulation results show that our combined rule caching and replacement framework outperforms previous work considerably.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"2 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multiobjective Metaheuristic-Based Container Consolidation Model for Cloud Application Performance Improvement
Pub Date: 2024-06-18 | DOI: 10.1007/s10922-024-09835-7
Vincent Bracke, José Santos, Tim Wauters, Filip De Turck, Bruno Volckaert
This work describes an approach to enhance container orchestration platforms with an autonomous and dynamic rescheduling system that aims at improving application service time by co-locating highly interdependent containers to reduce network delay. Excessive container consolidation may, however, lead to host CPU saturation, in turn impairing the service time. The multiobjective approach proposed in this work aims to improve application service time by minimizing both inter-server network traffic and CPU throttling on overloaded servers. To this end, the Simulated Annealing combinatorial optimization heuristic is used, and its performance is compared against the optimal solution obtained by Mathematical Programming. Additionally, the impact of the proposed system is validated on a Kubernetes cluster hosting three concurrent applications under varying load scenarios. The proposed rescheduling system systematically i) improves the application service time (by up to 27.2% in our experiments) and ii) surpasses the improvement reached by the Kubernetes descheduler.
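The two competing objectives can be folded into a single cost that simulated annealing explores, as in the toy sketch below. The weights, move operator, and cooling schedule are assumptions for illustration; this is not the paper's model or its Kubernetes integration.

```python
# Toy simulated-annealing sketch of the rescheduling idea (illustrative only):
# a placement maps containers to servers; the cost combines inter-server
# traffic between chatty container pairs and a penalty for servers pushed
# past their CPU capacity.
import math, random

def cost(placement, traffic, cpu_demand, cpu_capacity, w_net=1.0, w_cpu=5.0):
    net = sum(t for (a, b), t in traffic.items() if placement[a] != placement[b])
    load = {}
    for c, s in placement.items():
        load[s] = load.get(s, 0.0) + cpu_demand[c]
    throttle = sum(max(0.0, l - cpu_capacity) for l in load.values())
    return w_net * net + w_cpu * throttle

def anneal(containers, servers, traffic, cpu_demand, cpu_capacity,
           t0=10.0, cooling=0.995, steps=20_000, seed=1):
    rng = random.Random(seed)
    placement = {c: rng.choice(servers) for c in containers}
    best = dict(placement)
    cur = best_cost = cost(placement, traffic, cpu_demand, cpu_capacity)
    t = t0
    for _ in range(steps):
        c = rng.choice(containers)
        old = placement[c]
        placement[c] = rng.choice(servers)          # move one container
        new = cost(placement, traffic, cpu_demand, cpu_capacity)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new
            if new < best_cost:
                best, best_cost = dict(placement), new
        else:
            placement[c] = old                      # reject the move
        t *= cooling
    return best, best_cost
```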
{"title":"A Multiobjective Metaheuristic-Based Container Consolidation Model for Cloud Application Performance Improvement","authors":"Vincent Bracke, José Santos, Tim Wauters, Filip De Turck, Bruno Volckaert","doi":"10.1007/s10922-024-09835-7","DOIUrl":"https://doi.org/10.1007/s10922-024-09835-7","url":null,"abstract":"<p>This work describes an approach to enhance container orchestration platforms with an autonomous and dynamic rescheduling system that aims at improving application service time by co-locating highly interdependent containers for network delay reduction. Unreasonable container consolidation may however lead to host CPU saturation, in turn impairing the service time. The multiobjective approach proposed in this work aims to improve application service-time by minimizing both inter-server network traffic and CPU throttling on overloaded servers. To this extent, the Simulated Annealing combinatorial optimization heuristic is used and compared on its relative performance towards the optimal solution obtained by Mathematical Programming. Additionally, the impact of the proposed system is validated on a Kubernetes cluster hosting three concurrent applications, and this under varying load scenarios. The proposed rescheduling system systematically i) improves the application service-time (up to 27.2% from our experiments) and ii) surpasses the improvement reached by the Kubernetes descheduler.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"21 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benchmarking Large Language Models for Log Analysis, Security, and Interpretation
Pub Date: 2024-06-13 | DOI: 10.1007/s10922-024-09831-x
Egil Karlsen, Xiao Luo, Nur Zincir-Heywood, Malcolm Heywood
Large Language Models (LLMs) continue to demonstrate their utility in a variety of emergent capabilities in different fields. An area of cybersecurity that could benefit from effective language understanding is the analysis of log files. This work explores LLMs with different architectures (BERT, RoBERTa, DistilRoBERTa, GPT-2, and GPT-Neo) that are benchmarked for their capacity to better analyze application and system log files for security. Specifically, 60 fine-tuned language models for log analysis are deployed and benchmarked. The resulting models demonstrate that they can be used to perform log analysis effectively, with fine-tuning being particularly important for appropriate domain adaptation to specific log types. The best-performing fine-tuned sequence classification model (DistilRoBERTa) outperforms the current state of the art, with an average F1-score of 0.998 across six datasets from both web application and system log sources. To achieve this, we propose and implement a new experimentation pipeline (LLM4Sec) which leverages LLMs for log analysis experimentation, evaluation, and analysis.
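The basic shape of such a fine-tuning run, encoder model plus a sequence-classification head over labelled log lines, looks roughly like the sketch below. It assumes a toy dataset and Hugging Face Transformers/Datasets; it is not the LLM4Sec pipeline itself.

```python
# Minimal fine-tuning sketch in the spirit of the benchmark above (assumed
# dataset layout; not the LLM4Sec pipeline): DistilRoBERTa as a binary
# sequence classifier over log lines labelled normal/anomalous.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "distilroberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Placeholder data; a real run would load a labelled log dataset here.
train = Dataset.from_dict({
    "text": ["GET /index.html 200", "GET /../../etc/passwd 403"],
    "label": [0, 1],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="log-clf", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train,
)
trainer.train()
```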
{"title":"Benchmarking Large Language Models for Log Analysis, Security, and Interpretation","authors":"Egil Karlsen, Xiao Luo, Nur Zincir-Heywood, Malcolm Heywood","doi":"10.1007/s10922-024-09831-x","DOIUrl":"https://doi.org/10.1007/s10922-024-09831-x","url":null,"abstract":"<p>Large Language Models (LLM) continue to demonstrate their utility in a variety of emergent capabilities in different fields. An area that could benefit from effective language understanding in cybersecurity is the analysis of log files. This work explores LLMs with different architectures (BERT, RoBERTa, DistilRoBERTa, GPT-2, and GPT-Neo) that are benchmarked for their capacity to better analyze application and system log files for security. Specifically, 60 fine-tuned language models for log analysis are deployed and benchmarked. The resulting models demonstrate that they can be used to perform log analysis effectively with fine-tuning being particularly important for appropriate domain adaptation to specific log types. The best-performing fine-tuned sequence classification model (DistilRoBERTa) outperforms the current state-of-the-art; with an average F1-Score of 0.998 across six datasets from both web application and system log sources. To achieve this, we propose and implement a new experimentation pipeline (LLM4Sec) which leverages LLMs for log analysis experimentation, evaluation, and analysis.</p>","PeriodicalId":50119,"journal":{"name":"Journal of Network and Systems Management","volume":"20 1","pages":""},"PeriodicalIF":3.6,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141501336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}