Mobile ad-hoc networks (MANETs) are everywhere. They are the basis for many current technologies (including VANETs, IoT, etc.) and are used in multiple domains (including military and disaster-zone deployments). For them to function, routing protocols have been defined that take the high mobility of network nodes into account. These protocols, however, are vulnerable to devastating attacks. Many solutions have been proposed for various attacks, including DCFM (Denial Contradictions with Fictitious nodes Mechanism) for the node-isolation and gray-hole attack variants. In this work we present a refinement of DCFM, calculate its cost, and compare it with alternative algorithms. We show that the entire fictitious-node mechanism is superfluous for some required security levels. Examination of the results under attack shows that using DCFM's contradiction rules alone achieves the best cost-benefit ratio for networks with and without movement. In terms of packet delivery ratio (PDR), the proposed algorithm achieves 93% for a 50-node static network, stabilizing at 100% for 90 nodes and above. When movement is present, the success rate drops to 67%, which is slightly better than the alternatives examined.
{"title":"Achieving manet protection without the use of superfluous fictitious nodes","authors":"Nadav Schweitzer , Liad Cohen , Tirza Hirst , Amit Dvir , Ariel Stulman","doi":"10.1016/j.comcom.2024.107978","DOIUrl":"10.1016/j.comcom.2024.107978","url":null,"abstract":"<div><div>Mobile ad-hoc networks (<span>manet</span>s) are everywhere. They are the basis for many current technologies (including <span>vanet</span>s, <span>i</span>o<span>t</span>, etc.), and used in multiple domains (including military, disaster zones, etc.). For them to function, routing protocols have been defined, taking into account the high mobility of network nodes. These protocols, however, are vulnerable to devastating attacks. Many solutions have been proposed for various attacks, including <span>dcfm</span> (Denial Contradictions with Fictitious nodes Mechanism) for the node isolation and gray-hole variants. In this work we present a refinement for <span>dcfm</span>, calculate its cost, and compare alternative algorithms. It will be shown that the entire fictitious mechanism is superfluous for some required security level. Examination of the results when under attack show that using <span>dcfm</span>’s contradiction rules alone achieves the best cost-benefit ratio for networks with and without movement. In terms of packet delivery ratio (<span>pdr</span>), however, the proposed algorithm achieves 93% for a 50-node static network, stabilizing on 100% for 90 nodes and above. When movement is present, the success drops to 67%, which is slightly better than the alternatives examined.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"229 ","pages":"Article 107978"},"PeriodicalIF":4.5,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-23, DOI: 10.1016/j.comcom.2024.107981
Dimitrios Zorbas, Aruzhan Sabyrbek
LoRaWAN, a low-power wide-area network (LPWAN) technology, has been successfully used in the Internet of Things (IoT) industry over the last decade. It is an easy-to-use, long-range communication protocol with minimal power consumption. Supporting critical downlink traffic in LoRaWAN networks is crucial for ensuring the reliable and efficient delivery of essential data in certain actuating applications. However, challenges arise when prioritizing critical downlink traffic, including commands, alerts, and emergency notifications that demand immediate attention from actuating devices. This paper explores strategies to improve downlink traffic delivery in LoRaWAN networks, focusing on enhancing reliability, fairness, and energy efficiency through prioritization techniques and network parameter configurations in the EU868 spectrum. Theoretical as well as simulation results provide insights into the effectiveness of the available solutions for supporting critical downlink traffic in LoRaWAN networks.
{"title":"Supporting critical downlink traffic in LoRaWAN","authors":"Dimitrios Zorbas, Aruzhan Sabyrbek","doi":"10.1016/j.comcom.2024.107981","DOIUrl":"10.1016/j.comcom.2024.107981","url":null,"abstract":"<div><div>LoRaWAN, a low-power wide-area network (LPWAN) technology, has been successfully used in the Internet of Things (IoT) industry over the last decade. It is an easy-to-use, long-distance communication protocol combined with minimal power consumption. Supporting critical downlink traffic in LoRaWAN networks is crucial for ensuring the reliable and efficient delivery of essential data in certain actuating applications. However, challenges arise when prioritizing critical downlink traffic, including commands, alerts, and emergency notifications that demand immediate attention from actuating devices. This paper explores strategies to improve downlink traffic delivery in LoRaWAN networks, focusing on enhancing reliability, fairness, and energy efficiency through prioritization techniques and network parameter configurations in the EU868 spectrum. Theoretical as well as simulation results provide insights into the effectiveness of the available solutions for supporting critical downlink traffic in LoRaWAN networks.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107981"},"PeriodicalIF":4.5,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142551551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-23, DOI: 10.1016/j.comcom.2024.107988
Renato Lo Cigno, Stefano Basagni, Paolo Casari
{"title":"Editorial special issue: Extended papers from the 18th wireless on-demand Network Systems and Services “WONS 2023” conference","authors":"Renato Lo Cigno, Stefano Basagni, Paolo Casari","doi":"10.1016/j.comcom.2024.107988","DOIUrl":"10.1016/j.comcom.2024.107988","url":null,"abstract":"","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107988"},"PeriodicalIF":4.5,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142561022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-22, DOI: 10.1016/j.comcom.2024.107984
Marjan Keramati, Sauleh Etemedi, Nasser Mozayani
Smart grid networks offer advantages such as improved reliability, security, and scalability. However, designing an efficient communication infrastructure for smart grid networks is a great challenge because of its dependency on proprietary protocols and specific vendors. The software-defined networking-enabled smart grid (SDN-SG) tackles this problem by incorporating diverse protocols and standards, including open-source platforms. One of the most important problems in Software-Defined Networking (SDN) is the controller placement problem, which is NP-hard in nature. The predominant goal of this paper is therefore to reduce the time complexity by modeling the controller placement problem as a holonic multi-agent system. The hierarchical structure of a holonic organization reduces the computational complexity through a divide-and-conquer mechanism. Such an approach also decreases the synchronization overhead of distributed controllers, which is a known issue in SDN. On the other hand, the proper functioning of the smart grid depends strictly on time-critical services; accordingly, controller placement must be Quality-of-Service-aware (QoS-aware). Moreover, intermittent topology changes in the smart grid and the occasional joining and leaving of members result in unsteady traffic patterns and dynamic controller load. This research pioneers a QoS-aware and dynamic controller placement mechanism for SDN-SG. Experimental results confirm the superiority of the approach over similar ones in terms of computational complexity, packet loss, controller synchronization overhead, and load-balancing overhead.
{"title":"HMLB: Holonic multi-agent approach for preventive controllers load-balancing in SDN-enabled smart grid","authors":"Marjan Keramati, Sauleh Etemedi, Nasser Mozayani","doi":"10.1016/j.comcom.2024.107984","DOIUrl":"10.1016/j.comcom.2024.107984","url":null,"abstract":"<div><div>Smart grid networks present advantages like improving reliability, security, scalability, etc. However, designing an efficient communication infrastructure for smart grid networks is a great challenge. This is because of its dependency on proprietary protocols and specific vendors. Software-defined-enabled smart grid (SDN-SG) tackles this problem by incorporating diverse protocols and standards including open source platforms. One of the most important questions in Software-defined Networking (SDN) is the controller placement problem being NP-Hard in nature. Therefore, the predominant goal of this paper is to diminish the time complexity by modeling the controller placement problem based on the holonic multi-agent system. The hierarchical structure of a holonic organization improves the computational complexity through the divide and conquer mechanism. Such an idea also decreases the distributed controllers' synchronization overhead which is an issue in the realm of SDN. On the other hand, the proper functioning of the smart grid has a strict dependency on time-critical services. Accordingly, the controller placement is supposed to be a Quality of Service-aware (QoS-aware) one. Also, intermittent topology changes in the smart grid and the occasional joining and leaving of members result in an unsteady traffic pattern and dynamicity of controller load. This research is a pioneer in providing a QoS-aware and dynamic controller placement mechanism for SDN-SG. Experimental results certify the preponderance of the approach over similar ones concerning computational complexity, packet loss, controllers’ synchronization overhead, and also load-balancing overhead.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107984"},"PeriodicalIF":4.5,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142551834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-22, DOI: 10.1016/j.comcom.2024.107983
Zheheng Rao, Yanyan Xu, Ye Yao, Weizhi Meng
Mobile-centric wireless networks offer users a diverse range of services and experiences. However, existing intelligent routing methods often struggle to make suitable routing decisions during dynamic network changes, significantly limiting transmission performance. This paper proposes a dynamic adaptive routing method based on Deep Reinforcement Learning (DAR-DRL) to effectively address these challenges. First, to accurately model network state information in complex and dynamically changing routing tasks, we introduce a link-aware graph learning model (LA-GNN) that efficiently senses network information of varying structures through a hierarchical aggregated message-passing neural network. Second, to ensure routing reliability in dynamic environments, we design a hop-by-hop routing strategy featuring a large acceptance domain and a reliability guarantee reward function. This mechanism adaptively avoids routing holes and loops across various network scenarios while enhancing the robustness of routing under dynamic conditions. Experimental results demonstrate that the proposed DAR-DRL method achieves the network routing task with shorter end-to-end delays, lower packet loss rates, and higher throughput compared to existing mainstream methods across common dynamic network scenarios, including cases with dynamic traffic variations, random link failures (small topology changes), and significant topology alterations.
{"title":"DAR-DRL: A dynamic adaptive routing method based on deep reinforcement learning","authors":"Zheheng Rao , Yanyan Xu , Ye Yao , Weizhi Meng","doi":"10.1016/j.comcom.2024.107983","DOIUrl":"10.1016/j.comcom.2024.107983","url":null,"abstract":"<div><div>Mobile-centric wireless networks offer users a diverse range of services and experiences. However, existing intelligent routing methods often struggle to make suitable routing decisions during dynamic network changes, significantly limiting transmission performance. This paper proposes a dynamic adaptive routing method based on Deep Reinforcement Learning (DAR-DRL) to effectively address these challenges. First, to accurately model network state information in complex and dynamically changing routing tasks, we introduce a link-aware graph learning model (LA-GNN) that efficiently senses network information of varying structures through a hierarchical aggregated message-passing neural network. Second, to ensure routing reliability in dynamic environments, we design a hop-by-hop routing strategy featuring a large acceptance domain and a reliability guarantee reward function. This mechanism adaptively avoids routing holes and loops across various network scenarios while enhancing the robustness of routing under dynamic conditions. Experimental results demonstrate that the proposed DAR-DRL method achieves the network routing task with shorter end-to-end delays, lower packet loss rates, and higher throughput compared to existing mainstream methods across common dynamic network scenarios, including cases with dynamic traffic variations, random link failures (small topology changes), and significant topology alterations.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107983"},"PeriodicalIF":4.5,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-21, DOI: 10.1016/j.comcom.2024.107974
Ana Almeida, Pedro Rito, Susana Brás, Filipe Cabral Pinto, Susana Sargento
The demand for more secure, available, reliable, and fast networks grows in an increasingly interconnected society. In this context, 5G networks aim to transform how we communicate and interact. However, studies using 5G data are sparse, since only a small number of 5G datasets are publicly available (especially ones containing commercial 5G network metrics with real users).
In this work, we analyze the data of a commercial 5G deployment with real users and propose forecasting techniques to help understand the trends and to manage 5G networks. We propose a metric to measure the traffic load and forecast it using several machine learning models, choosing LightGBM as the best approach. This approach achieves good accuracy, better than the other machine learning approaches, but its performance decreases when the patterns contain unexpected events. We take advantage of these accuracy drops to detect changes in the patterns and manage the network in real time, supporting network resource elasticity by generating alarms and automating scaling during these unpredictable fluctuations.
Moreover, we introduce mobility data and integrate it with the previously defined traffic load metric, studying their correlation and using the mobility data to predict 5G metrics. We show again that LightGBM is the best model for predicting both types of 5G handovers, intra- and inter-gNB, using mobility information collected by radars on the roads and lanes near the 5G cells.
{"title":"A machine learning approach to forecast 5G metrics in a commercial and operational 5G platform: 5G and mobility","authors":"Ana Almeida , Pedro Rito , Susana Brás , Filipe Cabral Pinto , Susana Sargento","doi":"10.1016/j.comcom.2024.107974","DOIUrl":"10.1016/j.comcom.2024.107974","url":null,"abstract":"<div><div>The demand for more secure, available, reliable, and fast networks emerges in a more interconnected society. In this context, 5G networks aim to transform how we communicate and interact. However, studies using 5G data are sparse since there are only a few number of publicly available 5G datasets (especially about commercial 5G network metrics with real users).</div><div>In this work, we analyze the data of a commercial 5G deployment with real users, and propose forecasting techniques to help understand the trends and to manage 5G networks. We propose the creation of a metric to measure the traffic load. We forecast the metric using several machine learning models, and we choose LightGBM as the best approach. We observe that this approach obtains results with a good accuracy, and better than other machine learning approaches, but its performance decreases if the patterns contain unexpected events. Taking advantage of the lower accuracy in the performance, this is used to detect changes in the patterns and manage the network in real-time, supporting network resource elasticity by generating alarms and automating the scaling during these unpredictable fluctuations.</div><div>Moreover, we introduce mobility data and integrate it with the previously traffic load metric, understanding its correlation and the prediction of 5G metrics through the use of the mobility data. We show again that LightGBM is the best model in predicting both types of 5G handovers, intra- and inter-gNB handovers, using the mobility information through Radars in the several roads, and lanes, near the 5G cells.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107974"},"PeriodicalIF":4.5,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-20, DOI: 10.1016/j.comcom.2024.107979
Ang Deng, Douglas M. Blough
To cope with growing wireless bandwidth demand, millimeter wave (mmWave) communication has been identified as a promising technology to deliver Gbps throughput. However, because mmWave signals are susceptible to blockage, applications can experience significant performance variability as users move around, owing to rapid and large variations in channel conditions. In this context, proactive schedulers that make use of future data rate prediction have the potential to bring a significant performance improvement compared to traditional schedulers. In this work, we explore the possibility of proactive scheduling that uses mobility prediction and some knowledge of the environment to predict future channel conditions. We present both an optimal proactive scheduler, which is based on an integer linear programming formulation and provides an upper bound on proactive scheduling performance, and a greedy heuristic proactive scheduler that is suitable for practical implementation. Extensive simulation results show that proactive scheduling has the potential to increase average user data rate by up to 35% over the classic proportional fair scheduler without any loss of fairness and with only a small increase in jitter. The results also show that the efficient proactive heuristic scheduler achieves from 60% to 75% of the performance gains of the optimal proactive scheduler. Finally, the results show that proactive scheduling performance is sensitive to the quality of mobility prediction and, thus, use of state-of-the-art mobility prediction techniques will be necessary to realize its full potential.
{"title":"Proactive Scheduling for mmWave Wireless LANs","authors":"Ang Deng , Douglas M. Blough","doi":"10.1016/j.comcom.2024.107979","DOIUrl":"10.1016/j.comcom.2024.107979","url":null,"abstract":"<div><div>To cope with growing wireless bandwidth demand, millimeter wave (mmWave) communication has been identified as a promising technology to deliver Gbps throughput. However, due to the susceptibility of mmWave signals to blockage, applications can experience significant performance variability as users move around due to rapid and significant variation in channel conditions. In this context, proactive schedulers that make use of future data rate prediction have potential to bring a significant performance improvement as compared to traditional schedulers. In this work, we explore the possibility of proactive scheduling that uses mobility prediction and some knowledge of the environment to predict future channel conditions. We present both an optimal proactive scheduler, which is based on an integer linear programming formulation and provides an upper bound on proactive scheduling performance, and a greedy heuristic proactive scheduler that is suitable for practical implementation. Extensive simulation results show that proactive scheduling has the potential to increase average user data rate by up to 35% over the classic proportional fair scheduler without any loss of fairness and incurring only a small increase in jitter. The results also show that the efficient proactive heuristic scheduler achieves from 60% to 75% of the performance gains of the optimal proactive scheduler. Finally, the results show that proactive scheduling performance is sensitive to the quality of mobility prediction and, thus, use of state-of-the-art mobility prediction techniques will be necessary to realize its full potential.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107979"},"PeriodicalIF":4.5,"publicationDate":"2024-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-19, DOI: 10.1016/j.comcom.2024.107977
Federica de Trizio, Giancarlo Sciddurlo, Ilaria Cianci, Giuseppe Piro, Gennaro Boggia
For many years, the orchestration of network resources and services has been addressed by considering homogeneous communication infrastructures and simple Service Level Agreements (SLAs), generally defined through a list of traditional Key Performance Indicators (KPIs). Unfortunately, state-of-the-art solutions risk being quite ineffective for future telecommunication systems. Beyond 5G networks, for instance, are emerging as complex and heterogeneous ecosystems where resources belonging to diverse network domains with evolving capabilities can be dynamically exposed to support much more complex and cross-domain services and applications. At the same time, SLAs will be defined by also considering novel performance demands, including security, economic, and environmental needs. Based on these premises, this work proposes a novel orchestration strategy designed to fulfill service requirements expressed through Key Value Indicators (KVIs), while combining the potentials of both Network Digital Twins and Intent-Based Networking. Leveraging insights from Network Digital Twins, multiple service orchestration options are explored to optimize resource utilization. Simultaneously, Intent-Based Networking is adopted to streamline network management via intents, specifying Beyond 5G requirements through KPIs and KVIs. An optimal orchestration scheme has been conceived through a multi-criteria decision-making algorithm and a many-to-many matching game between domains and service requests mapped into intents, aiming to minimize SLA violations over time. The performance of the conceived solution has been investigated through computer simulations in realistic scenarios. The obtained results clearly highlight its effectiveness and demonstrate that it is able to reduce SLA violations (related to latency, throughput, costs, and cyber risk requirements) by up to 22.44% compared to other baseline techniques.
{"title":"Optimizing Key Value Indicators in Intent-Based Networks through Digital Twins aided service orchestration mechanisms","authors":"Federica de Trizio , Giancarlo Sciddurlo , Ilaria Cianci , Giuseppe Piro , Gennaro Boggia","doi":"10.1016/j.comcom.2024.107977","DOIUrl":"10.1016/j.comcom.2024.107977","url":null,"abstract":"<div><div>For many years, the orchestration of network resources and services has been addressed by considering homogeneous communication infrastructures and simple Service Level Agreements (SLAs), generally defined through a list of traditional Key Performance Indicators (KPIs). Unfortunately, state-of-the-art solutions risk being quite ineffective for future telecommunication systems. Beyond 5G networks, for instance, are emerging as complex and heterogeneous ecosystems where resources belonging to diverse network domains with evolving capabilities can be dynamically exposed to support much more complex and cross-domain services and applications. At the same time, SLAs will be defined by also considering novel performance demands, including security, economic, and environmental needs. Based on these premises, this work proposes a novel orchestration strategy designed to fulfill service requirements expressed through Key Value Indicators (KVIs), while combining the potentials of both Network Digital Twins and Intent-Based Networking. Leveraging insights from Network Digital Twins, multiple service orchestration options are explored to optimize resource utilization. Simultaneously, Intent-Based Networking is adopted to streamline network management via intents, specifying Beyond 5G requirements through KPIs and KVIs. An optimal orchestration scheme has been conceived through a multi-criteria decision-making algorithm and a many-to-many matching game between domains and service requests mapped into intents, aiming to minimize SLA violations over time. The performance of the conceived solution has been investigated through computer simulations in realistic scenarios. The obtained results clearly highlight its effectiveness and demonstrate that it is able to reduce SLA violations (related to latency, throughput, costs, and cyber risk requirements) by up to 22.44% compared to other baseline techniques.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107977"},"PeriodicalIF":4.5,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-18, DOI: 10.1016/j.comcom.2024.107975
Shu-Ping Lu, Chin-Laung Lei, Meng-Han Tsai
Proof-of-Authority (PoA) consensus algorithms are widely used in permissioned blockchain networks due to their high throughput, security, and efficiency. However, PoA is susceptible to cloning attacks, where attackers copy the authenticator identity and key, thereby compromising consensus integrity. This study proposes a novel randomized authenticator within the PoA framework to mitigate cloning attacks and solve the leader-selection bottleneck. The main contributions include: 1) introducing unpredictability in leader selection through Verifiable Random Functions (VRFs) to prevent identity duplication; 2) dynamic group management using a hierarchical decentralized architecture of distributed ledgers that balances security and performance; 3) using threshold signatures to avoid a single point of failure among validators; 4) comprehensively analyzing attacks, security, randomness, and availability; and 5) evaluating the efficiency of the randomized authenticator by means of OMNeT++ simulations. By integrating randomness into leader selection and robust consensus design, the approach enables reliable and secure dynamic group management in decentralized networks.
{"title":"An efficient Proof-of-Authority consensus scheme against cloning attacks","authors":"Shu-Ping Lu , Chin-Laung Lei , Meng-Han Tsai","doi":"10.1016/j.comcom.2024.107975","DOIUrl":"10.1016/j.comcom.2024.107975","url":null,"abstract":"<div><div>Proof-of-Authorization (PoA) consensus algorithms are widely used in permissioned blockchain networks due to their high throughput, security, and efficiency. However, PoA is susceptible to cloning attacks, where attackers copy the authenticator identity and key, thereby compromising the consensus integrity. This study proposes a novel randomized authenticator within the PoA framework to mitigate cloning attacks and solve the leader selection bottleneck. The main contributions include 1) Introducing unpredictability in leader selection through Verifiable Random Functions (VRFs) to prevent identity duplication.2) Dynamic group management using a hierarchical decentralized architecture of distributed ledgers that balances security and performance.3) Using threshold signatures to avoid a single point of failure among validators.4) Comprehensively analyzing attacks, security, randomness, and availability.5) Evaluating the effectiveness of a randomized authenticator by means of OMNET++ simulations to assess efficiency. By integrating randomness into leader selection and robust consensus design, the approach enables reliable and secure dynamic group management in decentralized networks.</div></div>","PeriodicalId":55224,"journal":{"name":"Computer Communications","volume":"228 ","pages":"Article 107975"},"PeriodicalIF":4.5,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}