
Latest publications in Computer Communications

A deep dive into cybersecurity solutions for AI-driven IoT-enabled smart cities in advanced communication networks
IF 4.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-11-12 | DOI: 10.1016/j.comcom.2024.108000
Jehad Ali , Sushil Kumar Singh , Weiwei Jiang , Abdulmajeed M. Alenezi , Muhammad Islam , Yousef Ibrahim Daradkeh , Asif Mehmood
The integration of the Internet of Things (IoT) and artificial intelligence (AI) in urban infrastructure, powered by advanced information communication technologies (ICT), has paved the way for smart cities. While these technologies promise enhanced quality of life, economic growth, and improved public services, they also introduce significant cybersecurity challenges. This article comprehensively examines the complex factors in securing AI-driven IoT-enabled smart cities within the framework of future communication networks. Our research addresses critical questions about the evolving threat landscape, multi-layered security approaches, the role of AI in enhancing cybersecurity, and necessary policy frameworks. We conduct an in-depth analysis of cybersecurity solutions across service, application, network, and physical layers, evaluating their effectiveness and integration potential with existing systems. The study offers a detailed examination of AI-driven security approaches, particularly machine learning (ML) and deep learning (DL) techniques, assessing their applicability and limitations in smart city environments. We incorporate real-world case studies to illustrate successful strategies and highlight areas requiring further research, especially considering emerging communication technologies. Our findings contribute to the field by providing a multi-layered classification of cybersecurity solutions, assessing AI-driven security approaches, and exploring future research directions. Additionally, we investigate the essential role played by policy and regulatory frameworks in safeguarding smart city security. Based on our analysis, we offer recommendations for technical implementations and policy development, aiming to create a holistic approach that balances technological advancements with robust security measures.
This study also provides valuable insights for scholars, professionals, and policymakers, offering a comprehensive perspective on the cybersecurity challenges and solutions for AI-driven IoT-enabled smart cities in advanced communication networks.
Computer Communications, vol. 229, Article 108000.
Citations: 0
The pupil outdoes the master: Imperfect demonstration-assisted trust region jamming policy optimization against frequency-hopping spread spectrum
IF 4.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-11-10 | DOI: 10.1016/j.comcom.2024.107993
Ning Rao, Hua Xu, Zisen Qi, Dan Wang, Yue Zhang, Xiang Peng, Lei Jiang
Jamming decision-making is a pivotal component of modern electromagnetic warfare, and recent years have witnessed the extensive application of deep reinforcement learning techniques to enhance the autonomy and intelligence of wireless communication jamming decisions. However, existing research relies heavily on manually designed, customized jamming reward functions, leading to significant consumption of human and computational resources. To this end, without designing task-customized reward functions, we propose a jamming policy optimization method that learns from imperfect demonstrations to effectively address the complex and high-dimensional jamming resource allocation problem against frequency hopping spread spectrum (FHSS) communication systems. To achieve this, a policy network is meticulously architected to consecutively ascertain jamming schemes for each jamming node, facilitating the construction of the dynamic transition within the Markov decision process. Subsequently, anchored in the dual-trust region concept, we design policy improvement and policy adversarial imitation phases. During the policy improvement phase, the trust region policy optimization method is utilized to refine the policy, while the policy adversarial imitation phase employs adversarial training to guide policy exploration using information embedded in demonstrations. Extensive simulation results indicate that our proposed method can approximate the optimal jamming performance trained under customized reward functions, even with rough binary reward settings, and also significantly surpass demonstration performance.
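The core idea of learning from imperfect demonstrations can be sketched minimally: the sparse (here, binary) environment reward is augmented with an imitation signal derived from demonstration actions. The `imitation_bonus` function below is a crude stand-in for the paper's adversarial discriminator, and all names, distances, and the mixing weight `beta` are hypothetical.

```python
def imitation_bonus(action, demo_actions):
    """Crude stand-in for an adversarial discriminator: reward actions
    that lie close to (possibly imperfect) demonstration actions."""
    return max(0.0, 1.0 - min(abs(action - d) for d in demo_actions))

def shaped_reward(env_reward, action, demo_actions, beta=0.5):
    # Mix the rough binary environment reward with the imitation signal,
    # so the policy gets a gradient even where env_reward is 0/1.
    return env_reward + beta * imitation_bonus(action, demo_actions)
```

In the actual method the imitation term is learned adversarially and the mixture is governed by the dual-trust-region scheme; this sketch only shows why demonstrations can substitute for a hand-crafted reward.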
Computer Communications, vol. 229, Article 107993.
Citations: 0
High-performance BFT consensus for Metaverse through block linking and shortcut loop
IF 4.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-11-06 | DOI: 10.1016/j.comcom.2024.107990
Rui Hao , Chaozheng Ding , Xiaohai Dai , Hao Fan , Jianwen Xiang
In recent years, the Metaverse has captured increasing attention. As the foundational technologies for these digital realms, blockchain systems and their critical component – the Byzantine Fault Tolerance (BFT) consensus protocol – significantly influence the performance of Metaverse. Due to vulnerabilities to network attacks, synchronous and partially synchronous consensus protocols often face compromises in their liveness or security. Consequently, recent efforts in BFT consensus have shifted towards asynchronous consensus protocols, notably the Multi-valued Validated Binary Agreement (MVBA) protocols, with sMVBA being particularly prominent. Despite its advances, sMVBA struggles to meet the high-performance demands of Metaverse applications. Each sMVBA instance commits only one block, discarding all others, which severely restricts throughput. Moreover, if a leader in a given view crashes, nodes must rebroadcast blocks in the subsequent view, resulting in increased latency.
To overcome these challenges, this paper introduces Mercury, a protocol designed to enhance throughput under various conditions and reduce latency in less favorable scenarios where leaders crash. Mercury incorporates a mechanism whereby each block contains hashes from blocks of a previous instance, linking blocks across instances. This structure ensures that once a block is committed, all its linked blocks are also committed, thereby boosting throughput. Additionally, Mercury integrates a ‘shortcut loop’ mechanism, allowing nodes to bypass the last phase of the current view and the block broadcasting in the next view, significantly decreasing latency. Our experimental evaluations of Mercury confirm its superior performance. Compared to the cutting-edge protocols, sMVBA, CKPS, and AMS, Mercury boosts throughput by 1.03X, 1.65X, and 2.51X, respectively.
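The block-linking mechanism can be illustrated with a toy structure: each block carries hashes of blocks from the previous instance, so committing one block transitively commits everything it links to. This is an illustrative sketch under assumed data layouts, not Mercury's actual block format or commit rule.

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Block:
    def __init__(self, payload: bytes, prev_hashes=()):
        self.payload = payload
        # Links to blocks broadcast in the previous instance.
        self.prev_hashes = tuple(prev_hashes)
        self.hash = h(payload + "".join(prev_hashes).encode())

def committed_set(block, index):
    """Committing one block transitively commits every linked block.
    `index` maps block hash -> Block for all known blocks."""
    out, stack = set(), [block.hash]
    while stack:
        cur = stack.pop()
        if cur in out or cur not in index:
            continue
        out.add(cur)
        stack.extend(index[cur].prev_hashes)
    return out
```

Usage: if instance A produced blocks `a1`, `a2` and instance B's winner `b1` links both, then committing `b1` also commits `a1` and `a2`, which is why no instance's work is discarded.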
Computer Communications, vol. 229, Article 107990.
Citations: 0
Automating 5G network slice management for industrial applications
IF 4.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-11-04 | DOI: 10.1016/j.comcom.2024.107991
André Perdigão, José Quevedo, Rui L. Aguiar
The transition to Industry 4.0 introduces new use cases with unique communication requirements, demanding wireless technologies capable of dynamically adjusting their performance to meet various demands. Leveraging network slicing, 5G technology offers the flexibility to support such use cases. However, the deployment and use of network slices are complex tasks. To increase the adoption of 5G, there is a need for mechanisms that automate the deployment and management of network slices. This paper introduces a design for a network slice manager capable of such mechanisms in 5G networks. This design adheres to related standards, facilitating interoperability with other software, while also considering the capabilities and limitations of the technology. The proposed design can provision custom slices tailored to meet the unique requirements of verticals, offering communication performance across the spectrum of the three primary 5G services (eMBB, URLLC, and mMTC/mIoT). To assess the proposed design, a Proof-of-Concept (PoC) prototype was developed and evaluated. The evaluation results demonstrate the flexibility of the proposed solution for deploying slices adjusted to the vertical use cases. Additionally, the slices generated by the PoC maintain a high TRL (Technology Readiness Level) equivalent to that of the commercial-grade network used.
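To make the mapping from vertical requirements onto the three primary 5G services concrete, the sketch below picks a Slice/Service Type (SST) value (1=eMBB, 2=URLLC, 3=mMTC, as standardized in 3GPP TS 23.501) from toy thresholds. The `SliceRequest` fields and the threshold values are hypothetical illustrations, not the paper's slice-manager design.

```python
from dataclasses import dataclass

# Standardized SST values (3GPP TS 23.501): 1=eMBB, 2=URLLC, 3=mMTC/MIoT.
SST = {"eMBB": 1, "URLLC": 2, "mMTC": 3}

@dataclass
class SliceRequest:            # hypothetical vertical requirements
    max_latency_ms: float
    min_throughput_mbps: float
    device_density: int        # devices per km^2

def pick_sst(req: SliceRequest) -> int:
    """Toy rule mapping a vertical's requirements to a slice type."""
    if req.max_latency_ms <= 10:          # hard latency bound -> URLLC
        return SST["URLLC"]
    if req.device_density >= 100_000:     # massive device count -> mMTC
        return SST["mMTC"]
    return SST["eMBB"]                    # otherwise broadband
```

A real slice manager would additionally translate the chosen type into concrete core and RAN configuration, which is the automation this paper addresses.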
Computer Communications, vol. 229, Article 107991.
Citations: 0
MDTA: An efficient, scalable and fast Multiple Disjoint Tree Algorithm for dynamic environments
IF 4.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-11-02 | DOI: 10.1016/j.comcom.2024.107989
Diego Lopez-Pajares , Elisa Rojas , Mankamana Prasad Mishra , Parveen Jindgar , Joaquin Alvarez-Horcajo , Nicolas Manso , Jonathan Desmarais
Emerging applications such as telemedicine, the tactile Internet or live streaming place high demands on low latency to ensure a satisfactory Quality of Experience (QoE). In these scenarios, the use of trees can be particularly attractive for efficiently delivering traffic to groups of users, because trees further enhance network performance by providing redundancy and fault tolerance, ensuring service continuity when network failures or congestion occur. Furthermore, if trees are isolated from each other (they do not share common communication elements such as links and/or nodes), their benefits are further enhanced, since events such as failures or congestion in one tree do not affect others. However, the challenge of computing fully disjoint trees (both link- and node-disjoint) introduces significant mathematical complexity, resulting in longer computation times, which negatively impacts latency-sensitive applications.
In this article, we propose a novel algorithm designed to rapidly compute multiple fully (either link- or node-) disjoint trees while maintaining efficiency and scalability, specifically targeting the low-latency requirements of emerging services and applications. The proposed algorithm addresses the complexity of ensuring disjointness between trees without sacrificing performance. Our solution has been tested in a variety of network environments, including both wired and wireless scenarios.
The results showcase that our proposed method is approximately 100 times faster than existing techniques, while achieving a comparable success rate in terms of number of obtained disjoint trees. This significant improvement in computational speed makes our approach highly suitable for the low-latency requirements of next-generation networks.
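To make the notion of link-disjoint trees concrete, here is a simple greedy baseline: build a BFS tree from the root, remove its links from the graph, and repeat. This peeling heuristic is not the MDTA algorithm itself and may find fewer disjoint trees than actually exist; it only illustrates the object being computed.

```python
from collections import deque

def bfs_tree(adj, root):
    """Return the set of (parent, child) edges of a BFS tree from root."""
    seen, tree, q = {root}, set(), deque([root])
    while q:
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                tree.add((u, v))
                q.append(v)
    return tree

def link_disjoint_trees(edges, root, k):
    """Greedily peel up to k link-disjoint spanning trees from an
    undirected graph, removing each tree's links before the next pass."""
    edges = set(edges) | {(v, u) for u, v in edges}  # symmetrize
    n_nodes = len({x for e in edges for x in e})
    trees = []
    for _ in range(k):
        adj = {}
        for u, v in edges:
            adj.setdefault(u, []).append(v)
        t = bfs_tree(adj, root)
        if len(t) < n_nodes - 1:      # tree no longer spans every node
            break
        trees.append(t)
        for u, v in t:                # consume both directions of each link
            edges.discard((u, v))
            edges.discard((v, u))
    return trees
```

On the complete graph K4, for example, the first BFS tree from node 0 is the star {(0,1),(0,2),(0,3)}, after which the root is disconnected, showing why naive peeling underperforms dedicated disjoint-tree algorithms.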
Computer Communications, vol. 229, Article 107989.
Citations: 0
Safe load balancing in software-defined-networking
IF 4.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-29 | DOI: 10.1016/j.comcom.2024.107985
Lam Dinh, Pham Tran Anh Quang, Jérémie Leguay
High performance, reliability and safety are crucial properties of any Software-Defined-Networking (SDN) system. Although the use of Deep Reinforcement Learning (DRL) algorithms has been widely studied to improve performance, their practical applications are still limited as they fail to ensure safe operations in exploration and decision-making. To fill this gap, we explore the design of a Control Barrier Function (CBF) on top of Deep Reinforcement Learning (DRL) algorithms for load-balancing. We show that our DRL-CBF approach is capable of meeting safety requirements during training and testing while achieving near-optimal performance in testing. We provide results using two simulators: a flow-based simulator, which is used for proof-of-concept and benchmarking, and a packet-based simulator that implements real protocols and scheduling. Thanks to the flow-based simulator, we compared the performance against the optimal policy, solving a Nonlinear Programming (NLP) problem with the SCIP solver. Furthermore, we showed that pre-trained models in the flow-based simulator, which is faster, can be transferred to the packet simulator, which is slower but more accurate, with some fine-tuning. Overall, the results suggest that near-optimal Quality-of-Service (QoS) performance in terms of end-to-end delay can be achieved while safety requirements related to link capacity constraints are guaranteed. In the packet-based simulator, we also show that our DRL-CBF algorithms outperform non-RL baseline algorithms.
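The safety-filter idea can be sketched as a projection: the DRL agent proposes traffic split ratios over candidate paths, and a filter caps each path at its remaining link headroom, redistributing the excess to paths with spare capacity. This is a simplified stand-in for a Control Barrier Function with hypothetical inputs, not the paper's formulation.

```python
def safe_split(ratios, headroom, demand):
    """Project a proposed traffic split (ratios summing to 1) onto the
    safe set where no path carries more than its remaining capacity.
    `headroom` is the spare capacity per path; `demand` is total traffic."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    send = [r * demand for r in ratios]
    for _ in range(len(send)):
        excess = sum(max(0.0, s - h) for s, h in zip(send, headroom))
        if excess <= 1e-12:
            break                                   # already safe
        send = [min(s, h) for s, h in zip(send, headroom)]
        free = [h - s for s, h in zip(send, headroom)]
        total_free = sum(free)
        if total_free <= 0:
            break                                   # demand exceeds capacity
        # Redistribute the clipped excess proportionally to spare capacity.
        send = [s + excess * f / total_free for s, f in zip(send, free)]
    return send
```

For example, splitting 40 units 50/50 over paths with headroom 10 and 100 violates the first path's constraint; the filter shifts the overflow so the realized split becomes 10/30.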
Computer Communications, vol. 229, Article 107985.
Citations: 0
A hierarchical adaptive federated reinforcement learning for efficient resource allocation and task scheduling in hierarchical IoT network
IF 4.5 | CAS Tier 3, Computer Science | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-10-29 | DOI: 10.1016/j.comcom.2024.107969
A.S.M. Sharifuzzaman Sagar, Amir Haider, Hyung Seok Kim
The increasing demand for processing the vast amounts of data produced by IoT devices in hierarchical IoT networks has driven researchers to propose methods for allocating edge-host resources efficiently. Traditional approaches often compromise on one of these aspects: either prioritizing local decision-making at the edge, which lacks global system insights, or centralizing decisions in cloud systems, which raises privacy concerns. Additionally, most solutions do not consider scheduling tasks at the same time to effectively complete the prioritized task accordingly. This study introduces the hierarchical adaptive federated reinforcement learning (HAFedRL) framework for robust resource allocation and task scheduling in hierarchical IoT networks. At the local edge host level, a primal–dual update based deep deterministic policy gradient (DDPG) method is introduced for effective individual task resource allocation and scheduling. Concurrently, the central server utilizes an adaptive multi-objective policy gradient (AMOPG) which integrates a multi-objective policy adaptation (MOPA) with a dynamic federated reward aggregation (DFRA) method to allocate resources across connected edge hosts. An adaptive learning rate modulation (ALRM) is proposed for faster convergence and to ensure high-performance output from HAFedRL. Our proposed HAFedRL enables the effective integration of reward from edge hosts, ensuring the alignment of local and global optimization goals.
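As a hedged sketch of the reward-aggregation step only (not the full HAFedRL pipeline), the central server below combines per-edge-host rewards into one global signal using softmax weights; weighting hosts by their own recent reward is an assumed rule for illustration, not the paper's DFRA definition.

```python
import math

def aggregate_rewards(local_rewards, weights=None):
    """Combine per-edge-host rewards into a single global reward.
    Hosts are weighted by softmax(weights); by default, each host's
    own recent reward serves as its weight (an illustrative choice)."""
    if weights is None:
        weights = local_rewards
    exps = [math.exp(w) for w in weights]
    z = sum(exps)
    return sum(r * e / z for r, e in zip(local_rewards, exps))
```

With equal rewards the aggregate reduces to the common value, while unequal rewards pull the aggregate toward better-performing hosts, which is the intuition behind adaptive (rather than uniform) federated aggregation.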
Citations: 0
5G core network control plane: Network security challenges and solution requirements
IF 4.5 · CAS Tier 3, Computer Science · JCR Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-28 · DOI: 10.1016/j.comcom.2024.107982
Rajendra Patil , Zixu Tian , Mohan Gurusamy , Joshua McCloud
The control plane of the 5G Core Network (5GCN) is essential for ensuring reliable and high-performance 5G communication. It provides critical network services such as authentication, user credentials, and privacy-sensitive signaling. However, the security threat landscape of the 5GCN control plane has expanded considerably, and the control plane faces serious security threats from various sources and interfaces. In this paper, we analyze the new features and vulnerabilities of the 5GCN service-based architecture (SBA), with a focus on the control plane. We investigate the network threat surface of the 5GCN and outline potential vulnerabilities in the control plane. We develop a threat model to illustrate the potential threat sources, vulnerable interfaces, possible threats, and their impacts. We provide a comprehensive survey of existing security solutions, identify their challenges, and propose solution requirements to address the network security challenges in the control plane of the 5GCN and beyond.
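A threat model of the kind described — threat sources, vulnerable interfaces, possible threats, and impacts — can be organized as a simple structured catalog. The sketch below is illustrative only: the `Threat` dataclass and the example entries are assumptions, not the paper's model (SBI and N32 are real 5GC interface names, used here purely as examples).

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Threat:
    source: str       # threat source, e.g. a compromised network function
    interface: str    # exposed interface, e.g. "SBI" or "N32"
    description: str  # what the attack does
    impact: str       # e.g. "denial of service", "information disclosure"


# Illustrative entries only -- not taken from the paper's threat model.
EXAMPLE_THREATS = [
    Threat("compromised NF", "SBI",
           "replay of OAuth access tokens against the NRF",
           "unauthorized API access"),
    Threat("rogue roaming partner", "N32",
           "malformed inter-PLMN signaling",
           "denial of service"),
]


def threats_by_interface(catalog, interface):
    """Filter the catalog to the threats exposed on a given interface."""
    return [t for t in catalog if t.interface == interface]
```

Grouping threats per interface in this way mirrors the paper's mapping from threat sources to vulnerable interfaces and impacts.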
Citations: 0
Assessing the Impact of the Burst Size in the FTM Ranging Procedure in COTS Wi-Fi Devices
IF 4.5 · CAS Tier 3, Computer Science · JCR Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-28 · DOI: 10.1016/j.comcom.2024.107980
Enrica Zola, Israel Martin-Escalona
The wide availability of Wi-Fi infrastructure, together with the recent integration of the IEEE 802.11mc capability in commercial off-the-shelf (COTS) devices, has increased the research community's interest in the fine time measurement (FTM) technique, which allows two Wi-Fi devices to estimate the distance between each other. However, one of the main issues yet to be solved is how FTM scales as a growing number of Wi-Fi devices inject location-specific traffic into the shared medium. While the recently released IEEE 802.11az standard will still take time to reach COTS devices, this paper assesses the impact of the burst size on the ranging performance of COTS Wi-Fi devices running Android 12. Increasing the burst size is expected to bring higher stability to the observed distance; on the other hand, a longer transmission period for location-only purposes may jeopardize the transmission of data traffic among Wi-Fi users. Several models of smartphones and APs, and different frequency bands, are considered in order to evaluate the behavior of the FTM procedure in real devices, showing that the newest or most expensive devices do not always perform better. It is also shown that using the minimum burst size significantly degrades performance and is therefore not recommended. Bursts larger than 8 may not always be supported by all models and/or frequency bands, and the small improvements in ranging estimation they provide do not always justify the increased location traffic injected into the network.
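The ranging principle behind these measurements: each FTM exchange yields four timestamps from which round-trip time, and hence distance, follows; a burst is simply several such exchanges averaged. A minimal sketch under the IEEE 802.11mc timing convention (t1: responder transmits the FTM frame, t2: initiator receives it, t3: initiator transmits the ACK, t4: responder receives the ACK); the `burst_distance` helper is an assumption added to illustrate why larger bursts stabilize the estimate.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s


def ftm_distance(t1, t2, t3, t4):
    """Distance estimate from a single FTM exchange.

    The round-trip time excludes the initiator's turnaround
    delay (t3 - t2), so only propagation time remains.
    """
    rtt = (t4 - t1) - (t3 - t2)
    return C * rtt / 2.0


def burst_distance(exchanges):
    """Average distance over a burst of FTM exchanges.

    Larger bursts average out per-exchange timing noise, which is
    why increasing the burst size tends to stabilize the estimate.
    """
    return sum(ftm_distance(*e) for e in exchanges) / len(exchanges)
```

For a 10 m separation the one-way propagation time is only about 33 ns, which is why per-exchange timing noise dominates and burst averaging matters.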
Citations: 0
Communication-efficient heterogeneous multi-UAV task allocation based on clustering
IF 4.5 · CAS Tier 3, Computer Science · JCR Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-28 · DOI: 10.1016/j.comcom.2024.107986
Na Dong, Shuai Liu, Xiaoming Mai
A heterogeneous unmanned aerial vehicle (UAV) system aims to achieve higher-level task coordination and execution by integrating UAVs of different types, functionalities, and scales. When addressing diverse and complex task requirements, allocation algorithms for decentralized multi-UAV systems often suffer from communication redundancy, leading to excessive communication overhead. This paper proposes a clustering-based Consensus-Based Bundle Algorithm (Clustering-CBBA), which introduces a novel bundle construction, an improved consensus strategy, and a distance-based UAV grouping approach. Specifically, using the k-means++ method with distance factors, UAVs are first partitioned into clusters, breaking the large-scale problem down into smaller ones. The first UAV in each cluster is then designated as the leader UAV. The proposed algorithm handles multi-UAV tasks by improving the task-bundle construction method and the consensus algorithm. In addition, intra-cluster UAVs employ an internal conflict resolution method to gather the latest information, while inter-cluster UAVs use an external conflict resolution method to ensure conflict-free task allocation, continuing until the algorithm converges. Experimental results demonstrate that, compared to DMCHBA, G-CBBA, and baseline CBBA, the proposed method significantly reduces communication overhead across different task scales and UAV quantities, while maintaining strong performance in task completion and global task reward, showcasing higher efficiency and practicality.
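The distance-based grouping step can be sketched as a plain k-means++ initialization followed by Lloyd iterations over UAV positions. This is a generic implementation under stated assumptions (2-D positions, Euclidean distance, pure stdlib), not the paper's Clustering-CBBA; leader designation and the CBBA bundle/consensus logic are omitted.

```python
import random


def _dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2


def _kmeans_pp_init(points, k, rng):
    """k-means++ seeding: pick centers with probability proportional
    to the squared distance from the nearest already-chosen center."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min(_dist2(p, c) for c in centers) for p in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers


def cluster_uavs(points, k, iters=20, seed=0):
    """Partition UAV positions into k clusters (k-means++ + Lloyd)."""
    rng = random.Random(seed)
    centers = _kmeans_pp_init(points, k, rng)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: _dist2(p, centers[j]))
            groups[i].append(p)
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return groups, centers
```

Each resulting group would then run its own local CBBA instance, with one member per cluster acting as leader toward the central coordinator.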
Citations: 0