
Latest publications in Ad Hoc Networks

Generalized stochastic Petri net-based performance analysis of a Wi-Fi network probe in a dynamic QoX management system
IF 4.4 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-11 · DOI: 10.1016/j.adhoc.2024.103683
Luis Zabala, Leire Cristobo, Eva Ibarrola, Armando Ferro
Over the years, the concept of Quality of Service (QoS) has evolved from traditional network performance metrics to include Quality of Experience (QoE) considerations. This evolution also encompasses various business-related aspects, such as the impact of service quality on customer satisfaction, the alignment of service offerings with market demands, and the optimization of resource allocation to ensure cost-effectiveness and competitive advantage. This comprehensive approach, considering all the QoS dimensions (QoX), ensures the proper management of QoS across different services, contexts and technologies. Building on this broader QoX framework, it is essential to rely on advanced monitoring tools capable of handling the complexity introduced by these new demands. In this context, this paper describes a Generalized Stochastic Petri Net (GSPN)-based model to analyze the performance of a Wi-Fi network probe in terms of computational capacity. The probe node plays a crucial role in a distributed monitoring system designed to implement a machine learning-based global QoX management framework. Hence, the model assesses the probe's computational capacity to handle supplementary machine learning tasks alongside its typical packet capture and data processing responsibilities. Additionally, the model can evaluate the efficiency of the probe node under different scenarios, providing valuable insight into the potential need for additional resources at the node as operational demands continue to evolve.
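As an illustrative aside, the GSPN machinery the abstract relies on can be sketched with a tiny continuous-time simulation. The single-place model, rates, and buffer capacity below are stand-ins chosen for the example, not the authors' actual probe model; with arrival rate 5 and service rate 8, the time-averaged backlog should approach the M/M/1 value λ/(μ−λ) ≈ 1.67.

```python
import random

def simulate_probe_gspn(lam=5.0, mu=8.0, cap=50, t_end=20000.0, seed=7):
    """Toy GSPN fragment for a probe node: one place ('buffer') holding
    captured-packet tokens, and two timed transitions with exponentially
    distributed firing delays -- 'arrive' (rate lam, enabled while the
    buffer is below capacity) and 'process' (rate mu, enabled while the
    buffer holds tokens). Under race semantics the transition with the
    shortest sampled delay fires. Returns the time-averaged token count."""
    rng = random.Random(seed)
    t, tokens, area = 0.0, 0, 0.0
    while t < t_end:
        enabled = []
        if tokens < cap:
            enabled.append(("arrive", lam))
        if tokens > 0:
            enabled.append(("process", mu))
        total = sum(rate for _, rate in enabled)
        dt = rng.expovariate(total)            # sojourn in the current marking
        area += tokens * dt                    # time-weighted token count
        t += dt
        pick = rng.random() * total            # which transition won the race
        name = enabled[0][0] if pick < enabled[0][1] else enabled[1][0]
        tokens += 1 if name == "arrive" else -1
    return area / t
```

With a long horizon the estimate is stable enough to check against the analytical queue length.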
Citations: 0
DDQN-based online computation offloading and application caching for dynamic edge computing service management
IF 4.4 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-11 · DOI: 10.1016/j.adhoc.2024.103681
Shudong Wang, Zhi Lu, Haiyuan Gui, Xiao He, Shengzhe Zhao, Zixuan Fan, Yanxiang Zhang, Shanchen Pang
Multi-access Edge Computing (MEC) reduces task service latency and energy consumption by offloading computing tasks to MEC servers. However, constrained by limited bandwidth and computing resources, MEC servers often cannot process all computing tasks in parallel. At the same time, the highly dynamic nature of service popularity requires MEC servers to dynamically update cached applications while complying with storage resource constraints and the service provider's budget for system cache-update costs. In response to these two issues, this paper first formulates computation offloading and application caching as a dual-timescale decision optimization problem, aiming to minimize the average service latency for users by obtaining the optimal offloading decision, cache decision, transmission bandwidth, and computing resource allocation. Then, we propose a Deep Reinforcement Learning (DRL)-based two-stage online computation offloading and application caching (DTSO2C) algorithm, which effectively stabilizes application cache-update costs and enhances Quality of Service (QoS) for users. Furthermore, we utilize convex optimization algorithms to derive the optimal communication bandwidth and computing resource allocation strategy, further reducing the average service latency for users. Simulation results demonstrate that the DTSO2C algorithm outperforms the compared algorithms, achieving an average reduction in service latency of 66.2% with an average cache-update cost of only 0.15 USD per time frame.
Citations: 0
New routing method based on sticky bacteria algorithm and link stability for VANET
IF 4.4 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-10 · DOI: 10.1016/j.adhoc.2024.103682
Jie Zhang , Lei Zhang , De-gan Zhang , Ting Zhang , Shuo Wang , Cheng-hui Zou
With the rapid development of telematics, the role of Mobile Edge Computing (MEC) is becoming increasingly significant. Mobile users can obtain massive computing and storage resources locally, effectively alleviating congestion in the core network. However, in typical urban edge-network scenarios of VANET (Vehicular Ad-hoc Network), low node density and poor connectivity leave few chances to encounter suitable forwarding nodes. To avoid this situation and adapt to sparse scenarios, we propose in this paper a new link-stability-based routing method and protocol built on a sticky bacteria algorithm. The key ideas of the proposed algorithm are as follows: five factors affecting the routing decision are identified (evaluation distance, deflection angle, number of neighboring nodes, rate difference, and the traffic of the road section where the node is located); these five factors are then evaluated comprehensively with our Analytic Hierarchy Process (AHP) strategy to score each node and the candidate routing paths; finally, the best routing path between two communicating nodes is found through the strong global exploration capability of the sticky bacteria algorithm. Our experiments, based on C++ programs we developed, show that the proposed method offers stronger link stability and the highest packet delivery rate, strengthening the method's credibility in the field of ad hoc networks.
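The AHP step described above can be sketched independently of the paper: given a pairwise-comparison matrix over the five factors, the factor weights are the principal eigenvector of the matrix, obtainable by power iteration. The matrix used in the check below is a made-up consistent example, not the authors' comparison data.

```python
def ahp_weights(M, iters=200):
    """Principal-eigenvector weights of a pairwise-comparison matrix M,
    where M[i][j] expresses how much more important factor i is than
    factor j. Computed by normalized power iteration."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]   # renormalize so the weights sum to 1
    return w
```

For a perfectly consistent matrix (M[i][j] = w_i / w_j) a single iteration already recovers the exact weights; for real survey data the iteration converges to the principal eigenvector.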
Citations: 0
Age of information optimal UAV swarm-assisted sweep coverage in wireless sensor networks
IF 4.4 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-07 · DOI: 10.1016/j.adhoc.2024.103675
Li Li, Hongbin Chen
For sweep coverage in wireless sensor networks (WSNs), the freshness of data directly affects the efficiency of task execution, and time is needed to continuously cover points of interest (POIs) to ensure that all the data are obtained. However, existing studies have ignored both aspects. Outdated or missing data may lead to decision-making errors, resulting in significant losses. To address this issue, this paper proposes a simultaneous sweep mode and a batch sweep mode for an Unmanned Aerial Vehicle (UAV) swarm to achieve sweep coverage in WSNs, considering both the freshness of data and the continuous coverage time of POIs, where the age of information (AoI) is adopted to measure data freshness. The objective is to minimize the average AoI of the POIs under the continuous coverage time constraint and the constraints of the UAV swarm. First, the POIs are clustered to obtain the best sweep points. Then, the UAV swarm sweep coverage (USSC) algorithm is designed for the two sweep modes. Finally, various simulations are conducted to verify the performance of the USSC algorithm. Simulation results show that the USSC algorithm can effectively minimize the average AoI compared to baseline algorithms. The script of the proposed algorithm can be found at: https://github.com/lilibeat/USSC.
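The sawtooth behavior of AoI under periodic sweeps can be checked in a few lines: if a POI is visited every T time units and its age resets on each visit, the time-averaged AoI is T/2. The helper below is a generic illustration (not the USSC algorithm) and assumes instantaneous delivery and a fresh start at time 0.

```python
def average_aoi(visit_times, horizon):
    """Time-average age of information of one POI whose age resets to zero
    at each UAV visit and grows linearly in between (a sawtooth curve)."""
    area, last, age0 = 0.0, 0.0, 0.0
    for t in sorted(visit_times):
        dt = t - last
        area += age0 * dt + dt * dt / 2.0   # integral of a linear ramp
        age0, last = 0.0, t                 # visit: age drops to zero
    dt = horizon - last                     # tail interval after the last visit
    area += age0 * dt + dt * dt / 2.0
    return area / horizon
```

Visits every 10 time units over a horizon of 100 give an average AoI of exactly 5, matching the closed form T/2.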
Citations: 0
A cache-aware congestion control mechanism using deep reinforcement learning for wireless sensor networks
IF 4.4 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-04 · DOI: 10.1016/j.adhoc.2024.103678
Melchizedek Alipio , Miroslav Bures
In Wireless Sensor Network (WSN) communication protocols, rule-based approaches have traditionally been used for managing caching and congestion control. These approaches rely on explicitly defined, unchanging models. Recently, the trend has been toward incorporating adaptive methods that leverage machine learning (ML), including its subset deep learning (DL), under network congestion conditions. However, an adaptive cache-aware congestion control mechanism using Deep Reinforcement Learning (DRL) in WSNs has not yet been explored. Therefore, this study developed a DRL-based adaptive cache-aware congestion control mechanism, called DRL-CaCC, to alleviate congestion in WSNs. DRL-CaCC uses intermediate caching parameters as its state space and adaptive congestion window movements as its action space, driven by the Rapid Start and DRL algorithms. The mechanism aims to find the optimal congestion window movement that avoids further network congestion while ensuring maximum cache utilization. Results show that DRL-CaCC achieved an average improvement gain between 20% and 40% over its baseline protocol, RT-CaCC. Finally, DRL-CaCC outperformed other caching-based and DRL-based congestion control protocols in terms of cache utilization, throughput, end-to-end delay, and packet loss, with improvement gains between 10% and 30% across various congestion scenarios in WSNs.
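As a toy stand-in for the agent's decision problem, congestion-window movement can be posed as a small Markov decision process and solved exactly. The five occupancy levels, the target level, and the reward below are illustrative assumptions, not DRL-CaCC's actual state space; the point is that a learned policy should converge to the same shrink/hold/grow mapping that value iteration recovers here.

```python
def cwnd_policy(n_states=5, target=2, gamma=0.9, sweeps=200):
    """Deterministic toy MDP: states are cache-occupancy levels
    0..n_states-1; actions shrink/hold/grow the congestion window,
    which here moves occupancy by -1/0/+1 (clamped); reward is 1 only
    when the next occupancy hits the target. Value iteration yields the
    optimal action for each state."""
    actions = (-1, 0, +1)

    def step(s, a):
        s2 = min(max(s + a, 0), n_states - 1)
        return s2, (1.0 if s2 == target else 0.0)

    V = [0.0] * n_states
    for _ in range(sweeps):
        V = [max(r + gamma * V[s2] for s2, r in (step(s, a) for a in actions))
             for s in range(n_states)]

    def best(s):
        return max(actions, key=lambda a: step(s, a)[1] + gamma * V[step(s, a)[0]])
    return best
```

At full occupancy the optimal action is to shrink the window, at empty occupancy to grow it, and at the target to hold, mirroring the cache-aware behavior the abstract describes.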
Citations: 0
AI-optimized elliptic curve with Certificate-Less Digital Signature for zero trust maritime security
IF 4.4 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-03 · DOI: 10.1016/j.adhoc.2024.103669
Mohammed Al-Khalidi , Rabab Al-Zaidi , Tarek Ali , Safiullah Khan , Ali Kashif Bashir
The proliferation of sensory applications has led to the development of the Internet of Things (IoT), which extends connectivity beyond traditional computing platforms and connects all kinds of everyday objects. Marine Ad Hoc Networks are expected to be an essential part of this connected world, forming the Internet of Marine Things (IoMaT). However, marine IoT systems are often highly distributed and spread across large, sparse areas, which makes it challenging to implement and manage centralized security measures. Despite ongoing efforts to establish network connectivity in such environments, securing these networks remains an unmet goal. The use of Certificate-Less Digital Signatures (CLDS) with Elliptic Curve Cryptography (ECC) shows great promise for providing secure communication in these networks and achieving zero trust IoMaT security. By eliminating the need for certificates and the associated key management infrastructure, CLDS simplifies the key management process. ECC also enables secure communication with smaller key sizes and faster processing times, which is crucial for resource-limited IoMaT devices. In this paper, we introduce CLDS using ECC as a means of securing IoT networks in a marine environment, creating a zero trust security framework for the Internet of Marine Things (IoMaT). To increase the security and robustness of the framework, we optimize the ECC parameters using two artificial intelligence algorithms, namely the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). Evaluation results demonstrate a reduction in ECC parameter generation time of over 40% with GA optimization and 20% with PSO optimization. Additionally, the computational cost and memory usage of major ECC attacks increased significantly: by up to 40% and 67% for Rho attacks, 34% and 53% for brute-force attacks, and 30% and 67% for improved hybrid attacks, respectively.
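A generic GA loop of the kind used for such parameter tuning can be sketched as follows. The 6-bit search space and the quadratic stand-in fitness are illustrative only and bear no relation to real ECC security metrics; a real run would score full curve-parameter candidates instead.

```python
import random

def ga_search(fitness, bits=6, pop_size=30, gens=80, seed=3):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, per-bit mutation, and elitism, maximizing `fitness`
    over bit-string genomes decoded to integers."""
    rng = random.Random(seed)
    decode = lambda g: int("".join(map(str, g)), 2)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda g: fitness(decode(g)), reverse=True)
        nxt = scored[:2]                                   # elitism: keep best two
        while len(nxt) < pop_size:
            p1, p2 = (max(rng.sample(scored, 3),           # size-3 tournament
                          key=lambda g: fitness(decode(g))) for _ in range(2))
            cut = rng.randrange(1, bits)
            child = p1[:cut] + p2[cut:]                    # one-point crossover
            child = [(1 - b) if rng.random() < 0.05 else b for b in child]
            nxt.append(child)
        pop = nxt
    return decode(max(pop, key=lambda g: fitness(decode(g))))
```

On the toy fitness below the search should land near the optimum of the 64-value space.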
Citations: 0
Residual multiscale attention based modulated convolutional neural network for radio link failure prediction in 5G
IF 4.4 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-01 · DOI: 10.1016/j.adhoc.2024.103679
Ranjitham Govindasamy , Sathish Kumar Nagarajan , Jamuna Rani Muthu , M. Ramkumar
In the 5G environment, Radio Access Networks (RANs) are integral components, comprising radio base stations that communicate through wireless radio links. However, this communication is susceptible to environmental variations, particularly weather conditions, leading to potential radio link failures that disrupt services. To address this, proactive failure prediction and resource allocation adjustments become crucial. Existing approaches neglect the relationship between weather changes and radio communication and, despite their effectiveness in predicting radio link failures one day ahead, lack a holistic view. Therefore, the Dynamic Arithmetic Residual Multiscale attention-based Modulated Convolutional Neural Network (DARMMCNN) is proposed. This model considers radio link data and weather changes as key metrics for predicting link failures. Notably, the proposed approach extends the prediction span to 5 days, surpassing the limitations of existing one-day prediction methods. Input data are collected from the Radio Link Failure (RLF) prediction dataset. Distance correlation and noise elimination are then used to improve the quality and relevance of the data. Next, the sooty tern optimization algorithm is used to select the features that contribute to link failures, and a multiscale residual attention modulated convolutional neural network is applied for RLF prediction, with a dynamic arithmetic optimization algorithm tuning the network's weight parameters. The proposed work achieves 79.03% precision, 65.93% recall, and a 67.51% F1-score, outperforming existing techniques. The analysis shows that the proposed scheme is appropriate for RLF prediction.
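For reference, the reported precision, recall, and F1-score are related by the standard confusion-matrix formulas, sketched below; the counts in the check are made-up example values, not the paper's results.

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from confusion-matrix counts:
    tp = true positives, fp = false positives, fn = false negatives."""
    precision = tp / (tp + fp)            # fraction of predicted failures that were real
    recall = tp / (tp + fn)               # fraction of real failures that were predicted
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1
```

Note that, as in the paper's numbers, F1 always falls between precision and recall, closer to the smaller of the two.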
引用次数: 0
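The abstract above reports precision, recall, and F1-score as separate figures. For reference, the three metrics are tied together through the confusion-matrix counts; a minimal sketch, with counts that are purely illustrative and not taken from the paper:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Derive precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)   # fraction of predicted failures that were real
    recall = tp / (tp + fn)      # fraction of real failures that were caught
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return precision, recall, f1

# Illustrative counts for a binary link-failure classifier (not from the paper).
p, r, f1 = precision_recall_f1(tp=80, fp=20, fn=40)
print(f"precision={p:.4f} recall={r:.4f} f1={f1:.4f}")
# precision=0.8000 recall=0.6667 f1=0.7273
```

When the reported figures are macro-averaged per class, as is common for imbalanced failure data, the aggregate F1 need not equal the harmonic mean of the aggregate precision and recall.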
Energy-efficient hierarchical cluster-based routing strategies for Internet of Nano-Things: Algorithms design and experimental evaluations 纳米物联网的高能效分层集群路由策略:算法设计与实验评估
IF 4.4 3区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-09-28 DOI: 10.1016/j.adhoc.2024.103673
Emre Sahin, Orhan Dagdeviren, Mustafa Alper Akkas
Nanodevices (NDs), which are only a few nanometers (nm) in size, need to communicate with each other to perform complex operations. In nanonetworks, this communication typically involves multiple hops, requiring efficient routing protocols. Existing protocols are not well suited to nanonetworks due to their high resource consumption and setup overhead. In this paper, we propose three novel routing protocols for nanodevices. Non-Back Flooding Routing (NBFR) and Layer-Based Flooding Routing (LBFR) aim to reduce unnecessary packet transmissions by utilizing distance and layer information derived from received signal power. Tree-Based Forwarding Routing (TBFR), in contrast, is a unicast-based approach that transmits each packet to its destination over the shortest and most reliable path available through a tree structure. The performance of the proposed methods is compared with well-known methods in terms of packet transmission, energy consumption, end-to-end delay, and setup overhead. TBFR achieved a packet transmission success of 92.95% in the topology with the highest density of nanorouters (NRs), rising to 99.57% with fewer nanorouters. Moreover, its end-to-end delay values are much lower than those of multi-path routing protocols, and it consumed one-fifth of the energy of its strongest multi-path competitor, NBFR, at comparable packet transmission success. However, for dense nanosensor (NS) topologies, NBFR and LBFR achieved higher packet transmission rates of 87.04% and 86.66%, respectively. Furthermore, in addition to achieving low end-to-end delays, the energy consumption of NBFR is very close to that of TBFR. In summary, the tests show that TBFR is better suited to communication among nanorouters, at the cost of a slightly higher setup overhead for building the tree structure, whereas NBFR and LBFR are better suited to communication between nanosensors because of their simplicity and low setup overhead. Note, however, that NBFR requires a larger packet header than the alternatives.
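The contrast the abstract draws between flooding-based delivery (NBFR, LBFR) and tree-based unicast forwarding (TBFR) can be illustrated by counting transmissions on a toy topology. This is a rough sketch of the general flooding-versus-tree trade-off, not the authors' protocols: the graph, the node count, and the plain BFS tree construction are all illustrative assumptions.

```python
from collections import deque

def bfs_tree(adj, root):
    """Build parent pointers toward `root` with BFS (a shortest-hop tree)."""
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

def flood_transmissions(adj, src):
    """Naive flooding: every node that receives a fresh packet rebroadcasts once."""
    seen = {src}
    q = deque([src])
    tx = 0
    while q:
        u = q.popleft()
        tx += 1                      # u broadcasts exactly once
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return tx

def tree_transmissions(parent, src):
    """Unicast hop count along parent pointers from `src` up to the root."""
    tx, u = 0, src
    while parent[u] is not None:
        u = parent[u]
        tx += 1
    return tx

# Toy 6-node topology with the sink/gateway at node 0 (illustrative only).
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2, 5], 4: [2, 5], 5: [3, 4]}
parent = bfs_tree(adj, root=0)
print(flood_transmissions(adj, src=5))   # flooding: all 6 nodes transmit -> 6
print(tree_transmissions(parent, src=5)) # tree unicast: 3 hops from 5 to 0 -> 3
```

Even on this tiny graph the unicast path uses half the transmissions of flooding; the gap widens with density, which mirrors the abstract's finding that TBFR is the more energy-efficient option once the tree has been built.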
Cited: 0
A privacy-preserving Self-Supervised Learning-based intrusion detection system for 5G-V2X networks 面向 5G-V2X 网络的基于自监督学习的隐私保护型入侵检测系统
IF 4.4 3区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-09-28 DOI: 10.1016/j.adhoc.2024.103674
Shajjad Hossain, Sidi-Mohammed Senouci, Bouziane Brik, Abdelwahab Boualouache
In light of the ongoing transformation in the automotive industry, driven by the adoption of 5G and the proliferation of connected vehicles, network security has emerged as a critical concern. This is particularly true for the implementation of cutting-edge 5G services such as Network Slicing (NS), Software Defined Networking (SDN), and Multi-access Edge Computing (MEC). As these advanced services become more prevalent, they introduce new vulnerabilities that can be exploited by cyber attackers. Consequently, Network Intrusion Detection Systems (NIDSs) are pivotal in safeguarding vehicular networks against cyber threats. Still, their efficacy hinges on extensive data, which often contains sensitive and confidential information such as vehicle positions and owners' behaviors, raising privacy concerns. To address this issue, we propose a privacy-preserving Self-Supervised Learning (SSL) based intrusion detection system for 5G-V2X networks. The majority of works in the literature rely on Federated Learning (FL) and often overlook data labeling on the end devices. Our methodology leverages SSL to pre-train NIDSs using unlabeled data. Post-training is then performed with a minimal amount of labeled data, which can be carefully crafted by an expert. This technique allows NIDSs to be trained on huge datasets without compromising privacy, consequently enhancing the efficacy of cyber-attack protection. Our SSL pre-training methodology has yielded remarkable results, demonstrating an improvement of up to 9% in accuracy across a diverse range of training dataset sizes, including scenarios with as few as 200 data samples. Our approach highlights the potential to significantly enhance automotive network security, setting a new standard in the field of automotive cybersecurity.
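The pretrain-on-unlabeled, fine-tune-on-few-labels pipeline described above can be sketched with a generic pretext task. Everything below is an illustrative assumption rather than the paper's architecture: the masked-feature reconstruction objective, the linear encoder, the synthetic stand-in data, and the 200-sample labeled set (chosen only to mirror the smallest scenario the abstract mentions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for network-flow feature vectors (illustrative, not V2X data).
d, n_unlabeled, n_labeled = 16, 2000, 200
X_u = rng.normal(size=(n_unlabeled, d))              # plentiful unlabeled traffic
w_true = rng.normal(size=d)                          # hidden attack direction
X_l = rng.normal(size=(n_labeled, d))                # small expert-labeled set
y_l = (X_l @ w_true > 0).astype(float)               # attack / benign labels

# Stage 1: self-supervised pretraining with a masked-feature reconstruction task.
k, lr = 8, 0.01                                      # encoder width, step size
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))
for _ in range(300):
    mask = rng.random(X_u.shape) > 0.3               # hide ~30% of the features
    Z = (X_u * mask) @ W_enc                         # encode the corrupted input
    G = (Z @ W_dec - X_u) / n_unlabeled              # gradient of 0.5 * MSE
    dZ = G @ W_dec.T
    W_dec -= lr * Z.T @ G
    W_enc -= lr * (X_u * mask).T @ dZ

# Stage 2: fine-tune a logistic head on the frozen encoder with few labels.
H = X_l @ W_enc                                      # frozen representations
w_head = np.zeros(k)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(H @ w_head)))
    w_head -= 0.1 * H.T @ (p - y_l) / n_labeled

acc = ((H @ w_head > 0) == (y_l > 0.5)).mean()
print(f"train accuracy with {n_labeled} labels: {acc:.2f}")
```

The design point this illustrates is the one the abstract makes: the expensive representation learning consumes only unlabeled (hence less privacy-sensitive to label) data, and the supervised step needs just a small, carefully curated labeled set.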
Cited: 0
Multi-agent reinforcement learning for task offloading with hybrid decision space in multi-access edge computing 在多接入边缘计算中利用混合决策空间进行任务卸载的多代理强化学习
IF 4.4 3区 计算机科学 Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2024-09-24 DOI: 10.1016/j.adhoc.2024.103671
Ji Wang, Miao Zhang, Quanjun Yin, Lujia Yin, Yong Peng
Multi-access Edge Computing (MEC) has become a significant technology for supporting computation-intensive and time-sensitive applications on Internet of Things (IoT) devices. However, it is challenging to jointly optimize task offloading and resource allocation in a dynamic wireless environment with constrained edge resources. In this paper, we investigate a multi-user, multi-MEC-server system with varying task requests and stochastic channel conditions. Our purpose is to minimize the total energy consumption and time delay by simultaneously optimizing the offloading decision, offloading ratio, and computing resource allocation. As the users are geographically distributed within an area, we formulate the problem of task offloading and resource allocation in the MEC system as a partially observable Markov decision process (POMDP) and propose a novel multi-agent deep reinforcement learning (MADRL) based algorithm to solve it. In particular, two aspects are designed for performance enhancement: (1) for fine-grained control, we design a novel neural network structure to effectively handle the hybrid action space arising from the heterogeneous decision variables; (2) an adaptive reward mechanism is proposed to reasonably evaluate infeasible actions and to mitigate the instability caused by manual reward configuration. Simulation results show the proposed method achieves 7.12%–20.97% performance enhancements compared with existing approaches.
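A hybrid decision space of the kind described above pairs a discrete choice (which MEC server to offload to) with a continuous one (what fraction of the task to offload). A minimal sketch of a two-headed policy plus a joint energy-delay reward; the network shapes, random weights, and reward weighting are illustrative assumptions, not the paper's design:

```python
import numpy as np

def hybrid_policy(state, W_disc, W_cont, rng):
    """Sample a hybrid action: a MEC server index (discrete head, softmax)
    and an offloading ratio in [0, 1] (continuous head, sigmoid)."""
    logits = state @ W_disc
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                             # softmax over candidate servers
    server = rng.choice(len(probs), p=probs)         # discrete action
    ratio = 1.0 / (1.0 + np.exp(-(state @ W_cont)))  # continuous action
    return server, float(ratio)

def reward(energy, delay, alpha=0.5):
    """Negative weighted cost, matching an objective that jointly minimizes
    energy consumption and time delay (the weight alpha is illustrative)."""
    return -(alpha * energy + (1 - alpha) * delay)

rng = np.random.default_rng(1)
state = rng.normal(size=8)                           # one agent's local observation
W_disc = rng.normal(size=(8, 3))                     # 3 candidate MEC servers
W_cont = rng.normal(size=8)
server, ratio = hybrid_policy(state, W_disc, W_cont, rng)
print(server, round(ratio, 3), reward(energy=2.0, delay=1.0))
```

In a MADRL setting, each agent would hold its own copy of such a policy over its partial observation, and both heads would be trained jointly against the shared energy-delay reward.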
Cited: 0