
Latest Publications from IEEE Transactions on Network Science and Engineering

Joint Task Allocation and Trajectory Optimization for Multi-UAV Collaborative Air–Ground Edge Computing
IF 6.7 Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2024-10-15 DOI: 10.1109/TNSE.2024.3481061
Peng Qin;Jinghan Li;Jing Zhang;Yang Fu
With the proliferation of the Internet of Things (IoT), compute-intensive and latency-critical applications continue to emerge. However, IoT devices in isolated locations have insufficient energy storage and computing resources, and may fall outside the service range of ground communication networks. To overcome these constraints on communication coverage and terminal resources, this paper proposes a multiple Unmanned Aerial Vehicle (UAV)-assisted air-ground collaborative edge computing network model, comprising associated UAVs, auxiliary UAVs, ground user devices (GDs), and base stations (BSs), with the goal of minimizing overall system energy consumption. The model jointly considers task offloading, UAV trajectory planning, and edge resource allocation, so the resulting problem is a Mixed-Integer Nonlinear Program (MINLP). Worse still, the coupling between long-term task queuing delay and short-term offloading decisions makes the original problem difficult to address directly. We therefore employ Lyapunov optimization to transform it into two sub-problems. The first involves task offloading for GDs and trajectory optimization for both associated and auxiliary UAVs, which is tackled using Deep Reinforcement Learning (DRL); the second deals with task partitioning and computing resource allocation, which we address via convex optimization. Numerical simulations verify that the proposed approach outperforms benchmark methods in terms of overall system energy consumption.
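The Lyapunov step described here replaces the long-term objective with per-slot decisions weighed against a virtual backlog queue (the drift-plus-penalty idea). The sketch below illustrates only that mechanism in isolation; the trade-off weight V, the two-option offloading choice, and all cost numbers are hypothetical and not taken from the paper.

```python
import random

V = 50.0   # Lyapunov trade-off weight: larger V favors energy saving over queue backlog
Q = 0.0    # virtual queue tracking unserved task bits (illustrative units)

def per_slot_decision(arrival_bits, local_energy_per_bit, offload_energy_per_bit,
                      local_rate, offload_rate):
    """Drift-plus-penalty: pick the option minimizing V*energy + Q*(backlog growth)."""
    options = []
    for name, e_per_bit, rate in (("local", local_energy_per_bit, local_rate),
                                  ("offload", offload_energy_per_bit, offload_rate)):
        served = min(arrival_bits, rate)
        cost = V * e_per_bit * served + Q * (arrival_bits - served)
        options.append((cost, name, served))
    return min(options)

random.seed(0)
for t in range(5):
    arrivals = random.uniform(1e6, 3e6)                  # bits arriving this slot
    cost, choice, served = per_slot_decision(arrivals, 2e-7, 5e-8, 1.5e6, 2.5e6)
    Q = max(Q + arrivals - served, 0.0)                  # virtual queue update
    print(f"slot {t}: {choice:7s} served={served:.2e} bits, backlog Q={Q:.2e}")
```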
Citations: 0
Frisbee: An Efficient Data Sharing Framework for UAV Swarms
IF 6.7 Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2024-10-14 DOI: 10.1109/TNSE.2024.3479695
Peipei Chen;Lailong Luo;Deke Guo;Qianzhen Zhang;Xueshan Luo;Bangbang Ren;Yulong Shen
Owing to their communication, computation, storage, networking, and sensing abilities, swarms of unmanned aerial vehicles (UAVs) are widely expected to be helpful in emergency, disaster, and military situations. In such situations, each UAV generates local sensing data with its cameras and sensors, and data sharing within the swarm is an urgent need for both users and administrators: users may want to access data stored on any specific UAV on demand, while administrators need to construct global information and situational awareness to enable cooperative applications. This paper takes the first step toward tackling this open problem with an efficient data-sharing framework called Frisbee. It first groups all UAVs into a series of cells, each with a head-UAV, such that all UAVs inside a cell can communicate with one another directly. For intra-cell sharing, Frisbee designs the Dynamic Cuckoo Summary, with which the head-UAV accurately indexes all data inside its cell. For inter-cell sharing, Frisbee maps both the data indices and the head-UAVs onto a 2-dimensional virtual plane; on this plane, a head-UAV communication graph is formed according to each head's communication range and is used for both data localization and transmission. Comprehensive experiments show that Frisbee achieves 14.7% higher insert throughput, 39.1% lower response delay, and 41.4% less implementation overhead compared with the most closely related ground-network solutions.
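The Dynamic Cuckoo Summary is only named in the abstract; as a rough illustration of the underlying idea, a cuckoo-style index gives every key two candidate buckets and relocates entries on collision. The minimal sketch below shows that mechanism; the class name, table size, and item/owner strings are hypothetical, and a real summary would store compact fingerprints rather than full keys.

```python
import hashlib

class TinyCuckooIndex:
    """Minimal cuckoo-style index: two candidate buckets per key, relocation on collision."""
    def __init__(self, n_buckets=64, max_kicks=32):
        self.buckets = [None] * n_buckets
        self.n = n_buckets
        self.max_kicks = max_kicks

    def _slots(self, key):
        h = hashlib.sha256(key.encode()).digest()
        return int.from_bytes(h[:4], "big") % self.n, int.from_bytes(h[4:8], "big") % self.n

    def insert(self, key, value):
        i1, i2 = self._slots(key)
        for i in (i1, i2):
            if self.buckets[i] is None:
                self.buckets[i] = (key, value)
                return True
        # both occupied: evict and relocate, up to max_kicks times
        i, item = i1, (key, value)
        for _ in range(self.max_kicks):
            self.buckets[i], item = item, self.buckets[i]
            a, b = self._slots(item[0])
            i = b if i == a else a
            if self.buckets[i] is None:
                self.buckets[i] = item
                return True
        return False  # table too full; a real implementation would resize

    def lookup(self, key):
        for i in self._slots(key):
            if self.buckets[i] and self.buckets[i][0] == key:
                return self.buckets[i][1]
        return None

idx = TinyCuckooIndex()
idx.insert("uav3/frame_0017.jpg", "uav3")   # head-UAV records which member holds each item
print(idx.lookup("uav3/frame_0017.jpg"))    # -> "uav3"
```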
Citations: 0
Connectivity-Preserving Formation Control via Clique-Based Approach Without Prior Assignment
IF 6.7 Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2024-10-10 DOI: 10.1109/TNSE.2024.3478174
Jinyong Chen;Rui Zhou;Yunjie Zhang;Bin Di;Guibin Sun
This paper explores information sharing within cliques to enable flexible formation pattern control of networked agents with limited communication range, where each agent is not pre-assigned a fixed point in the pattern and is unaware of the total number of agents. To achieve this, we first present a new representation of formation patterns that enables the agents to reach a consensus on the desired pattern by negotiating formation motion and agent numbers. The problem of continuously assigning each agent a point in the desired pattern is then decomposed into small-size subproblems in terms of $\delta$-maximal cliques, which can be solved in a distributed manner. Furthermore, a maximal-clique-based formation controller is employed to ensure that the agents converge to the desired pattern while preserving the connectivity of the communication topology. Simulation results demonstrate that the pattern assembly time of seven agents using the proposed algorithm is reduced by 55.1% compared with a state-of-the-art pre-assigned method, and this improvement tends to amplify as the number of agents increases. In addition, we conduct a physical experiment involving five robots to verify the ability of the proposed algorithm in terms of formation shape assembly, manipulation, and automatic repair.
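The clique-based decomposition rests on enumerating cliques of the range-limited communication graph. The sketch below shows plain maximal-clique enumeration (Bron-Kerbosch) on a toy graph whose edges come from a distance threshold; the agent positions, the range, and the omission of the $\delta$ refinement are simplifying assumptions, not the paper's algorithm.

```python
import itertools

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques; adj maps node -> set of neighbours."""
    cliques = []
    def expand(R, P, X):
        if not P and not X:
            cliques.append(R)
            return
        for v in list(P):
            expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)
    expand(set(), set(adj), set())
    return cliques

# toy communication graph: agents are adjacent if within range 1.5
pos = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (3, 0), 4: (3.5, 1)}
adj = {u: set() for u in pos}
for u, v in itertools.combinations(pos, 2):
    if (pos[u][0] - pos[v][0]) ** 2 + (pos[u][1] - pos[v][1]) ** 2 <= 1.5 ** 2:
        adj[u].add(v); adj[v].add(u)

# each maximal clique can then negotiate its own sub-assignment of pattern points
print(maximal_cliques(adj))   # e.g. [{0, 1, 2}, {3, 4}]
```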
{"title":"Connectivity-Preserving Formation Control via Clique-Based Approach Without Prior Assignment","authors":"Jinyong Chen;Rui Zhou;Yunjie Zhang;Bin Di;Guibin Sun","doi":"10.1109/TNSE.2024.3478174","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3478174","url":null,"abstract":"This paper explores information sharing within cliques to enable flexible formation pattern control of networked agents with limited communication range, where each agent is not pre-assigned a fixed point in the pattern and is unaware of the total number of agents. To achieve this, we first present a new representation of formation patterns that enables the agents to reach a consensus on the desired pattern by negotiating formation motion and agent numbers. The problem of continuously assigning each agent a point in the desired pattern is then decomposed into small size problems in terms of \u0000<inline-formula><tex-math>$delta$</tex-math></inline-formula>\u0000-maximal cliques, which can be solved in a distributed manner. Furthermore, a maximal clique-based formation controller is employed to ensure that the agents converge to the desired pattern while preserving the connectivity of the communication topology. Simulation results demonstrate that the pattern assembly time of seven agents using the proposed algorithm is reduced by 55.1% compared with a state-of-the-art pre-assigned method, and this improvement tends to amplify with an increasing number of agents. In addition, we conduct a physical experiment involving five robots to verify the ability of the proposed algorithm in terms of formation shape assembly, manipulation, and automatic repair.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"5916-5929"},"PeriodicalIF":6.7,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142694652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Energy-Efficient Collaborative Offloading Scheme With Heterogeneous Tasks for Satellite Edge Computing
IF 6.7 Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2024-10-09 DOI: 10.1109/TNSE.2024.3476968
Changzhen Zhang;Jun Yang
Satellite edge computing (SEC) can offer task computing services to ground users, particularly in areas lacking terrestrial network coverage. Nevertheless, given the limited energy of low earth orbit (LEO) satellites, they cannot process large numbers of computational tasks. Furthermore, most existing task offloading methods are designed for homogeneous tasks and thus cannot meet the service requirements of diverse computational tasks. In this work, we investigate an energy-efficient collaborative offloading scheme with heterogeneous tasks for SEC to save energy and improve efficiency. Firstly, by dividing computational tasks into delay-sensitive (DS) and delay-tolerant (DT) tasks, we propose a collaborative service architecture spanning the ground edge, satellite edge, and cloud, in which specific task offloading schemes are given for both sparse and dense user scenarios to reduce the energy consumption of LEO satellites. Secondly, to reduce the delay and failure rate of DS tasks, we propose an access threshold strategy for DS tasks to control the queue length and facilitate load balancing among multiple computing platforms. Thirdly, to evaluate the proposed offloading scheme, we model the traffic load on the computing platforms as a continuous-time Markov chain (CTMC) and solve its stationary distribution with the matrix-geometric method. Finally, numerical results for SEC validate the effectiveness of the proposed offloading scheme.
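As a minimal illustration of the CTMC-based evaluation, the sketch below builds the generator matrix of a single birth-death queue with an admission threshold for delay-sensitive tasks and solves the stationary distribution directly by least squares. The rates, buffer size, and threshold are hypothetical, and the paper's multi-platform model with a matrix-geometric solution is considerably richer than this one-queue example.

```python
import numpy as np

def stationary_distribution(Q):
    """Solve pi Q = 0 with sum(pi) = 1 for a finite-state CTMC generator matrix Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])          # balance equations plus normalisation
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

lam, mu, K, threshold = 3.0, 4.0, 6, 4        # arrival rate, service rate, buffer size, DS admission threshold
Q = np.zeros((K + 1, K + 1))
for k in range(K + 1):
    if k < K and k < threshold:               # new DS tasks are admitted only below the threshold
        Q[k, k + 1] = lam
    if k > 0:
        Q[k, k - 1] = mu
    Q[k, k] = -Q[k].sum()

pi = stationary_distribution(Q)
print("DS rejection probability:", pi[threshold:].sum())
```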
Citations: 0
Enhanced Profit-Driven Optimization for Flexible Server Deployment and Service Placement in Multi-User Mobile Edge Computing Systems
IF 6.7 Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2024-10-09 DOI: 10.1109/TNSE.2024.3477453
Juan Fang;Shen Wu;Shuaibing Lu;Ziyi Teng;Huijie Chen;Neal N. Xiong
Edge computing has emerged as a promising paradigm to meet the increasing demands of latency-sensitive and computationally intensive applications. In this context, efficient server deployment and service placement are crucial for optimizing performance and increasing platform profit. This paper investigates the problem of server deployment and service placement in a multi-user scenario, aiming to enhance the profit of Mobile Network Operators while considering constraints related to distance thresholds, resource limitations, and connectivity requirements. We demonstrate that this problem is NP-hard. To address it, we propose a two-stage method to decouple the problem. In stage I, server deployment is formulated as a combinatorial optimization problem within the framework of a Markov Decision Process (MDP). We introduce the Server Deployment with Q-learning (SDQ) algorithm to establish a relatively stable server deployment strategy. In stage II, service placement is formulated as a constrained Integer Nonlinear Programming (INLP) problem. We present the Service Placement with Interior Barrier Method (SPIB) and Tree-based Branch-and-Bound (TDB) algorithms and theoretically prove their feasibility. For scenarios where the number of users changes dynamically, we propose the Distance-and-Utilization Balance Algorithm (DUBA). Extensive experiments validate the exceptional performance of our proposed algorithms in enhancing the profit.
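To make the stage-I idea concrete, here is a toy tabular Q-learning sketch in the spirit of SDQ: the state is the set of sites already hosting a server, an action deploys one more server, and the reward is a noisy profit gain. The site list, profit values, and hyperparameters are invented for illustration and do not come from the paper.

```python
import random

random.seed(1)
sites = ["s0", "s1", "s2", "s3"]
profit = {"s0": 3.0, "s1": 5.0, "s2": 2.0, "s3": 4.0}   # hypothetical marginal profit per site
budget = 2                                               # servers available to deploy

alpha, gamma, eps, episodes = 0.1, 0.9, 0.2, 2000
Q = {}                                                   # tabular Q-values keyed by (deployed-set, candidate site)
def q(state, a): return Q.get((state, a), 0.0)

for _ in range(episodes):
    state = frozenset()                                  # state: sites already holding a server
    while len(state) < budget:
        free = [s for s in sites if s not in state]
        a = random.choice(free) if random.random() < eps else max(free, key=lambda s: q(state, s))
        reward = profit[a] + random.gauss(0, 0.5)        # noisy observed profit gain
        nxt = state | {a}
        free_nxt = [s for s in sites if s not in nxt]
        best_next = max((q(nxt, s) for s in free_nxt), default=0.0) if len(nxt) < budget else 0.0
        Q[(state, a)] = q(state, a) + alpha * (reward + gamma * best_next - q(state, a))
        state = nxt

state = frozenset()                                      # greedy roll-out of the learned policy
while len(state) < budget:
    state |= {max((s for s in sites if s not in state), key=lambda s: q(state, s))}
print("deployment chosen by the Q-learning sketch:", sorted(state))   # typically ['s1', 's3']
```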
Citations: 0
A Unified $\alpha\!-\!\eta\!-\!\kappa\!-\!\mu$ Fading Model Based Real-Time Localization on IoT Edge Devices
IF 6.7 Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2024-10-09 DOI: 10.1109/TNSE.2024.3478053
Aditya Singh;Syed Danish;Gaurav Prasad;Sudhir Kumar
Wi-Fi-based localization using Received Signal Strength (RSS) is widely adopted due to its cost-effectiveness and ubiquity. However, the accuracy of RSS-based localization degrades due to random fluctuations caused by shadowing and multipath fading. Existing fading distributions such as Rayleigh, $\kappa\!-\!\mu$, and $\alpha$-KMS struggle to capture all factors contributing to fading. In contrast, the $\alpha\!-\!\eta\!-\!\kappa\!-\!\mu$ distribution offers the most generalized coverage of fading in the literature. However, as fading distributions become more generalized, their computational demands also increase. This results in a trade-off between localization accuracy and complexity, which is undesirable for real-time localization. In this work, we propose a novel localization strategy utilizing the $\alpha\!-\!\eta\!-\!\kappa\!-\!\mu$ distribution combined with a novel approximation method that significantly reduces computational overhead while maintaining accuracy. Our proposed strategy effectively mitigates the trade-off between localization accuracy and complexity, outperforming existing state-of-the-art (SOTA) localization techniques on simulated and real-world testbeds. The proposed strategy achieves accurate localization with a speedup of 280 times over non-approximated methods. We validate its feasibility for real-time tasks on the low-compute edge device Raspberry Pi Zero W, where it demonstrates fast and accurate localization, making it suitable for real-time edge applications.
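The $\alpha\!-\!\eta\!-\!\kappa\!-\!\mu$ density itself is involved; as a much simpler stand-in, the sketch below performs maximum-likelihood RSS localization under a log-distance path-loss model with Gaussian shadowing in dB, which is the kind of baseline the generalized fading model refines. The anchor positions, path-loss parameters, and grid search are assumptions for illustration only.

```python
import numpy as np

# anchors (access points) and a log-distance path-loss model (assumed parameters)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
P0, n_exp, sigma = -40.0, 2.5, 2.0          # RSS at 1 m (dBm), path-loss exponent, shadowing std (dB)

def expected_rss(point):
    d = np.maximum(np.linalg.norm(anchors - point, axis=1), 0.1)
    return P0 - 10.0 * n_exp * np.log10(d)

rng = np.random.default_rng(0)
true_pos = np.array([3.0, 7.0])
measured = expected_rss(true_pos) + rng.normal(0, sigma, len(anchors))   # one noisy RSS vector

# maximum-likelihood estimate over a coarse grid (Gaussian dB noise => least squares)
xs = np.linspace(0, 10, 101)
grid = np.array([[x, y] for x in xs for y in xs])
errors = [np.sum((measured - expected_rss(p)) ** 2) for p in grid]
print("estimated position:", grid[int(np.argmin(errors))])
```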
Citations: 0
Joint Split Offloading and Trajectory Scheduling for UAV-Enabled Mobile Edge Computing in IoT Network
IF 6.7 Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2024-10-08 DOI: 10.1109/TNSE.2024.3476168
Yunkai Wei;Zikang Wan;Yinan Xiao;Supeng Leng;Kezhi Wang;Kun Yang
Unmanned Aerial Vehicles (UAVs) can provide mobile edge computing (MEC) services to resource-limited devices in the Internet of Things (IoT). In such scenarios, partial offloading can be used to balance the computing load between the UAV and the IoT devices for higher efficiency. However, traditional partial offloading is not suitable for training a deep neural network (DNN), since a DNN model cannot be partitioned at an arbitrary continuous ratio. In this paper, we introduce a split offloading scheme that flexibly splits the DNN training task into two parts along layer boundaries and allocates them to the IoT device and the UAV respectively. We present a scheme to synchronize the training and communication periods of DNN layers on the UAV and the IoT device, thereby reducing the model training time. Based on this scheme, an optimization model is proposed to minimize the UAV energy consumption by jointly optimizing the UAV trajectory, the DNN split position, and the service time scheduling. We divide the problem into two subproblems and solve it iteratively. Simulation results show the proposed scheme can reduce the model training time and the UAV energy consumption by up to 25% and 14.4%, respectively, compared with benchmark schemes.
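Because a DNN can only be split at layer boundaries, choosing the split point amounts to scanning the layers and comparing, for each candidate boundary, the local compute energy, the remote compute energy, and the cost of transmitting the boundary activation. The per-layer costs, transmission energy, and layer names below are hypothetical profiling numbers, not values from the paper.

```python
# Hypothetical per-layer profile of a small DNN: compute energy on the IoT device, compute
# energy on the UAV, and the size of the activation sent if the split is placed after that layer.
layers      = ["conv1", "conv2", "pool", "fc1", "fc2"]
device_cost = [4.0, 6.0, 0.5, 8.0, 2.0]        # Joules per batch on the IoT device
uav_cost    = [1.0, 1.5, 0.1, 2.0, 0.5]        # Joules per batch on the UAV edge server
act_bits    = [8e6, 4e6, 1e6, 5e5, 1e5]        # bits sent uplink if we split after this layer
raw_input_bits = 2e7                           # bits sent if everything is offloaded (split at 0)
tx_energy_per_bit = 3e-7                       # uplink transmission energy, Joules per bit

def split_energy(k):
    """Total energy if layers[:k] run on the device and layers[k:] run on the UAV."""
    tx_bits = raw_input_bits if k == 0 else act_bits[k - 1]
    return sum(device_cost[:k]) + sum(uav_cost[k:]) + tx_bits * tx_energy_per_bit

best = min(range(len(layers) + 1), key=split_energy)
print("best split index:", best, "-> run", layers[:best], "locally, offload", layers[best:])
```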
Citations: 0
Balancing Augmentation With Edge Utility Filter for Signed Graph Neural Networks
IF 6.7 Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2024-10-07 DOI: 10.1109/TNSE.2024.3475379
Ke-Jia Chen;Yaming Ji;Wenhui Mu;Youran Qu
Many real-world networks are signed networks containing positive and negative edges. The existence of negative edges in a signed graph neural network has two consequences. One is semantic imbalance: negative edges are hard to obtain even though they may carry more useful information. The other is structural unbalance, e.g., unbalanced triangles, an indication of incompatible relationships among nodes. This paper proposes a balancing augmentation to address these two challenges. Firstly, the utility of each negative edge is determined by counting its occurrences in balanced structures. Secondly, the original signed graph is selectively augmented using (1) an edge perturbation regulator to balance the number of positive and negative edges and to determine the ratio of perturbed edges, and (2) an edge utility filter to remove negative edges with low utility. Finally, a signed graph neural network is trained on the augmented graph. Theoretical analysis proves the effectiveness of each module, and experiments demonstrate that the proposed method significantly improves the performance of three backbone models in the link sign prediction task, by up to 22.8% in AUC and 19.7% in F1 score, across five real-world datasets.
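A minimal sketch of the first step, scoring a negative edge by how often it appears in balanced triangles (those whose three edge signs multiply to +1), is given below on a toy signed graph; the graph and the exact utility definition are illustrative assumptions rather than the paper's formulas.

```python
# toy signed graph: undirected edges with +1 / -1 signs (hypothetical example)
sign = {(0, 1): 1, (1, 2): -1, (0, 2): -1, (2, 3): 1, (1, 3): 1,
        (0, 3): 1, (0, 4): -1, (2, 4): 1}
edges = {frozenset(e): s for e, s in sign.items()}
nodes = {u for e in edges for u in e}
adj = {u: {v for e in edges for v in e if u in e and v != u} for u in nodes}

def edge_utility(u, v):
    """Fraction of triangles through (u, v) whose three signs multiply to +1 (balanced)."""
    common = adj[u] & adj[v]
    if not common:
        return 0.0
    balanced = sum(1 for w in common
                   if edges[frozenset((u, v))] * edges[frozenset((u, w))] * edges[frozenset((v, w))] > 0)
    return balanced / len(common)

for (u, v), s in sign.items():
    if s < 0:   # the filter would keep only negative edges whose utility is high enough
        print(f"negative edge ({u},{v}): utility = {edge_utility(u, v):.2f}")
```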
{"title":"Balancing Augmentation With Edge Utility Filter for Signed Graph Neural Networks","authors":"Ke-Jia Chen;Yaming Ji;Wenhui Mu;Youran Qu","doi":"10.1109/TNSE.2024.3475379","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3475379","url":null,"abstract":"Many real-world networks are signed networks containing positive and negative edges. The existence of negative edges in the signed graph neural network has two consequences. One is the semantic imbalance, as the negative edges are hard to obtain though they may potentially include more useful information. The other is the structural unbalance, e.g., unbalanced triangles, an indication of incompatible relationship among nodes. This paper proposes a balancing augmentation to address the two challenges. Firstly, the utility of each negative edge is determined by calculating its occurrence in balanced structures. Secondly, the original signed graph is selectively augmented with the use of (1) an edge perturbation regulator to balance the number of positive and negative edges and to determine the ratio of perturbed edges and (2) an edge utility filter to remove the negative edges with low utility. Finally, a signed graph neural network is trained on the augmented graph. The theoretical analysis is conducted to prove the effectiveness of each module and the experiments demonstrate that the proposed method can significantly improve the performance of three backbone models in link sign prediction task, with up to 22.8% in the AUC and 19.7% in F1 scores, across five real-world datasets.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"5903-5915"},"PeriodicalIF":6.7,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142694667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sparse Bayesian Learning for Sequential Inference of Network Connectivity From Small Data
IF 6.7 Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2024-10-03 DOI: 10.1109/TNSE.2024.3471852
Jinming Wan;Jun Kataoka;Jayanth Sivakumar;Eric Peña;Yiming Che;Hiroki Sayama;Changqing Cheng
While significant efforts have been devoted to the design, control, and optimization of complex networks, most existing works assume the network structure is known or readily available. However, the network topology can be radically recast after an adversarial attack and may remain unknown for subsequent analysis. In this work, we propose a novel Bayesian sequential learning approach to reconstruct network connectivity adaptively: a sparse spike-and-slab prior is placed on the connectivity of all edges, and the connectivity learned from reconstructed nodes is used to select the next node and update the prior knowledge. Central to our approach is the observation that most realistic networks are sparse, in that the connectivity degree of each node is much smaller than the number of nodes in the network. Sequential selection of the most informative nodes is realized via the between-node expected improvement. We corroborate this sequential Bayesian approach in connectivity recovery for a synthetic ultimatum game network and the IEEE-118 power grid system. Results indicate that only a fraction (∼50%) of the nodes need to be interrogated to reveal the network topology.
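As a small worked example of the spike-and-slab idea, the sketch below computes the posterior probability that a single edge exists from one noisy observation of its weight, using a point-mass spike at zero and a Gaussian slab. The noise level, slab width, and prior inclusion probability are assumed values, and the paper's sequential node selection via expected improvement is not reproduced here.

```python
import math

def posterior_inclusion(y, sigma_noise=0.3, slab_sd=1.0, prior_p=0.2):
    """P(edge present | one noisy observation y of its weight).
    Spike: weight exactly 0; slab: weight ~ N(0, slab_sd^2); noise ~ N(0, sigma_noise^2)."""
    def normal_pdf(x, sd):
        return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    like_spike = normal_pdf(y, sigma_noise)                               # y | no edge
    like_slab = normal_pdf(y, math.sqrt(sigma_noise**2 + slab_sd**2))     # y | edge, weight marginalised out
    num = prior_p * like_slab
    return num / (num + (1 - prior_p) * like_spike)

for y in (0.05, 0.5, 1.5):
    print(f"observation {y:+.2f} -> P(edge) = {posterior_inclusion(y):.3f}")
```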
{"title":"Sparse Bayesian Learning for Sequential Inference of Network Connectivity From Small Data","authors":"Jinming Wan;Jun Kataoka;Jayanth Sivakumar;Eric Peña;Yiming Che;Hiroki Sayama;Changqing Cheng","doi":"10.1109/TNSE.2024.3471852","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3471852","url":null,"abstract":"While significant efforts have been attempted in the design, control, and optimization of complex networks, most existing works assume the network structure is known or readily available. However, the network topology can be radically recast after an adversarial attack and may remain unknown for subsequent analysis. In this work, we propose a novel Bayesian sequential learning approach to reconstruct network connectivity adaptively: A sparse Spike and Slab prior is placed on connectivity for all edges, and the connectivity learned from reconstructed nodes will be used to select the next node and update the prior knowledge. Central to our approach is that most realistic networks are sparse, in that the connectivity degree of each node is much smaller compared to the number of nodes in the network. Sequential selection of the most informative nodes is realized via the between-node expected improvement. We corroborate this sequential Bayesian approach in connectivity recovery for a synthetic ultimatum game network and the IEEE-118 power grid system. Results indicate that only a fraction (∼50%) of the nodes need to be interrogated to reveal the network topology.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"5892-5902"},"PeriodicalIF":6.7,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142694660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ContractGNN: Ethereum Smart Contract Vulnerability Detection Based on Vulnerability Sub-Graphs and Graph Neural Networks
IF 6.7 Tier 2 (Computer Science) Q1 ENGINEERING, MULTIDISCIPLINARY Pub Date: 2024-09-30 DOI: 10.1109/TNSE.2024.3470788
Yichen Wang;Xiangfu Zhao;Long He;Zixian Zhen;Haiyue Chen
Smart contracts have been widely used for their ability to give a blockchain user-defined logic. In recent years, several smart contract security incidents have resulted in enormous financial losses. Therefore, it is important to detect vulnerabilities in smart contracts before deployment. Machine learning has recently been used in smart contract vulnerability detection. Unfortunately, due to the loss of information during feature extraction, the detection results are unsatisfactory. Hence, we propose a novel approach called ContractGNN, which combines a new concept of a vulnerability sub-graph (VSG) with graph neural networks (GNNs). Compared with traditional methods, checking a VSG is more accurate because the VSG removes irrelevant vertexes from the control flow graph. Furthermore, a VSG can be aggregated and simplified, thus improving the efficiency of message passing in a GNN. Based on aggregated VSGs, we design a new feature extraction method that preserves the semantic information, opcode order, and control flows of smart contracts. Moreover, we compare a large number of GNN classification models and select the best one to implement ContractGNN. We then test ContractGNN on 48,493 real-world smart contracts, and the experimental results show that ContractGNN outperforms other smart contract vulnerability detection tools, with an average F1 score of 89.70%.
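The following sketch shows, in plain NumPy, the kind of graph-level classification a GNN performs over a (sub-)graph: a couple of rounds of neighbour averaging over a control-flow adjacency matrix, mean pooling, and a logistic read-out. The adjacency matrix, feature dimensions, and untrained random weights are all hypothetical; ContractGNN's actual architecture and its vulnerability sub-graph construction are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy control-flow graph of a contract function: directed edges between basic blocks,
# plus an 8-dimensional feature per block (e.g. an opcode-histogram embedding)
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
X = rng.normal(size=(4, 8))

def gnn_forward(A, X, W1, W2, w_out):
    """Two rounds of mean-neighbour message passing, mean pooling, then a logistic read-out."""
    A_hat = A + A.T + np.eye(len(A))            # symmetrise and add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    H = np.tanh(D_inv @ A_hat @ X @ W1)
    H = np.tanh(D_inv @ A_hat @ H @ W2)
    g = H.mean(axis=0)                          # graph-level embedding of the (sub-)graph
    return 1.0 / (1.0 + np.exp(-g @ w_out))     # probability the sub-graph is vulnerable

W1 = rng.normal(size=(8, 16)); W2 = rng.normal(size=(16, 16)); w_out = rng.normal(size=16)
print("vulnerability score (untrained weights):", gnn_forward(A, X, W1, W2, w_out))
```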
Citations: 0