{"title":"Guest Editorial: Introduction to the Special Section on Research on Power Technology, Economy and Policy Towards Net-Zero Emissions","authors":"Junhua Zhao;Jing Qiu;Fushuan Wen;Junbo Zhao;Ciwei Gao;Yue Zhou","doi":"10.1109/TNSE.2024.3478396","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3478396","url":null,"abstract":"","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"5394-5395"},"PeriodicalIF":6.7,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10758620","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142679305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Guest Editorial: Introduction to the Special Section on Aerial Computing Networks in 6G
Authors: Yang Yang; Chen Chen; Rose Qingyang Hu; Schahram Dustdar; Qingqi Pei
Pub Date: 2024-11-20 | DOI: 10.1109/TNSE.2024.3483408
Journal: IEEE Transactions on Network Science and Engineering, vol. 11, no. 6, pp. 5130-5134
Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10758418

Title: Load Balancing With Traffic Splitting for QoS Enhancement in 5G HetNets
Authors: Abdul Manan; Syed Maaz Shahid; SungKyung Kim; Sungoh Kwon
Pub Date: 2024-10-21 | DOI: 10.1109/TNSE.2024.3482365
Journal: IEEE Transactions on Network Science and Engineering, vol. 11, no. 6, pp. 6272-6284
Abstract: In heterogeneous networks (HetNets), high user density and random small cell deployment often result in uneven User Equipment (UE) distributions among cells. This can lead to excessive resource usage in some cells and degraded Quality of Service (QoS) for users, even while resources in other cells remain underutilized. To address this challenge, we propose a load-balancing algorithm for 5G HetNets that employs traffic splitting for dual connectivity (DC) users. By enabling traffic splitting, DC allows UEs to receive data from both macro and small cells, improving both load balance and QoS. To prevent cell overloading, we formulate the problem of minimizing load variance across 5G HetNet cells using traffic splitting, and derive a theoretical expression for the optimal split ratio under given cell load conditions. The proposed algorithm dynamically adjusts the data traffic split for DC users based on the optimal split ratio and, if necessary, offloads edge users from overloaded macro cells to underloaded macro cells to achieve uniform network load distribution. Simulation results demonstrate that the proposed algorithm achieves a more even load distribution than other load-balancing algorithms and increases network throughput and the number of QoS-satisfied users.
{"title":"Load Balancing With Traffic Splitting for QoS Enhancement in 5G HetNets","authors":"Abdul Manan;Syed Maaz Shahid;SungKyung Kim;Sungoh Kwon","doi":"10.1109/TNSE.2024.3482365","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3482365","url":null,"abstract":"In heterogeneous networks (HetNets), high user density and random small cell deployment often result in uneven User Equipment (UE) distributions among cells. This can lead to excessive resource usage in some cells and a degradation of Quality of Service (QoS) for users, even while resources in other cells remain underutilized. To address this challenge, we propose a load-balancing algorithm for 5G HetNets that employs traffic splitting for dual connectivity (DC) users. By enabling traffic splitting, DC allows UEs to receive data from both macro and small cells, thereby enhancing network performance in terms of load balancing and QoS improvement. To prevent cell overloading, we formulate the problem of minimizing load variance across 5G HetNet cells using traffic splitting. We derive a theoretical expression to determine the optimal split ratio by considering the cell load conditions. The proposed algorithm dynamically adjusts the data traffic split for DC users based on the optimal split ratio and, if necessary, offloads edge users from overloaded macro cells to underloaded macro cells to achieve uniform network load distribution. Simulation results demonstrate that the proposed algorithm achieves more even load distribution than other load balancing algorithms and increases network throughput and the number of QoS-satisfied users.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"6272-6284"},"PeriodicalIF":6.7,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142679301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Hypergraph-Based Model for Modeling Multi-Agent Q-Learning Dynamics in Public Goods Games
Authors: Juan Shi; Chen Liu; Jinzhuo Liu
Pub Date: 2024-10-17 | DOI: 10.1109/TNSE.2024.3473941
Journal: IEEE Transactions on Network Science and Engineering, vol. 11, no. 6, pp. 6169-6179
Abstract: Modeling the learning dynamics of multi-agent systems has long been a crucial issue for understanding the emergence of collective behavior. In public goods games, agents interact in multiple large groups. While previous studies have primarily focused on infinite populations that allow only pairwise interactions, we investigate the learning dynamics of agents in a public goods game with higher-order interactions. Using hypergraphs to encode higher-order interactions, we develop a formal model (a Fokker-Planck equation) describing the temporal evolution of the distribution of Q-values. Whereas earlier replicator-based models failed to accurately capture the impact of hyperdegree in hypergraphs, our model captures its influence explicitly. Experiments show that our theoretical findings are consistent with agent-based simulation results. We also show that once the number of groups an agent participates in reaches a certain scale, the learning dynamics of the system evolve to resemble those of a well-mixed population. Furthermore, the model offers insights into algorithmic parameters, such as the Boltzmann temperature, facilitating parameter tuning.
{"title":"Hypergraph-Based Model for Modeling Multi-Agent Q-Learning Dynamics in Public Goods Games","authors":"Juan Shi;Chen Liu;Jinzhuo Liu","doi":"10.1109/TNSE.2024.3473941","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3473941","url":null,"abstract":"Modeling the learning dynamic of multi-agent systems has long been a crucial issue for understanding the emergence of collective behavior. In public goods games, agents interact in multiple larger groups. While previous studies have primarily focused on infinite populations that only allow pairwise interactions, we aim to investigate the learning dynamics of agents in a public goods game with higher-order interactions. With a novel use of hypergraphs for encoding higher-order interactions, we develop a formal model (a Fokker-Planck equation) to describe the temporal evolution of the distribution function of Q-values. Noting that early research focused on replicator models to predict system dynamics failed to accurately capture the impact of hyperdegree in hypergraphs, our model effectively maps its influence. Through experiments, we demonstrate that our theoretical findings are consistent with the agent-based simulation results. We demonstrated that as the number of groups an agent is involved in reaches a certain scale, the learning dynamics of the system evolve to resemble those of a well-mixed population. Furthermore, we demonstrate that our model offers insights into algorithmic parameters, such as the Boltzmann temperature, facilitating parameter tuning.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"6169-6179"},"PeriodicalIF":6.7,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142679329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: V2IViewer: Towards Efficient Collaborative Perception via Point Cloud Data Fusion and Vehicle-to-Infrastructure Communications
Authors: Sheng Yi; Hao Zhang; Kai Liu
Pub Date: 2024-10-16 | DOI: 10.1109/TNSE.2024.3479770
Journal: IEEE Transactions on Network Science and Engineering, vol. 11, no. 6, pp. 6219-6230
Abstract: Collaborative perception (CP) with vehicle-to-infrastructure (V2I) communications is a critical scenario in high-level autonomous driving. This paper presents a novel CP framework called V2IViewer, which consists of three modules: object detection and tracking, data transmission, and object alignment. We design a heterogeneous multi-agent middle layer (HMML) as the backbone for extracting feature representations, and use a Kalman filter (KF) with the Hungarian algorithm for object tracking. For transmitting object information from infrastructure to the ego vehicle, Protobuf serializes the data with binary encoding, reducing communication overhead. For aligning objects from multiple agents, a Spatiotemporal Asynchronous Fusion (SAF) method uses a Multilayer Perceptron (MLP) to generate post-synchronization object sequences, which are then fused to improve integration accuracy. Experiments on the DAIR-V2X-C, V2X-Seq, and V2XSet datasets show that V2IViewer improves long-range object detection accuracy by an average of 12.9% over state-of-the-art collaborative methods, and by an average of 3.3% across various noise conditions compared to existing models. Finally, a system prototype is implemented and its performance validated in realistic environments.
{"title":"V2IViewer: Towards Efficient Collaborative Perception via Point Cloud Data Fusion and Vehicle-to-Infrastructure Communications","authors":"Sheng Yi;Hao Zhang;Kai Liu","doi":"10.1109/TNSE.2024.3479770","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3479770","url":null,"abstract":"Collaborative perception (CP) with vehicle-to-infrastructure (V2I) communications is a critical scenario in high-level autonomous driving. This paper presents a novel framework called V2IViewer to facilitate collaborative perception, which consists of three modules: object detection and tracking, data transmission, and object alignment. On this basis, we design a heterogeneous multi-agent middle layer (HMML) as the backbone to extract feature representations, and utilize a Kalman filter (KF) with the Hungarian algorithm for object tracking. For transmitting object information from infrastructure to ego-vehicle, Protobuf is utilized for data serialization using binary encoding, which reduces communication overheads. For object alignment from multiple agents, a Spatiotemporal Asynchronous Fusion (SAF) method is proposed, which uses a Multilayer Perceptron (MLP) for generating post-synchronization object sequences. These sequences are then utilized for fusion to enhance the accuracy of the integration. Experimental validation on DAIR-V2X-C, V2X-Seq, and V2XSet datasets shows that V2IViewer enhances long-range object detection accuracy by an average of 12.9% over state-of-the-art collaborative methods. Moreover, V2IViewer demonstrates an average improvement in accuracy of 3.3% across various noise conditions compared to existing models. Finally, the system prototype is implemented and the performance has been validated in realistic environments.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"6219-6230"},"PeriodicalIF":6.7,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142679300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: GFlow: GNN-Based Optimal Flow Scheduling for Multipath Transmission With Link Overlapping
Authors: Du Chen; Weiting Zhang; Deyun Gao; Dong Yang; Hongke Zhang
Pub Date: 2024-10-16 | DOI: 10.1109/TNSE.2024.3481413
Journal: IEEE Transactions on Network Science and Engineering, vol. 11, no. 6, pp. 6244-6258
Abstract: Multipath TCP (MPTCP) is considered a promising solution to the growing demand for bandwidth. However, existing MPTCP mechanisms schedule flows based on coarse-grained end-to-end network states, which prevents MPTCP from fully aggregating the bandwidth of multiple paths. Moreover, links may overlap between different MPTCP connections, causing multiple subflows to compete for the shared link's bandwidth. In this paper, we propose GFlow, a Graph Neural Network (GNN) based Deep Reinforcement Learning (DRL) algorithm for optimal flow scheduling in multipath transmission with link overlapping. Specifically, we formulate flow scheduling as maximizing overall throughput while taking both bottleneck bandwidth and shared bandwidth into consideration. For accurate network state perception, GFlow uses In-band Network Telemetry (INT) to collect real-time, fine-grained network states. Taking these states as input, the GNN-integrated DRL agent learns the relationships among links, paths (subflows), and MPTCP connections, enabling GFlow to make optimal scheduling decisions according to the network state. We build a P4-based multipath transmission system and conduct extensive experiments to evaluate GFlow. The results show that GFlow outperforms the baseline multipath transmission mechanism in both homogeneous and heterogeneous scenarios, improving average overall throughput while reducing average round-trip time (RTT).
{"title":"GFlow: GNN-Based Optimal Flow Scheduling for Multipath Transmission With Link Overlapping","authors":"Du Chen;Weiting Zhang;Deyun Gao;Dong Yang;Hongke Zhang","doi":"10.1109/TNSE.2024.3481413","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3481413","url":null,"abstract":"Multipath TCP (MPTCP) is considered as a solution capable of addressing the growing demand for bandwidth. However, the existing MPTCP mechanisms make flow scheduling based on coarse-grained end-to-end network states, which prevents MPTCP from better aggregating the bandwidth of multiple paths. Besides, link overlapping may occur between different MPTCP connections, which results in multiple subflows competing for bandwidth of the shared link. In this paper, we propose GFlow, a Graph Neural Network (GNN) based Deep Reinforcement Learning (DRL) algorithm, to make optimal flow scheduling for multipath transmission with link overlapping. Specifically, we formulate the flow scheduling problem as a problem of maximizing overall throughput by taking both bottleneck bandwidth and shared bandwidth into consideration. To support accurate network state perception, GFlow utilizes In-band Network Telemetry (INT) to collect real-time and fine-grained network states. Taking these states as input, the DRL agent with GNN integrated fully learns the relationships among links, paths (subflows), and MPTCP connections. In this way, GFlow is able to make optimal flow scheduling decisions according to the network states. We build a P4-based multipath transmission system and carry out extensive experiments to evaluate the performance of GFlow. The results show that GFlow outperforms the baseline multipath transmission mechanism in both homogeneous scenario and heterogeneous scenario, improving the average overallthroughput while reducing the average round trip time (RTT).","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"6244-6258"},"PeriodicalIF":6.7,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142679385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Joint Data Allocation and LSTM-Based Server Selection With Parallelized Federated Learning in LEO Satellite IoT Networks
Authors: Pengxiang Qin; Dongyang Xu; Lei Liu; Mianxiong Dong; Shahid Mumtaz; Mohsen Guizani
Pub Date: 2024-10-16 | DOI: 10.1109/TNSE.2024.3481630
Journal: IEEE Transactions on Network Science and Engineering, vol. 11, no. 6, pp. 6259-6271
Abstract: Low Earth orbit (LEO) satellite networks have emerged as a promising platform for distributed Internet of Things (IoT) devices, particularly in latency-tolerant applications. Federated learning (FL) is deployed in LEO satellite IoT networks to preserve data privacy while enabling machine learning (ML). However, the user with the longest training and communication time significantly hampers FL efficiency and degrades Quality of Service (QoS), potentially leading to irreparable damage. To address this challenge, we propose a joint data allocation and server selection strategy based on long short-term memory (LSTM) with parallelized FL in LEO satellite IoT networks. Data-parallel learning allows multiple users to collaboratively train ML models, minimizing latency. Moreover, server selection accounts for signal propagation delays as well as traffic loads forecasted by an LSTM network, improving efficiency even further. Specifically, the strategies are formulated as optimization problems and tackled using a line-search sequential quadratic programming (SQP) method and a multiple-objective particle swarm optimization (MOPSO) algorithm. Simulation results show that the proposed strategy reduces total latency and enhances FL efficiency in LEO satellite IoT networks compared to the alternatives.
{"title":"Joint Data Allocation and LSTM-Based Server Selection With Parallelized Federated Learning in LEO Satellite IoT Networks","authors":"Pengxiang Qin;Dongyang Xu;Lei Liu;Mianxiong Dong;Shahid Mumtaz;Mohsen Guizani","doi":"10.1109/TNSE.2024.3481630","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3481630","url":null,"abstract":"Low earth orbit (LEO) satellite networks have emerged as a promising field for distributed Internet of Things (IoT) devices, particularly in latency-tolerant applications. Federated learning (FL) is implemented in LEO satellite IoT networks to preserve data privacy and facilitate machine learning (ML). However, the user who spends the longest time significantly hampers FL efficiency and degrades the Quality-of-Service (QoS), potentially leading to irreparable damage. To address this challenge, we propose a joint data allocation and server selection strategy based on long short-term memory (LSTM) with parallelized FL in LEO satellite IoT networks. Herein, data-parallel learning is utilized, allowing multiple users to collaboratively train ML networks to minimize latency. Moreover, server selection takes into account signal propagation delays as well as traffic loads forecasted by an LSTM network, thereby improving the efficiency even further. Specifically, the strategies are formulated as optimization problems and tackled using a line search sequential quadratic programming (SQP) method and a multiple-objective particle swarm optimization (MOPSO) algorithm. Simulation results show the effectiveness of the proposed strategy in reducing total latency and enhancing the efficiency of FL in LEO satellite IoT networks compared to the alternatives.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"6259-6271"},"PeriodicalIF":6.7,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142679303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Joint Task Allocation and Trajectory Optimization for Multi-UAV Collaborative Air–Ground Edge Computing
Authors: Peng Qin; Jinghan Li; Jing Zhang; Yang Fu
Pub Date: 2024-10-15 | DOI: 10.1109/TNSE.2024.3481061
Journal: IEEE Transactions on Network Science and Engineering, vol. 11, no. 6, pp. 6231-6243
Abstract: With the proliferation of the Internet of Things (IoT), compute-intensive and latency-critical applications continue to emerge. However, IoT devices in isolated locations have limited energy storage and computing resources, and may fall outside the service range of ground communication networks. To overcome these communication-coverage and terminal-resource constraints, this paper proposes a multiple Unmanned Aerial Vehicle (UAV)-assisted air-ground collaborative edge computing network model comprising associated UAVs, auxiliary UAVs, ground user devices (GDs), and base stations (BSs), with the goal of minimizing overall system energy consumption. The joint task offloading, UAV trajectory planning, and edge resource allocation problem is a Mixed-Integer Nonlinear Programming (MINLP) problem; worse still, the coupling between long-term task queuing delay and short-term offloading decisions makes the original problem challenging to solve directly. We therefore employ Lyapunov optimization to transform it into two sub-problems. The first involves task offloading for GDs and trajectory optimization for associated and auxiliary UAVs, tackled with Deep Reinforcement Learning (DRL); the second deals with task partitioning and computing resource allocation, addressed via convex optimization. Numerical simulations verify that the proposed approach outperforms benchmark methods in overall system energy consumption.
{"title":"Joint Task Allocation and Trajectory Optimization for Multi-UAV Collaborative Air–Ground Edge Computing","authors":"Peng Qin;Jinghan Li;Jing Zhang;Yang Fu","doi":"10.1109/TNSE.2024.3481061","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3481061","url":null,"abstract":"With the proliferation of Internet of Things (IoT), compute-intensive and latency-critical applications continue to emerge. However, IoT devices in isolated locations have insufficient energy storage as well as computing resources and may fall outside the service range of ground communication networks. To overcome the constraints of communication coverage and terminal resource, this paper proposes a multiple Unmanned Aerial Vehicle (UAV)-assisted air-ground collaborative edge computing network model, which comprises associated UAVs, auxiliary UAVs, ground user devices (GDs), and base stations (BSs), intending to minimize the overall system energy consumption. It delves into task offloading, UAV trajectory planning and edge resource allocation, which thus is classified as a Mixed-Integer Nonlinear Programming (MINLP) problem. Worse still, the coupling of long-term task queuing delay and short-term offloading decision makes it challenging to address the original issue directly. Therefore, we employ Lyapunov optimization to transform it into two sub-problems. The first involves task offloading for GDs, trajectory optimization for associated UAVs as well as auxiliary UAVs, which is tackled using Deep Reinforcement Learning (DRL), while the second deals with task partitioning and computing resource allocation, which we address via convex optimization. Through numerical simulations, we verify that the proposed approach outperforms other benchmark methods regarding overall system energy consumption.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"6231-6243"},"PeriodicalIF":6.7,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142679304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Frisbee: An Efficient Data Sharing Framework for UAV Swarms
Authors: Peipei Chen; Lailong Luo; Deke Guo; Qianzhen Zhang; Xueshan Luo; Bangbang Ren; Yulong Shen
Pub Date: 2024-10-14 | DOI: 10.1109/TNSE.2024.3479695
Journal: IEEE Transactions on Network Science and Engineering, vol. 11, no. 6, pp. 5380-5393
Abstract: Owing to their communication, computation, storage, networking, and sensing abilities, swarms of unmanned aerial vehicles (UAVs) are highly anticipated to be helpful in emergency, disaster, and military situations. In such situations, each UAV generates local sensing data with its cameras and sensors. Data sharing in a UAV swarm is an urgent need for both users and administrators: users may want on-demand access to data stored on any specific UAV, while administrators need to construct global information and situational awareness to enable many cooperative applications. This paper takes a first step toward tackling this open problem with an efficient data-sharing framework called Frisbee. Frisbee first groups all UAVs into a series of cells, each with a head-UAV, such that all UAVs inside a cell can communicate with each other directly. For intra-cell sharing, Frisbee designs the Dynamic Cuckoo Summary, which lets the head-UAV accurately index all data inside its cell. For inter-cell sharing, Frisbee maps both the data indices and the head-UAVs into a 2-dimensional virtual plane, over which a head-UAV communication graph is formed according to each head's communication range for both data localization and transmission. Comprehensive experiments show that Frisbee achieves 14.7% higher insert throughput, 39.1% lower response delay, and 41.4% less implementation overhead than the most relevant ground-network solutions.
{"title":"Frisbee: An Efficient Data Sharing Framework for UAV Swarms","authors":"Peipei Chen;Lailong Luo;Deke Guo;Qianzhen Zhang;Xueshan Luo;Bangbang Ren;Yulong Shen","doi":"10.1109/TNSE.2024.3479695","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3479695","url":null,"abstract":"Nowadays, owing to the communication, computation, storage, networking, and sensing abilities, the swarm of unmanned aerial vehicles (UAV) is highly anticipated to be helpful for emergency, disaster, and military situations. Additionally, in such situations, each UAV generates local sensing data with its cameras and sensors. Data sharing in UAV swarm is an urgent need for both users and administrators. For users, they may want to access data stored on any specific UAV on demand. For administrators, they need to construct global information and situational awareness to enable many cooperative applications. This paper makes the first step to tackling this open problem with an efficient data-sharing framework called Frisbee. It first groups all UAVs as a series of cells, each of which has a head-UAV. Inside any cell, all UAVs can communicate with each other directly. Thus, for the intra-cell sharing, Frisbee designs the Dynamic Cuckoo Summary for the head-UAV to accurately index all data inside the cell. For inter-cell sharing, Frisbee designs an effective method to map both the data indices and the head-UAV into a 2-dimensional virtual plane. Based on such virtual plane, a head-UAV communication graph is formed according to the communication range of each head for both data localization and transmission. The comprehensive experiments show that Frisbee achieves 14.7% higher insert throughput, 39.1% lower response delay, and 41.4% less implementation overhead, respectively, compared to the most involved solutions of the ground network.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"5380-5393"},"PeriodicalIF":6.7,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142679328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: An Energy-Efficient Collaborative Offloading Scheme With Heterogeneous Tasks for Satellite Edge Computing
Authors: Changzhen Zhang; Jun Yang
Pub Date: 2024-10-09 | DOI: 10.1109/TNSE.2024.3476968
Journal: IEEE Transactions on Network Science and Engineering, vol. 11, no. 6, pp. 6396-6407
Abstract: Satellite edge computing (SEC) can offer task computing services to ground users, particularly in areas lacking terrestrial network coverage. Nevertheless, given their limited energy, low Earth orbit (LEO) satellites cannot process large numbers of computational tasks. Furthermore, most existing task offloading methods are designed for homogeneous tasks and cannot meet the service requirements of diverse computational tasks. In this work, we investigate an energy-efficient collaborative offloading scheme with heterogeneous tasks for SEC to save energy and improve efficiency. First, dividing computational tasks into delay-sensitive (DS) and delay-tolerant (DT) tasks, we propose a collaborative service architecture spanning the ground edge, satellite edge, and cloud, with specific task offloading schemes for both sparse and dense user scenarios to reduce the energy consumption of LEO satellites. Second, to reduce the delay and failure rate of DS tasks, we propose an access threshold strategy for DS tasks that controls queue length and facilitates load balancing among multiple computing platforms. Third, to evaluate the proposed offloading scheme, we model the traffic load on the computing platforms as a continuous-time Markov chain (CTMC) and solve for the stationary distribution using the matrix-geometric method. Finally, numerical results for SEC validate the effectiveness of the proposed offloading scheme.
{"title":"An Energy-Efficient Collaborative Offloading Scheme With Heterogeneous Tasks for Satellite Edge Computing","authors":"Changzhen Zhang;Jun Yang","doi":"10.1109/TNSE.2024.3476968","DOIUrl":"https://doi.org/10.1109/TNSE.2024.3476968","url":null,"abstract":"Satellite edge computing (SEC) can offer task computing services to ground users, particularly in areas lacking terrestrial network coverage. Nevertheless, given the limited energy of low earth orbit (LEO) satellites, they cannot be used to process numerous computational tasks. Furthermore, most existing task offloading methods are designed for homogeneous tasks, which obviously cannot meet service requirements of various computational tasks. In this work, we investigate energy-efficient collaborative offloading scheme with heterogeneous tasks for SEC to save energy and improve efficiency. Firstly, by dividing computational tasks into delay-sensitive (DS) and delay-tolerant (DT) tasks, we propose a collaborative service architecture with ground edge, satellite edge, and cloud, where specific task offloading schemes are given for both sparse and dense user scenarios to reduce the energy consumption of LEO satellites. Secondly, to reduce the delay and failure rate of DS tasks, we propose an access threshold strategy for DS tasks to control the queue length and facilitate load balancing among multiple computing platforms. Thirdly, to evaluate the proposed offloading scheme, we develop the continuous-time Markov chain (CTMC) to model the traffic load on computing platforms, and the stationary distribution is solved employing the matrix-geometric method. Finally, numerical results for SEC are presented to validate the effectiveness of the proposed offloading scheme.","PeriodicalId":54229,"journal":{"name":"IEEE Transactions on Network Science and Engineering","volume":"11 6","pages":"6396-6407"},"PeriodicalIF":6.7,"publicationDate":"2024-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142679386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}