Multi-Agent DRL-Based Energy Harvesting for Freshness of Data in UAV-Assisted Wireless Sensor Networks

IF 4.7 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · IEEE Transactions on Network and Service Management · Pub Date: 2024-09-04 · DOI: 10.1109/TNSM.2024.3454217
Mesfin Leranso Betalo;Supeng Leng;Hayla Nahom Abishu;Abegaz Mohammed Seid;Maged Fakirah;Aiman Erbad;Mohsen Guizani
{"title":"基于 DRL 的多代理能量收集,提高无人机辅助无线传感器网络的数据新鲜度","authors":"Mesfin Leranso Betalo;Supeng Leng;Hayla Nahom Abishu;Abegaz Mohammed Seid;Maged Fakirah;Aiman Erbad;Mohsen Guizani","doi":"10.1109/TNSM.2024.3454217","DOIUrl":null,"url":null,"abstract":"In sixth-generation (6G) networks, unmanned aerial vehicles (UAVs) are expected to be widely used as aerial base stations (ABS) due to their adaptability, low deployment costs, and ultra-low latency responses. However, UAVs consume large amounts of power to collect data from multiple sensor nodes (SNs). This can limit their flight time and transmission efficiency, resulting in delays and low information freshness. In this paper, we present a multi-access edge computing (MEC)-integrated UAV-assisted wireless sensor network (WSN) with a laser technology-based energy harvesting (EH) system that makes the UAV act as a flying energy charger to address these issues. This work aims to minimize the age of information (AoI) and improve energy efficiency by jointly optimizing the UAV trajectories, EH, task scheduling, and data offloading. The joint optimization problem is formulated as a Markov decision process (MDP) and then transformed into a stochastic game model to handle the complexity and dynamics of the environment. We adopt a multi-agent deep Q-network (MADQN) algorithm to solve the formulated optimization problem. With the MADQN algorithm, UAVs can determine the best data collection and EH decisions to minimize their energy consumption and efficiently collect data from multiple SNs, leading to reduced AoI and improved energy efficiency. Compared to the benchmark algorithms such as deep deterministic policy gradient (DDPG), Dueling DQN, asynchronous advantage actor-critic (A3C) and Greedy, the MADQN algorithm has a lower average AoI and improves energy efficiency by 95.5%, 89.9%, 78.02% and 65.52% respectively.","PeriodicalId":13423,"journal":{"name":"IEEE Transactions on Network and Service Management","volume":"21 6","pages":"6527-6541"},"PeriodicalIF":4.7000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-Agent DRL-Based Energy Harvesting for Freshness of Data in UAV-Assisted Wireless Sensor Networks\",\"authors\":\"Mesfin Leranso Betalo;Supeng Leng;Hayla Nahom Abishu;Abegaz Mohammed Seid;Maged Fakirah;Aiman Erbad;Mohsen Guizani\",\"doi\":\"10.1109/TNSM.2024.3454217\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In sixth-generation (6G) networks, unmanned aerial vehicles (UAVs) are expected to be widely used as aerial base stations (ABS) due to their adaptability, low deployment costs, and ultra-low latency responses. However, UAVs consume large amounts of power to collect data from multiple sensor nodes (SNs). This can limit their flight time and transmission efficiency, resulting in delays and low information freshness. In this paper, we present a multi-access edge computing (MEC)-integrated UAV-assisted wireless sensor network (WSN) with a laser technology-based energy harvesting (EH) system that makes the UAV act as a flying energy charger to address these issues. This work aims to minimize the age of information (AoI) and improve energy efficiency by jointly optimizing the UAV trajectories, EH, task scheduling, and data offloading. 
The joint optimization problem is formulated as a Markov decision process (MDP) and then transformed into a stochastic game model to handle the complexity and dynamics of the environment. We adopt a multi-agent deep Q-network (MADQN) algorithm to solve the formulated optimization problem. With the MADQN algorithm, UAVs can determine the best data collection and EH decisions to minimize their energy consumption and efficiently collect data from multiple SNs, leading to reduced AoI and improved energy efficiency. Compared to the benchmark algorithms such as deep deterministic policy gradient (DDPG), Dueling DQN, asynchronous advantage actor-critic (A3C) and Greedy, the MADQN algorithm has a lower average AoI and improves energy efficiency by 95.5%, 89.9%, 78.02% and 65.52% respectively.\",\"PeriodicalId\":13423,\"journal\":{\"name\":\"IEEE Transactions on Network and Service Management\",\"volume\":\"21 6\",\"pages\":\"6527-6541\"},\"PeriodicalIF\":4.7000,\"publicationDate\":\"2024-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Network and Service Management\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10664472/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Network and Service Management","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10664472/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

In sixth-generation (6G) networks, unmanned aerial vehicles (UAVs) are expected to be widely used as aerial base stations (ABS) due to their adaptability, low deployment costs, and ultra-low latency responses. However, UAVs consume large amounts of power to collect data from multiple sensor nodes (SNs). This can limit their flight time and transmission efficiency, resulting in delays and low information freshness. In this paper, we present a multi-access edge computing (MEC)-integrated UAV-assisted wireless sensor network (WSN) with a laser technology-based energy harvesting (EH) system that makes the UAV act as a flying energy charger to address these issues. This work aims to minimize the age of information (AoI) and improve energy efficiency by jointly optimizing the UAV trajectories, EH, task scheduling, and data offloading. The joint optimization problem is formulated as a Markov decision process (MDP) and then transformed into a stochastic game model to handle the complexity and dynamics of the environment. We adopt a multi-agent deep Q-network (MADQN) algorithm to solve the formulated optimization problem. With the MADQN algorithm, UAVs can determine the best data collection and EH decisions to minimize their energy consumption and efficiently collect data from multiple SNs, leading to reduced AoI and improved energy efficiency. Compared to benchmark algorithms such as deep deterministic policy gradient (DDPG), Dueling DQN, asynchronous advantage actor-critic (A3C), and Greedy, the MADQN algorithm achieves a lower average AoI and improves energy efficiency by 95.5%, 89.9%, 78.02%, and 65.52%, respectively.
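The paper's implementation is not reproduced here, so the following is only a minimal sketch of how an MADQN learner of the kind the abstract describes could be organized: one independent deep Q-network per UAV agent, an epsilon-greedy policy over a small discrete action set (collect data, harvest energy, offload to the MEC server, move), and a reward that penalizes age of information and energy use. The action set, state layout, network sizes, and reward weights below are illustrative assumptions, not the authors' design.

```python
# Illustrative MADQN sketch for UAV data collection / energy harvesting.
# Action set, state layout, and reward weights are assumptions, not the paper's.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

# Assumed discrete action set for each UAV agent.
ACTIONS = ["collect_data", "harvest_energy", "offload_to_mec", "move"]


class QNet(nn.Module):
    """Small MLP mapping a UAV's local state (e.g., position, battery level,
    per-SN AoI values) to Q-values over the discrete action set."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)


class UAVAgent:
    """One DQN learner per UAV; agents share the environment but learn independently."""
    def __init__(self, state_dim: int, n_actions: int = len(ACTIONS), gamma: float = 0.99):
        self.q = QNet(state_dim, n_actions)
        self.target_q = QNet(state_dim, n_actions)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=50_000)
        self.gamma = gamma
        self.n_actions = n_actions

    def act(self, state, epsilon: float) -> int:
        if random.random() < epsilon:                       # explore
            return random.randrange(self.n_actions)
        with torch.no_grad():                               # exploit
            return int(self.q(torch.as_tensor(state, dtype=torch.float32)).argmax())

    def remember(self, s, a, r, s2, done):
        self.buffer.append((s, a, r, s2, float(done)))

    def sync_target(self):
        self.target_q.load_state_dict(self.q.state_dict())

    def learn(self, batch_size: int = 64):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, done = map(torch.as_tensor, zip(*batch))
        s, s2 = s.float(), s2.float()
        # Q(s, a) for the actions actually taken.
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            # Standard DQN target using the periodically synced target network.
            target = r.float() + self.gamma * self.target_q(s2).max(1).values * (1 - done)
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()


def reward(aoi: float, energy_used: float, w_aoi: float = 1.0, w_e: float = 0.1) -> float:
    """Assumed reward shape: penalize information staleness (AoI) and energy consumption."""
    return -(w_aoi * aoi + w_e * energy_used)
```

In a full training loop, each agent would call remember() on every (state, action, reward, next_state, done) transition, call learn() each step, periodically call sync_target(), and anneal epsilon from exploration toward exploitation; in the stochastic-game formulation, coordination among UAVs would arise from the shared environment rather than from shared parameters.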
Source journal
IEEE Transactions on Network and Service Management
Category: Computer Science - Computer Networks and Communications
CiteScore: 9.30
Self-citation rate: 15.10%
Articles published per year: 325
Journal description: IEEE Transactions on Network and Service Management will publish (online only) peer-reviewed archival quality papers that advance the state of the art and practical applications of network and service management. Theoretical research contributions (presenting new concepts and techniques) and applied contributions (reporting on experiences and experiments with actual systems) will be encouraged. These transactions will focus on the key technical issues related to: Management Models, Architectures and Frameworks; Service Provisioning, Reliability and Quality Assurance; Management Functions; Enabling Technologies; Information and Communication Models; Policies; Applications and Case Studies; Emerging Technologies and Standards.
Latest articles from this journal
Table of Contents
Guest Editors’ Introduction: Special Issue on Robust and Resilient Future Communication Networks
A Novel Adaptive Device-Free Passive Indoor Fingerprinting Localization Under Dynamic Environment
HSS: A Memory-Efficient, Accurate, and Fast Network Measurement Framework in Sliding Windows