Joint Distributed Computation Offloading and Radio Resource Slicing Based on Reinforcement Learning in Vehicular Networks

IEEE Open Journal of the Communications Society | IF 6.3 | Q1 (Engineering, Electrical & Electronic)
Publication date: 2025-01-23 | Volume 6, pp. 1231-1245 | DOI: 10.1109/OJCOMS.2025.3533093 | https://ieeexplore.ieee.org/document/10851385/
Authors: Khaled A. Alaghbari; Heng-Siong Lim; Charilaos C. Zarakovitis; N. M. Abdul Latiff; Sharifah Hafizah Syed Ariffin; Su Fong Chien
Citations: 0

Abstract

Computation offloading in Internet of Vehicles (IoV) networks is a promising technology for transferring computation-intensive and latency-sensitive tasks to mobile-edge computing (MEC) or cloud servers. Privacy is an important concern in vehicular networks, as a centralized system can compromise it by sharing raw data from MEC servers with cloud servers. A distributed system offers a more attractive solution, allowing each MEC server to process data locally and make offloading decisions without sharing sensitive information. However, without a mechanism to control its load, the cloud server can become overloaded. In this study, we propose distributed computation offloading systems that use reinforcement learning, such as Q-learning, to optimize offloading decisions and balance the computation load across the network while minimizing the number of task offloading switches. We introduce both fixed and adaptive low-complexity mechanisms for allocating cloud server resources, formulating the reward function of the Q-learning method to achieve efficient offloading decisions. The proposed adaptive approach enables cooperative utilization of cloud resources by multiple agents. A joint optimization framework is established to maximize overall communication and computing resource utilization, where task offloading is performed on a small time scale at the local edge servers, while radio resource slicing is adjusted on a larger time scale at the cloud server. Simulation results using real vehicle trace datasets demonstrate the effectiveness of the proposed distributed systems in achieving lower computation load costs, offloading switching costs, and latency, while increasing cloud server utilization compared to centralized systems.
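The abstract describes a Q-learning agent at each edge server whose reward trades off computation load cost against a penalty for switching the offloading decision. The paper's actual state space, reward weights, and environment model are not reproduced here; the following is a minimal, self-contained sketch of how such a tabular Q-learning offloading agent might be structured, with an illustrative two-action choice (local vs. cloud execution) and hypothetical load dynamics, switching penalty, and learning parameters.

```python
import numpy as np

# Minimal tabular Q-learning sketch for a single edge (MEC) agent deciding where
# to execute an arriving task. State = discretized (local load, cloud load);
# actions = {0: process locally, 1: offload to cloud}. The reward penalizes the
# chosen server's load and a switching cost whenever the decision changes,
# loosely mirroring the objectives stated in the abstract. All numeric values
# and the toy environment are illustrative assumptions, not the paper's model.

N_LOAD_LEVELS = 5          # discretized load levels for local and cloud queues
ACTIONS = [0, 1]           # 0 = local execution, 1 = offload to cloud
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
SWITCH_PENALTY = 0.5       # cost of changing the previous offloading decision

rng = np.random.default_rng(0)
Q = np.zeros((N_LOAD_LEVELS, N_LOAD_LEVELS, len(ACTIONS)))

def step(local_load, cloud_load, action):
    """Toy load dynamics: the chosen server's queue grows, the other drains."""
    if action == 0:
        local_load = min(local_load + 1, N_LOAD_LEVELS - 1)
        cloud_load = max(cloud_load - 1, 0)
    else:
        cloud_load = min(cloud_load + 1, N_LOAD_LEVELS - 1)
        local_load = max(local_load - 1, 0)
    return local_load, cloud_load

def reward(local_load, cloud_load, action, prev_action):
    load_cost = local_load if action == 0 else cloud_load
    switch_cost = SWITCH_PENALTY if (prev_action is not None and action != prev_action) else 0.0
    return -(load_cost + switch_cost)

local_load, cloud_load, prev_action = 0, 0, None
for episode in range(5000):
    # epsilon-greedy action selection
    if rng.random() < EPS:
        action = int(rng.choice(ACTIONS))
    else:
        action = int(np.argmax(Q[local_load, cloud_load]))
    r = reward(local_load, cloud_load, action, prev_action)
    next_local, next_cloud = step(local_load, cloud_load, action)
    # standard Q-learning temporal-difference update
    td_target = r + GAMMA * np.max(Q[next_local, next_cloud])
    Q[local_load, cloud_load, action] += ALPHA * (td_target - Q[local_load, cloud_load, action])
    local_load, cloud_load, prev_action = next_local, next_cloud, action

print("Greedy policy (rows: local load, cols: cloud load; 0=local, 1=cloud):")
print(np.argmax(Q, axis=2))
```

In the distributed setting described by the authors, each MEC server would run its own agent of this kind while radio resource slicing is adjusted on a slower time scale at the cloud; the sketch above covers only a single agent's per-task decision loop.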
Source journal
CiteScore: 13.70
Self-citation rate: 3.80%
Articles per year: 94
Review time: 10 weeks
About the journal: The IEEE Open Journal of the Communications Society (OJ-COMS) is an open access, all-electronic journal that publishes original high-quality manuscripts on advances in the state of the art of telecommunications systems and networks. The papers in IEEE OJ-COMS are included in Scopus. Submissions reporting new theoretical findings (including novel methods, concepts, and studies) and practical contributions (including experiments and development of prototypes) are welcome. Additionally, survey and tutorial articles are considered. The IEEE OJ-COMS received its debut impact factor of 7.9 according to the Journal Citation Reports (JCR) 2023. The journal covers science, technology, applications, and standards for information organization, collection, and transfer using electronic, optical, and wireless channels and networks. Specific areas covered include:
- Systems and network architecture, control and management
- Protocols, software, and middleware
- Quality of service, reliability, and security
- Modulation, detection, coding, and signaling
- Switching and routing
- Mobile and portable communications
- Terminals and other end-user devices
- Networks for content distribution and distributed computing
- Communications-based distributed resources control
Latest articles in this journal:
- Efficient Symbol Detection for Holographic MIMO Communications With Unitary Approximate Message Passing
- Variable-Rate Incremental-Redundancy HARQ for Finite Blocklengths
- The Role of Digital Twin in 6G-Based URLLCs: Current Contributions, Research Challenges, and Next Directions
- Trustworthy Reputation for Federated Learning in O-RAN Using Blockchain and Smart Contracts
- Efficient Spatial Channel Estimation in Extremely Large Antenna Array Communication Systems: A Subspace Approximated Matrix Completion Approach