Self-Optimized Agent for Load Balancing and Energy Efficiency: A Reinforcement Learning Framework With Hybrid Action Space

Bishoy Salama Attia; Aamen Elgharably; Mariam Nabil Aboelwafa; Ghada Alsuhli; Karim Banawan; Karim G. Seddik

IEEE Open Journal of the Communications Society
DOI: 10.1109/OJCOMS.2024.3429284
Published: July 16, 2024
PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10599522
Abstract: We consider the problem of jointly enhancing network throughput, minimizing energy consumption, and improving the coverage of mobile networks. The problem is cast as a reinforcement learning (RL) problem. The reward function accounts for the joint optimization of throughput, energy consumption, and coverage (through the number of uncovered users); our RL framework allows the network operator to assign a weight to each of these cost functions according to the operator's preferences. Moreover, the state is defined by key performance indicators (KPIs) that are readily available on the network operator's side. Finally, the action space of the RL agent is a hybrid action space with two continuous action elements, namely, cell individual offsets (CIOs) and transmission powers, and one discrete action element, switching MIMO ON and OFF. To that end, we propose a new layered RL agent structure to handle this hybrid action space. We test the proposed RL agent in two scenarios: a simple (proof-of-concept) scenario and a realistic network scenario. Our results show significant performance gains for the proposed RL agent compared to baseline approaches, such as systems without optimization or RL agents that optimize only one or two parameters.
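To make the framework concrete, the following is a minimal sketch of how the operator-weighted reward and the hybrid action space described in the abstract could be expressed in Python using the Gymnasium spaces API. This is not the authors' implementation: the cell count, value ranges, weights, and normalization constants are assumptions chosen for illustration.

```python
# A minimal sketch, assuming a Gymnasium-style environment; NOT the authors'
# code. Cell count, value ranges, weights, and normalization constants are
# illustrative placeholders.
import numpy as np
from gymnasium import spaces

N_CELLS = 6  # hypothetical number of cells in the scenario

# Hybrid action space: two continuous elements (per-cell CIOs and transmit
# powers) and one discrete element (per-cell MIMO ON/OFF), kept as separate
# entries so different layers of the agent can act on different parts.
action_space = spaces.Dict({
    "cio_db":       spaces.Box(low=-6.0, high=6.0, shape=(N_CELLS,), dtype=np.float32),
    "tx_power_dbm": spaces.Box(low=30.0, high=46.0, shape=(N_CELLS,), dtype=np.float32),
    "mimo_on":      spaces.MultiBinary(N_CELLS),
})

def reward(throughput_mbps: float, energy_w: float, n_uncovered: int,
           w_tp: float = 1.0, w_en: float = 1.0, w_cov: float = 1.0) -> float:
    """Operator-weighted reward: throughput is rewarded, while energy
    consumption and the number of uncovered users are penalized. The
    operator tunes w_tp, w_en, and w_cov to express its preferences;
    the divisors are placeholder normalization constants."""
    return (w_tp * throughput_mbps / 100.0
            - w_en * energy_w / 1000.0
            - w_cov * float(n_uncovered))

# Example: sample a random hybrid action and score a hypothetical KPI reading.
a = action_space.sample()
print(a["mimo_on"], reward(throughput_mbps=250.0, energy_w=800.0, n_uncovered=3))
```

Keeping the continuous elements and the discrete element as separate entries of a Dict space is one natural fit for the layered agent structure the abstract describes, since each layer can act on its own slice of the hybrid space.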
About the Journal:
The IEEE Open Journal of the Communications Society (OJ-COMS) is an open-access, all-electronic journal that publishes original, high-quality manuscripts on advances in the state of the art of telecommunications systems and networks. Papers in IEEE OJ-COMS are indexed in Scopus. Submissions reporting new theoretical findings (including novel methods, concepts, and studies) and practical contributions (including experiments and development of prototypes) are welcome. Additionally, survey and tutorial articles are considered. IEEE OJ-COMS received its debut impact factor of 7.9 according to the Journal Citation Reports (JCR) 2023.
The IEEE Open Journal of the Communications Society covers science, technology, applications and standards for information organization, collection and transfer using electronic, optical and wireless channels and networks. Some specific areas covered include:
Systems and network architecture, control and management
Protocols, software, and middleware
Quality of service, reliability, and security
Modulation, detection, coding, and signaling
Switching and routing
Mobile and portable communications
Terminals and other end-user devices
Networks for content distribution and distributed computing
Communications-based distributed resource control.