Dhanashree Kulkarni, Mithra Venkatesan, Anju V. Kulkarni
{"title":"基于行为批判的强化学习,实现 5G RAN 切片中的联合资源分配和吞吐量最大化","authors":"Dhanashree Kulkarni, Mithra Venkatesan, Anju V. Kulkarni","doi":"10.1007/s11277-024-11526-0","DOIUrl":null,"url":null,"abstract":"<p>With the advent of fifth generation (5G) mobile communication network slicing technology, the range of application scenarios is expanding significantly. For 5G to function well, it necessitates little delay, a fast rate of data transfer, and the ability to handle a large number of connections. This demanding service requires the allocation of resources in a dynamic manner, while maintaining a very high level of reliability in terms of Quality of Service (QoS).The applications like autonomous driving, telesurgery, etc. have stringent QoS demands and the present design of slices is not suitable for these services. Therefore, latency has been regarded as a crucial factor in the design of the slices. Conventional optimization algorithms often lack robustness and adaptability to dynamic environments, getting stuck in local optima and failing to generalize to varying conditions. Our solution utilizes Reinforcement Learning (RL) to allocate resources to the slices. The utilization of restricted resources can be optimized through the reconfiguration of slices. The ability of RL to acquire knowledge from the surroundings enables our solution to adjust to varying network conditions, enhance the allocation of resources and improve quality of service over a period of time for different network slices. This study introduces the Deep Actor Critic Reinforcement Learning- Network Slicing (DACRL-NS) technique, which utilizes Deep Actor Critic Reinforcement learning for efficient resource allocation to network slices. The objective is to achieve optimal throughput in the network. If the slices fail to meet the minimum criteria, they will be omitted from the allocation. With increasing training episodes, our Actor-Critic algorithm enhances average cumulative rewards and resource allocation efficiency, demonstrating continuous learning and improved decision-making.The simulated suggested system demonstrates an average throughput improvement of 8.92% and 16.36% with respect to the rate requirement and latency requirement, respectively. The data also demonstrate a 17.14% increase in the overall network throughput.</p>","PeriodicalId":23827,"journal":{"name":"Wireless Personal Communications","volume":null,"pages":null},"PeriodicalIF":1.9000,"publicationDate":"2024-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Actor Critic Based Reinforcement Learning for Joint Resource Allocation and Throughput Maximization in 5G RAN Slicing\",\"authors\":\"Dhanashree Kulkarni, Mithra Venkatesan, Anju V. Kulkarni\",\"doi\":\"10.1007/s11277-024-11526-0\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>With the advent of fifth generation (5G) mobile communication network slicing technology, the range of application scenarios is expanding significantly. For 5G to function well, it necessitates little delay, a fast rate of data transfer, and the ability to handle a large number of connections. This demanding service requires the allocation of resources in a dynamic manner, while maintaining a very high level of reliability in terms of Quality of Service (QoS).The applications like autonomous driving, telesurgery, etc. have stringent QoS demands and the present design of slices is not suitable for these services. 
Therefore, latency has been regarded as a crucial factor in the design of the slices. Conventional optimization algorithms often lack robustness and adaptability to dynamic environments, getting stuck in local optima and failing to generalize to varying conditions. Our solution utilizes Reinforcement Learning (RL) to allocate resources to the slices. The utilization of restricted resources can be optimized through the reconfiguration of slices. The ability of RL to acquire knowledge from the surroundings enables our solution to adjust to varying network conditions, enhance the allocation of resources and improve quality of service over a period of time for different network slices. This study introduces the Deep Actor Critic Reinforcement Learning- Network Slicing (DACRL-NS) technique, which utilizes Deep Actor Critic Reinforcement learning for efficient resource allocation to network slices. The objective is to achieve optimal throughput in the network. If the slices fail to meet the minimum criteria, they will be omitted from the allocation. With increasing training episodes, our Actor-Critic algorithm enhances average cumulative rewards and resource allocation efficiency, demonstrating continuous learning and improved decision-making.The simulated suggested system demonstrates an average throughput improvement of 8.92% and 16.36% with respect to the rate requirement and latency requirement, respectively. The data also demonstrate a 17.14% increase in the overall network throughput.</p>\",\"PeriodicalId\":23827,\"journal\":{\"name\":\"Wireless Personal Communications\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2024-08-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Wireless Personal Communications\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s11277-024-11526-0\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"TELECOMMUNICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Wireless Personal Communications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s11277-024-11526-0","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Actor Critic Based Reinforcement Learning for Joint Resource Allocation and Throughput Maximization in 5G RAN Slicing
With the advent of fifth generation (5G) mobile communication network slicing technology, the range of application scenarios is expanding significantly. For 5G to function well, it requires low latency, high data rates, and the ability to handle a large number of connections. These demanding services require resources to be allocated dynamically while maintaining a very high level of reliability in terms of Quality of Service (QoS). Applications such as autonomous driving and telesurgery have stringent QoS demands, and the present design of slices is not suitable for these services. Therefore, latency has been regarded as a crucial factor in the design of the slices. Conventional optimization algorithms often lack robustness and adaptability to dynamic environments, getting stuck in local optima and failing to generalize to varying conditions. Our solution utilizes Reinforcement Learning (RL) to allocate resources to the slices. The utilization of limited resources can be optimized through the reconfiguration of slices. The ability of RL to acquire knowledge from its environment enables our solution to adapt to varying network conditions, enhance resource allocation, and improve quality of service over time for different network slices. This study introduces the Deep Actor-Critic Reinforcement Learning for Network Slicing (DACRL-NS) technique, which uses deep actor-critic reinforcement learning to allocate resources efficiently to network slices. The objective is to achieve optimal throughput in the network. Slices that fail to meet the minimum criteria are omitted from the allocation. With increasing training episodes, our actor-critic algorithm improves average cumulative rewards and resource allocation efficiency, demonstrating continuous learning and improved decision-making. In simulation, the proposed system demonstrates average throughput improvements of 8.92% and 16.36% with respect to the rate requirement and the latency requirement, respectively. The data also show a 17.14% increase in overall network throughput.
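The abstract does not spell out the algorithmic details, but the core idea it names (an actor proposing per-slice allocations and a critic scoring them from a temporal-difference error, with throughput as the reward) can be illustrated with a minimal sketch. The example below is a hedged toy construction, not the paper's implementation: the environment `SliceEnv`, the choice of state (unmet per-slice rate demand), the action (which slice receives the next resource block), and all hyperparameters are hypothetical assumptions for illustration only.

```python
# Minimal actor-critic sketch for slice resource allocation (illustrative only).
# Assumptions: state = unmet rate demand per slice, action = which slice gets
# the next resource block, reward = throughput gained by that block.
import torch
import torch.nn as nn
from torch.distributions import Categorical

N_SLICES, N_BLOCKS, EPISODES = 3, 20, 500
GAMMA, ACTOR_LR, CRITIC_LR = 0.99, 1e-3, 1e-3

class SliceEnv:
    """Toy environment: distribute N_BLOCKS resource blocks across slices."""
    def reset(self):
        self.demand = torch.rand(N_SLICES) * 10.0   # unmet rate demand per slice
        self.blocks_left = N_BLOCKS
        return self.demand.clone()

    def step(self, slice_id):
        served = min(self.demand[slice_id].item(), 1.0)  # one block serves up to 1 unit
        self.demand[slice_id] -= served
        self.blocks_left -= 1
        done = self.blocks_left == 0
        return self.demand.clone(), served, done         # reward = throughput gained

actor = nn.Sequential(nn.Linear(N_SLICES, 64), nn.ReLU(), nn.Linear(64, N_SLICES))
critic = nn.Sequential(nn.Linear(N_SLICES, 64), nn.ReLU(), nn.Linear(64, 1))
opt_a = torch.optim.Adam(actor.parameters(), lr=ACTOR_LR)
opt_c = torch.optim.Adam(critic.parameters(), lr=CRITIC_LR)

env = SliceEnv()
for ep in range(EPISODES):
    state, done, total = env.reset(), False, 0.0
    while not done:
        dist = Categorical(logits=actor(state))          # actor picks a slice
        action = dist.sample()
        next_state, reward, done = env.step(action.item())

        # TD error doubles as the advantage estimate for the actor update.
        with torch.no_grad():
            target = reward + (0.0 if done else GAMMA * critic(next_state).item())
        value = critic(state).squeeze()
        td_error = target - value

        actor_loss = -dist.log_prob(action) * td_error.detach()
        critic_loss = td_error.pow(2)
        opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
        opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

        state, total = next_state, total + reward
    if (ep + 1) % 100 == 0:
        print(f"episode {ep + 1}: episode throughput {total:.2f}")
```

In a faithful reproduction of the paper's DACRL-NS scheme, the state would also encode latency requirements and channel conditions, and slices failing their minimum criteria would be masked out of the action space, mirroring the omission rule described in the abstract.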
Journal description:
The Journal on Mobile Communication and Computing ...
Publishes tutorial, survey, and original research papers addressing mobile communications and computing;
Investigates theoretical, engineering, and experimental aspects of radio communications, voice, data, images, and multimedia;
Explores propagation, system models, speech and image coding, multiple access techniques, protocols, performance evaluation, radio local area networks, and networking and architectures, etc.;
Wireless Personal Communications is an archival, peer-reviewed, scientific and technical journal addressing mobile communications and computing. It investigates theoretical, engineering, and experimental aspects of radio communications, voice, data, images, and multimedia. A partial list of topics included in the journal is: propagation, system models, speech and image coding, multiple access techniques, protocols, performance evaluation, radio local area networks, and networking and architectures.
In addition to the above-mentioned areas, the journal also accepts papers dealing with interdisciplinary aspects of wireless communications, including big data and analytics, business and economy, society, and the environment.
The journal features five principal types of papers: full technical papers, short papers, technical aspects of policy and standardization, letters offering new research thoughts and experimental ideas, and invited papers on important and emerging topics authored by renowned experts.