{"title":"基于Dueling双深度q网络的云RAN能量和频谱效率优化","authors":"Amjad Iqbal, Mau-Luen Tham, Yoong Choon Chang","doi":"10.1109/I2CACIS52118.2021.9495912","DOIUrl":null,"url":null,"abstract":"Cloud radio access network (CRAN) has gained considerable attention for the upcoming cellular network that can offload the mobile data traffic and reduce energy consumption by deploying intelligent distributed multiple remote radio units (RRHs). However, it is still very challenging to achieve an optimal global strategy to maximize the performance of energy efficiency (EE) and spectral efficiency (SE) simultaneously due to non-convex and combinatorial features. Deep reinforcement learning (DRL)-based framework becomes an imperative solution to jointly maximize the EE-SE performance and guarantee the user quality of service (QoS) demands in downlink CRAN. Furthermore, in order to deal with the large state-action space problem, we leverage dueling double deep Q-network (D3QN) to achieve the nearly optimal control strategy. In the end, extensive simulation results demonstrate the effectiveness of the proposed D3QN method over the conventional-DRL methods.","PeriodicalId":210770,"journal":{"name":"2021 IEEE International Conference on Automatic Control & Intelligent Systems (I2CACIS)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Energy- and Spectral- Efficient Optimization in Cloud RAN based on Dueling Double Deep Q-Network\",\"authors\":\"Amjad Iqbal, Mau-Luen Tham, Yoong Choon Chang\",\"doi\":\"10.1109/I2CACIS52118.2021.9495912\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Cloud radio access network (CRAN) has gained considerable attention for the upcoming cellular network that can offload the mobile data traffic and reduce energy consumption by deploying intelligent distributed multiple remote radio units (RRHs). 
However, it is still very challenging to achieve an optimal global strategy to maximize the performance of energy efficiency (EE) and spectral efficiency (SE) simultaneously due to non-convex and combinatorial features. Deep reinforcement learning (DRL)-based framework becomes an imperative solution to jointly maximize the EE-SE performance and guarantee the user quality of service (QoS) demands in downlink CRAN. Furthermore, in order to deal with the large state-action space problem, we leverage dueling double deep Q-network (D3QN) to achieve the nearly optimal control strategy. In the end, extensive simulation results demonstrate the effectiveness of the proposed D3QN method over the conventional-DRL methods.\",\"PeriodicalId\":210770,\"journal\":{\"name\":\"2021 IEEE International Conference on Automatic Control & Intelligent Systems (I2CACIS)\",\"volume\":\"40 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Automatic Control & Intelligent Systems (I2CACIS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/I2CACIS52118.2021.9495912\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Automatic Control & Intelligent Systems 
(I2CACIS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/I2CACIS52118.2021.9495912","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
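The D3QN named in the abstract combines two standard DQN refinements: a dueling head that splits the Q-function into state value and action advantage, and double-DQN target computation that decouples action selection from action evaluation. Below is a minimal NumPy sketch of both mechanisms; the linear feature maps, toy shapes, and transition values are illustrative assumptions, not the paper's actual network or CRAN state/action/reward design.

```python
import numpy as np

def dueling_q(features, w_value, w_adv):
    """Dueling head: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)).
    Subtracting the mean advantage makes the V/A decomposition identifiable."""
    v = features @ w_value                          # (batch, 1): state value V(s)
    a = features @ w_adv                            # (batch, n_actions): advantages A(s, a)
    return v + (a - a.mean(axis=1, keepdims=True))  # (batch, n_actions): Q-values

def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN target: the online network selects the next action,
    the target network evaluates it, reducing Q-value overestimation bias."""
    best = q_online_next.argmax(axis=1)                 # action selection (online net)
    q_eval = q_target_next[np.arange(len(best)), best]  # action evaluation (target net)
    return reward + gamma * q_eval * (1.0 - done)       # bootstrap only if not terminal

# Toy batch of two transitions (hypothetical values):
targets = double_dqn_target(
    reward=np.array([1.0, 1.0]),
    gamma=0.9,
    q_online_next=np.array([[1.0, 2.0], [3.0, 0.0]]),
    q_target_next=np.array([[5.0, 6.0], [7.0, 8.0]]),
    done=np.array([0.0, 1.0]),
)
print(targets)  # [6.4 1. ]
```

In a CRAN setting such as the paper's, the state would typically encode channel and RRH status, the discrete actions would cover RRH on/off and resource-allocation decisions, and the reward would combine the EE and SE objectives subject to QoS constraints.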