Pub Date: 2025-04-23  DOI: 10.1109/TGCN.2025.3563625
Fulai Liu;Hai Huang;Ruxin Liu;Jinwei Yang;Luyao Suo;Ruiyan Du
In the case of interference perturbation, the wideband adaptive beamforming (WAB) weight vector may become mismatched, degrading interference suppression. To improve communication quality under interference position perturbation, this paper presents a multi-head self-attention convolutional neural network (MHSA-CNN)-based WAB algorithm with null broadening. In the presented approach, an MHSA-CNN structure is proposed to improve the prediction accuracy of the beamforming weight vector under interference perturbation. Specifically, by processing multiple attention heads in parallel to capture information from different signal subspaces, the MHSA mechanism enables the network to dynamically adjust the attention distribution over signal features and effectively extract the global features of the covariance matrix. Then, based on focused reconstruction and null broadening, an effective neural-network training label is constructed to enhance interference suppression. Finally, the well-trained MHSA-CNN accurately outputs, in real time, the weight vector suitable for WAB with null broadening. Simulation results demonstrate that the proposed algorithm suppresses interferences accurately within the interference perturbation range and enhances the output signal-to-interference-plus-noise ratio while ensuring real-time communication performance.
{"title":"Efficient Wideband Adaptive Beamforming With Null Broadening Using MHSA-CNN","authors":"Fulai Liu;Hai Huang;Ruxin Liu;Jinwei Yang;Luyao Suo;Ruiyan Du","doi":"10.1109/TGCN.2025.3563625","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3563625","url":null,"abstract":"In the case of interference perturbation, the wideband adaptive beamforming (WAB) weight vector may be mismatched, which leads to the decrease of interference suppression ability. To improve communication quality under the interference position perturbation, this paper presents a multi-head self-attention conventional neural network (MHSA-CNN)-based WAB algorithm with null broadening. In the presented approach, a MHSA-CNN structure is proposed to improve the prediction accuracy of beamforming weight vector in the case of interference perturbation. Specifically, by processing multiple attention heads in parallel to obtain the information of different signal subspaces, MHSA mechanism enables the network to dynamically adjust the attention distribution of signal features and effectively extract the global features of the covariance matrix. Then, based on focused reconstruction and null broadening, an effective neural network training label is used to enhance the ability of suppressing interferences. Finally, the well-trained MHSA-CNN can accurately output the weight vector suitable for WAB with null broadening in real time. 
Simulation results demonstrate that the proposed algorithm can suppress interferences accurately within the interference perturbation range and enhance the output signal-to-interference-plus-noise ratio while ensuring real-time communication performance.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2319-2328"},"PeriodicalIF":6.7,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
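The parallel-head attention described above can be sketched in a few lines. The following NumPy toy (random weights and a stand-in feature matrix, not the paper's trained network) shows how per-head scaled dot-product attention outputs are computed in parallel subspaces and concatenated:

```python
import numpy as np

def multi_head_self_attention(X, W_q, W_k, W_v, n_heads):
    """Split features into heads, apply scaled dot-product attention per head,
    and concatenate the head outputs."""
    d = X.shape[1]
    d_h = d // n_heads
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    outs = []
    for h in range(n_heads):
        s = slice(h * d_h, (h + 1) * d_h)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_h)
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)       # row-wise softmax attention weights
        outs.append(w @ V[:, s])
    return np.concatenate(outs, axis=1)         # heads concatenated: same shape as X

rng = np.random.default_rng(0)
R_feat = rng.standard_normal((16, 8))           # hypothetical covariance-matrix features
W_q, W_k, W_v = (rng.standard_normal((8, 8)) * 0.1 for _ in range(3))
Y = multi_head_self_attention(R_feat, W_q, W_k, W_v, n_heads=4)
print(Y.shape)  # (16, 8)
```

Each head attends over a different slice of the feature dimension, which is the "different signal subspaces" intuition from the abstract.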
Pub Date: 2025-04-21  DOI: 10.1109/TGCN.2025.3562895
Yared Abera Ergu;Van-Linh Nguyen
The advent of open radio access networks (O-RAN) has introduced intelligent, flexible, and multi-vendor network ecosystems. While O-RAN’s open interfaces and artificial intelligence (AI)-driven solutions offer improved performance, energy efficiency, and resource minimization for green networking, they also expose the system to new security vulnerabilities, particularly adversarial attacks. This paper presents a robust defense approach, termed RADAR, designed to secure deep reinforcement learning (DRL)-powered resource allocation mechanisms in O-RAN. RADAR is a multi-faceted defense framework that integrates adversarial input sanitization, proactive adversarial training, and adapted defensive distillation to counter policy infiltration attacks, gradient-based deceptive loss maximization, and signal perturbation injections into the O-CU via the O-DU in O-RAN. This study evaluates the effectiveness of RADAR not only against a novel attack variant, the policy infiltration attack (PIA), which manipulates environmental parameters to disrupt allocation decisions, but also against well-known adversarial techniques such as the fast gradient sign method (FGSM) and projected gradient descent (PGD). Experimental results demonstrate that RADAR achieves significant recovery in user data rates across three network slices: 73.33% for eMBB, 64.71% for mMTC, and 52.94% for uRLLC, outperforming the existing standalone approach. The findings highlight RADAR’s effectiveness in mitigating adversarial attack techniques, underscoring its potential to secure AI-driven core functions in intelligent O-RAN.
{"title":"RADAR: Robust DRL-Based Resource Allocation Against Adversarial Attacks in Intelligent O-RAN","authors":"Yared Abera Ergu;Van-Linh Nguyen","doi":"10.1109/TGCN.2025.3562895","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3562895","url":null,"abstract":"The advent of open radio access networks (O-RAN) has introduced intelligent, flexible, and multi-vendor network ecosystems. While O-RAN’s open interfaces and artificial intelligence (AI)-driven solutions offer improved performance, energy efficiency, and resource minimization for green networking, they also expose the system to new security vulnerabilities, particularly adversarial attacks. This paper presents a robust defense approach, termed RADAR, designed to secure deep reinforcement learning (DRL)-powered resource allocation mechanisms in O-RAN. RADAR is a multi-faceted defense framework that integrates adversarial input sanitization, proactive adversarial training, and adapted defensive distillation to counter policy infiltration attacks, gradient-based deceptive loss maximization, and signal perturbation injections into the O-CU via the O-DU in O-RAN. This study evaluates the effectiveness of RADAR not only against a novel attack variant—policy infiltration attack (PIA), which manipulates environmental parameters to disrupt allocation decisions, but also against well-known adversarial techniques such as the fast gradient sign method (FGSM) and projected gradient descent (PGD). Experimental results demonstrate that RADAR achieves significant recovery in user data rates across three network slices: 73.33% for eMBB, 64.71% for mMTC and 52.94% for uRLLC, outperforming the existing standalone approach. 
The findings highlight RADAR’s effectiveness in mitigating adversarial attack techniques, underscoring its potential to secure AI-driven core functions in intelligent O-RAN.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2305-2318"},"PeriodicalIF":6.7,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
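Of the attack techniques listed, FGSM is the simplest to illustrate. This minimal sketch applies one FGSM step to a toy linear model (the quadratic loss and weights are hypothetical stand-ins, not RADAR's DRL policy); adversarial training would then mix the perturbed samples back into the training set:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast gradient sign method: one epsilon-step along the sign of the loss gradient."""
    return x + eps * np.sign(grad)

# Toy linear "policy head": loss = 0.5 * (w.x - y)^2, so dL/dx = (w.x - y) * w
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 2.0, 0.5])
y = 0.0
grad_x = (w @ x - y) * w
x_adv = fgsm_perturb(x, grad_x, eps=0.1)

loss = 0.5 * (w @ x - y) ** 2
loss_adv = 0.5 * (w @ x_adv - y) ** 2
print(loss_adv > loss)  # True: the perturbation increases the loss
```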
Pub Date: 2025-04-15  DOI: 10.1109/TGCN.2025.3560652
Arhum Ahmad;Satyam Agarwal
This paper introduces a novel machine learning-based receiver for symbol detection in a multiple-input single-output (MISO) system, optimized for next-generation vehicular networks. The receiver operates without channel state information (CSI), leveraging an innovative feature-selection strategy that enhances its adaptability to dynamic, real-world communication environments. Key components include Neural Adaptive Symbol Detection (NASD), which provides an initial detection framework, and the Context-Enhanced Symbol Detector (CESD), a fine-tuning mechanism that dynamically adjusts to varying signal conditions. These innovations equip the receiver with robustness against unpredictable vehicular communication challenges such as rapid movement, Doppler effects, and multipath fading. The system is evaluated using a testbed featuring a custom-built UAV to emulate complex vehicle dynamics. This setup enables rigorous testing under a variety of conditions, including static, maneuvering, and hovering scenarios. Experimental results demonstrate the receiver’s ability to sustain low bit error rates across a wide range of signal-to-noise ratios, significantly outperforming non-adaptive methods, especially in dynamic environments. The combination of NASD and CESD facilitates real-time adaptation without the need for CSI or extensive pre-training, establishing this approach as an efficient, low-complexity receiver solution for modern vehicular communication systems.
{"title":"Adaptive ML MISO Receiver: Conditional Fine-Tuning Without CSI","authors":"Arhum Ahmad;Satyam Agarwal","doi":"10.1109/TGCN.2025.3560652","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3560652","url":null,"abstract":"This paper introduces a novel machine learning-based receiver for symbol detection in a Multiple-Input Single-Output system, optimized for next-generation vehicular networks. The receiver operates without channel state information (CSI), leveraging an innovative feature selection strategy that enhances its adaptability to dynamic, real-world communication environments. Key components include Neural Adaptive Symbol Detection (NASD), which provides an initial detection framework, and the Context-Enhanced Symbol Detector (CESD), a fine-tuning mechanism that dynamically adjusts to varying signal conditions. These innovations equip the receiver with robustness against unpredictable vehicular communication challenges, such as rapid movement, Doppler effects, and multipath fading. The system is evaluated using testbed featuring a custom-built UAV to emulate complex vehicle dynamics. This setup enables rigorous testing under a variety of conditions, including static, maneuvering, and hovering scenarios. Experimental results demonstrate the receiver’s ability to sustain low bit error rates across a wide range of signal-to-noise ratios, significantly outperforming non-adaptive methods, especially in dynamic environments. 
The combination of NASD and CESD facilitates real-time adaptation without the need for CSI or extensive pre-training, establishing this approach as an efficient, low-complexity receiver solution for modern vehicular communication systems.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2292-2304"},"PeriodicalIF":6.7,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
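As context for the detection task above, the classical baseline is minimum-distance (nearest-constellation-point) detection. This NumPy sketch of QPSK detection under additive noise is a generic illustration of the problem setting, not the NASD/CESD architecture:

```python
import numpy as np

# Unit-energy QPSK constellation
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def nearest_symbol(y, constellation=QPSK):
    """Minimum-distance detection: map each received sample to the closest point."""
    d = np.abs(y[:, None] - constellation[None, :])
    return constellation[d.argmin(axis=1)]

rng = np.random.default_rng(1)
tx = QPSK[rng.integers(0, 4, size=1000)]                       # transmitted symbols
noise = 0.1 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
rx = tx + noise                                                # received samples
ser = np.mean(nearest_symbol(rx) != tx)                        # symbol-error rate
print(ser)  # ~0 at this noise level
```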
Pub Date: 2025-04-14  DOI: 10.1109/TGCN.2025.3559505
Ran Wang;Rixin Wu;Linfeng Liu;Changyan Yi;Kun Zhu;Ping Wang;Dusit Niyato
The increasing demands of data computation and storage for cloud-based services motivate the development and deployment of large-scale data centers (DCs). The energy demand of these facilities is rising rapidly and has become a noticeable challenge for current power networks. The smart grid (SG) is deemed the future power-system paradigm, enabling a more affordable and sustainable energy supply that can effectively relieve the load pressure from DCs. Moreover, with growing concerns over harmful emissions from fossil-fuel combustion, the exploitation of renewable energy sources (RES) has attracted extensive attention and can benefit SGs, DCs, and society at large. However, the geo-distributed nature of DCs and SGs and the uncertainty of RES production pose severe challenges to the optimal management of computation and energy resources in such a tripartite coupled system. Focusing on these issues, a joint energy and computation workload management framework is proposed to enable a sustainable DC paradigm with distributed RES. Specifically, a three-layer game is formulated to model the interactions among the energy market, data center operators (DCOs), and SGs. The market includes a certain amount of RES that must be dispatched. The SG offers the DCO an electricity selling price while simultaneously importing RES from the market at a buying price so as to maximize its benefit. The DCO allocates the workload to different DCs, aiming to minimize the costs of energy consumption and carbon emissions. The interactive processes between the entities are further decomposed into two coupled Stackelberg games. We obtain the equilibrium state of the game and prove its uniqueness and optimality. Simulation experiments evaluate the performance of the joint energy and computation workload management scheme and show its superiority over counterparts in utilizing renewable energy and reducing emissions.
Furthermore, the impacts of various parameters on system utility are investigated carefully. The proposed approach and the obtained results provide useful insights that help the DCO develop rational management strategies.
{"title":"Joint Energy and Computation Workload Management for Geo-Distributed Data Centers","authors":"Ran Wang;Rixin Wu;Linfeng Liu;Changyan Yi;Kun Zhu;Ping Wang;Dusit Niyato","doi":"10.1109/TGCN.2025.3559505","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3559505","url":null,"abstract":"The increasing demands of data computation and storage for cloud-based services motivate the development and deployment of large-scale data centers (DCs). The energy demand of these devices is rising rapidly and becoming a noticeable challenge for current power networks. The smart grid (SG) is deemed as the future power system paradigm enabling more affordable and sustainable energy supply, which can effectively relieve the load pressure from DCs. Moreover, with growing concerns regarding harmful emissions due to combustion of fossil fuels, the exploitation of renewable energy sources (RES) has attracted extensive attention, which can benefit SGs and DCs, as well as society at large. However, the geo-distributed property of DCs and SGs and the uncertain nature of RES production pose severe challenges to the optimal management of computation and energy resources in such a tripartite coupling system. Focusing on these issues, a joint energy and computation workload management framework is proposed for enabling a sustainable DC paradigm with distributed RES. Specifically, a three-layer game is formulated to model the iterations among entities including the energy market, data center operators (DCOs), and SGs. The market includes a certain amount of RES that must be dispatched. The SG offers the DCO an electricity selling price while simultaneously importing RES from the market at a buying price in order to maximize the benefit. The DCO allocates the workload to different DCs, aiming to minimize the costs of energy consumption and carbon emissions. The interactive processes between different entities are further decomposed into two coupling Stackelberg games. 
We obtain the equilibrium state of the game and prove its uniqueness and optimality. Simulation experiments are conducted to evaluate the performance of the joint energy and computation workload management scheme and show its superiority over counterparts in utilizing renewable energy and reducing emissions. Furthermore, the impacts of various parameters on the utility of the system are investigated carefully. The proposed approach and obtained results provide useful insights for helping the DCO developing rational management strategies.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2115-2128"},"PeriodicalIF":6.7,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
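The leader-follower structure of a Stackelberg game can be illustrated with a toy single-leader pricing problem (the linear demand curve and coefficients are hypothetical and far simpler than the paper's three-layer game): the leader anticipates the follower's best response when choosing its price.

```python
# Leader sets price p; follower responds with demand d(p) = max(0, a - b*p).
# Searching the leader's price against the follower's response function yields
# the Stackelberg equilibrium of this toy linear market.
a, b = 10.0, 2.0     # hypothetical demand-curve parameters
cost = 1.0           # leader's marginal cost

def follower_demand(p):
    return max(0.0, a - b * p)

def leader_profit(p):
    return (p - cost) * follower_demand(p)

# Grid search the leader's best price (closed form: p* = (a/b + cost)/2 = 3.0)
prices = [i / 100 for i in range(0, 501)]
p_star = max(prices, key=leader_profit)
print(p_star)  # 3.0
```

The paper's setting nests two such coupled games (market-SG and SG-DCO), solved to a provably unique equilibrium.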
Pub Date: 2025-04-11  DOI: 10.1109/TGCN.2025.3560143
Rui Luo;Weidong Gao;Xu Zhao;Kaisa Zhang;Xiangyu Chen;Yuan Guan;Siqi Liu;Jingwen Liu
The integration of renewable energy resources in microgrids contributes productively to reducing greenhouse-gas emissions but inherently increases the complexity of energy management. Owing to its rapid-response characteristic, the deep reinforcement learning (DRL) algorithm can be applied to provide real-time energy scheduling. However, limited by restricted training data and by ignoring the environmental impact, most DRL-based schemes fail to produce comprehensive solutions. To overcome this, we propose a two-stage scheme, namely the GAN-DDPG energy dispatch scheme, which combines the benefits of generative adversarial networks (GAN) with an enhanced deep deterministic policy gradient algorithm, namely the CE-DDPG algorithm. In the first stage, a trained GAN generates sufficient training data for the training of the CE-DDPG algorithm. Then, the microgrid controller can invoke the trained CE-DDPG algorithm to obtain real-time scheduling with efficient carbon-emission reductions. Different from traditional DRL algorithms, a novel reward function is proposed in the CE-DDPG algorithm, promoting the scheduling of the energy storage system (ESS) with more correct actions. Numerical simulations demonstrated that the proposed GAN-DDPG scheme can reduce the cumulative cost by up to 35% and carbon emissions by 23% compared to existing schemes.
{"title":"A Two-Stage Green Energy Dispatch Scheme for Microgrid Using Deep Reinforcement Learning","authors":"Rui Luo;Weidong Gao;Xu Zhao;Kaisa Zhang;Xiangyu Chen;Yuan Guan;Siqi Liu;Jingwen Liu","doi":"10.1109/TGCN.2025.3560143","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3560143","url":null,"abstract":"The integration of renewable energy resources in microgrid productively contributes to reducing the emission of greenhouse gases, but inherently increases the complexity of energy management. Capable of rapid-response characteristic, the deep reinforcement learning (DRL) algorithm could be applied to provide real-time energy scheduling. However, due to the limitation of restricted training data and ignoring of the impact on the environment, most DRL-based schemes fail to get comprehensive solutions. To overcome this, we proposed a two-stage scheme, namely GAN-DDPG energy dispatch scheme, which utilizes the benefits of both the generative adversarial networks (GAN) and an enhanced deep deterministic policy gradient algorithm, namely CE-DDPG algorithm. In the first stage, a trained GAN is used to generate sufficient training data for the training process of the CE-DDPG algorithm. Then, the microgrid controller could invoke the trained CE-DDPG algorithm to obtain a real-time scheduling with efficient carbon emissions reductions. Different from the traditional DRL algorithm, a novel reward function is proposed in the CE-DDPG algorithm, promoting the scheduling of the energy storage system (ESS) with more correct actions. 
Numerical simulations demonstrated that the proposed GAN-DDPG scheme could reduce the cumulative cost up to 35% with less carbon emissions of 23% compared to existing schemes.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2279-2291"},"PeriodicalIF":6.7,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
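A reward that trades off cost, emissions, and ESS feasibility, in the spirit of the CE-DDPG design, might look like the following sketch (the weights and penalty terms are hypothetical, not the paper's actual reward function):

```python
def dispatch_reward(energy_cost, carbon_kg, ess_violation,
                    w_cost=1.0, w_carbon=0.5, w_penalty=10.0):
    """Negative weighted sum of cost, emissions, and ESS-constraint violations:
    higher reward means a cheaper, cleaner, and feasible dispatch action."""
    return -(w_cost * energy_cost + w_carbon * carbon_kg + w_penalty * ess_violation)

good = dispatch_reward(energy_cost=5.0, carbon_kg=2.0, ess_violation=0.0)
bad = dispatch_reward(energy_cost=4.0, carbon_kg=2.0, ess_violation=1.0)
print(good > bad)  # True: a feasible action beats a cheaper but infeasible one
```

The heavy violation penalty is what steers the agent toward "more correct" ESS actions while still minimizing cost and emissions.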
Pub Date: 2025-03-21  DOI: 10.1109/TGCN.2025.3570064
Title: IEEE Communications Society Information
IEEE Transactions on Green Communications and Networking, vol. 9, no. 2, p. C3
Pub Date: 2025-03-21  DOI: 10.1109/TGCN.2025.3570062
Title: IEEE Transactions on Green Communications and Networking
IEEE Transactions on Green Communications and Networking, vol. 9, no. 2, p. C2
The user-centric (UC) association in optical wireless communication (OWC) forms amorphous cells (A-Cells) by considering the dynamic distribution and load demand of user equipments (UEs). This philosophy offers advantages over the conventional network-centric (NC) association, which relies purely on a pre-defined and fixed network configuration, in terms of alleviating undesired inter-cell interference (ICI) and achieving superior system performance. However, constructing the optimal A-Cells for a given OWC network, including determining the appropriate number of A-Cells associated with their contained UEs, is deeply coupled with the UEs’ distribution and transmission conditions. To address this intractable issue, in this paper we conceive an adaptive UC-OWC network that relies on a feedback-guided iterative framework capable of jointly optimizing A-Cell formation, modulation-mode assignment, and power-allocation strategies. To attain the optimized throughput of this adaptive network, we initialize the UC association with the designed k-means based genetic algorithm (KGA), which is then iteratively adjusted based on the throughput feedback obtained via our proposed multi-user cross Q-learning (MUCQ) resource allocation algorithm. Simulation results indicate that, compared to conventional counterparts, our adaptive UC-OWC network significantly improves throughput performance and reduces outage probability.
{"title":"A Cross Q-Learning Assisted Resource Allocation for User-Centric Optical Wireless Communication Networks","authors":"Simeng Feng;Nian Li;Kai Liu;Baolong Li;Chao Dong;Qihui Wu","doi":"10.1109/TGCN.2025.3553202","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3553202","url":null,"abstract":"The user-centric (UC) association in optical wireless communication (OWC) forms amorphous cells (A-Cells) by considering the dynamic distribution and load demand of user equipments (UEs). This philosophy offers advantages over the conventional network-centric (NC) association that purely relies on a pre-defined and fixed network configuration, in terms of alleviating undesired inter-cell interference (ICI) and achieving superior system performance. However, constructing the optimal A-Cells for a given OWC network, including determining the appropriate number of A-Cells associated to their contained UEs, is deeply integrated with the UEs’ distribution and transmission conditions. To address the intractable issue, in this paper, we conceive an adaptive UC-OWC network that relies on a feedback-guided iterative framework, which is capable of jointly optimizing A-Cells formation, modulation-mode assignment and power allocation strategies. For the sake of attaining the optimized throughput of this adaptive network, we initialize the UC association by the designed k-means based genetic algorithm (KGA), which can then be iteratively adjusted based on the throughput feedback obtained via our proposed multi-user cross Q-learning (MUCQ) resource allocation algorithm. 
Simulation results indicate that, compared to conventional counterparts, our adaptive UC-OWC network is able to significantly improve throughput performance and reduce outage probability.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2264-2278"},"PeriodicalIF":6.7,"publicationDate":"2025-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
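The tabular Q-learning update underlying throughput-feedback schemes of this kind can be sketched as follows (the cell state, power-level actions, and rewards here are toy stand-ins, not the MUCQ algorithm itself):

```python
import random

def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

random.seed(0)
actions = ["low_power", "high_power"]
Q = {}
for _ in range(100):
    a = random.choice(actions)
    # Hypothetical one-step throughput feedback per power level (gamma=0: bandit-style)
    r = 1.0 if a == "high_power" else 0.2
    q_update(Q, "cell_0", a, r, "terminal", actions, gamma=0.0)
print(Q[("cell_0", "high_power")] > Q[("cell_0", "low_power")])  # True
```

In the paper, multiple such learners share ("cross") throughput feedback across users rather than learning a single-cell table.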
In this paper, we aim to reduce the total energy consumption of servers through elastic scaling of CPU resources in container clouds. For practicality, we propose an online scheduling method consisting of three parts: container placement, vertical scaling, and migration. 1) For container placement, we design an algorithm based on dynamic thresholds, resource balancing, and delayed running. When there are PMs (Physical Machines) turned on, the CPU threshold increases so that containers can be placed onto the fewest possible PMs. To make full use of the multi-dimensional resources of a PM, we put forward a resource-balancing strategy. Since the number of CPU cores can be scaled dynamically at the containers’ runtime, the start time of containers can be delayed without violating deadlines. 2) For vertical scaling, a collaborative multi-agent reinforcement learning (MARL) algorithm is proposed to adjust each container’s CPU allocation so that the containers on the same PM can finish simultaneously if possible. Then, the PM can be turned off to save energy. 3) To further reduce total energy consumption, we consider migrating containers from both underloaded and overloaded PMs. Experimental results show the superior performance of our method compared to the state-of-the-art.
Title: Elastic Scaling of Resources for Energy-Efficient Container Cloud Using Reinforcement Learning
Authors: Yanyu Shen;Chonglin Gu;Xin Chen;Xiaoyu Gao;Zaixing Sun;Hejiao Huang
Pub Date: 2025-03-18  DOI: 10.1109/TGCN.2025.3552594
IEEE Transactions on Green Communications and Networking, vol. 9, no. 4, pp. 2249-2263
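The threshold-based placement idea in part 1) can be illustrated with a first-fit sketch (the capacity, threshold, and CPU demands are hypothetical; the paper's dynamic-threshold and resource-balancing logic is richer): a new PM is powered on only when no running PM can host the container below the threshold.

```python
def place_containers(demands, capacity=1.0, threshold=0.8):
    """First-fit placement under a CPU threshold: open a new PM only when no
    powered-on PM can host the container without exceeding the threshold."""
    pms = []  # CPU load per powered-on PM
    for d in demands:
        for i, load in enumerate(pms):
            if load + d <= threshold * capacity:
                pms[i] += d
                break
        else:
            pms.append(d)  # no PM fits: turn on a new one
    return pms

loads = place_containers([0.5, 0.4, 0.3, 0.2, 0.1])
print(len(loads), loads)  # 2 [0.8, 0.7]
```

Packing onto fewer PMs lets the remaining machines be switched off, which is where the energy saving comes from.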
Pub Date: 2025-03-12  DOI: 10.1109/TGCN.2025.3550599
Lin Wang;Jiasheng Wu;Jingjing Zhang;Yue Gao
Cloud native technology has revolutionized beyond-5G and 6G communication networks, offering unprecedented levels of operational automation, flexibility, and adaptability. However, the vast array of cloud native services and applications presents a new challenge for resource allocation in dynamic cloud computing environments. To tackle this challenge, we investigate a cloud native wireless architecture that employs container-based virtualization to enable flexible service deployment. We then study two representative use cases: network slicing and multi-access edge computing. To improve resource allocation and maximize utilization efficiency in these scenarios, we propose two deep reinforcement learning-based algorithms that leverage comprehensive observational data to guide and refine the allocation policies. We validate the effectiveness of our algorithms in a testbed developed using Free5gc. Our findings demonstrate significant improvements in network efficiency, underscoring the potential of our proposed techniques in unlocking the full potential of cloud native wireless networks.
{"title":"Efficient Deep Reinforcement Learning-Based Resource Allocation for Cloud Native Wireless Network","authors":"Lin Wang;Jiasheng Wu;Jingjing Zhang;Yue Gao","doi":"10.1109/TGCN.2025.3550599","DOIUrl":"https://doi.org/10.1109/TGCN.2025.3550599","url":null,"abstract":"Cloud native technology has revolutionized 5G beyond and 6G communication networks, offering unprecedented levels of operational automation, flexibility, and adaptability. However, the vast array of cloud native services and applications presents a new challenge in resource allocation for dynamic cloud computing environments. To tackle this challenge, we investigate a cloud native wireless architecture that employs container-based virtualization to enable flexible service deployment. We then study two representative use cases: network slicing and multi-access edge computing. To improve resource allocation and maximize utilization efficiency in these scenarios, we propose two deep reinforcement learning-based algorithms that enhance resource allocation efficiency and network resource utilization by leveraging comprehensive observational data to guide and refine the allocation policies. We validate the effectiveness of our algorithms in a testbed developed using Free5gc. 
Our findings demonstrate significant improvements in network efficiency, underscoring the potential of our proposed techniques in unlocking the full potential of cloud native wireless networks.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"9 4","pages":"2236-2248"},"PeriodicalIF":6.7,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145646090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
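As a toy stand-in for a learned allocation loop, the following epsilon-greedy bandit chooses among discrete CPU shares from noisy utilization feedback (the share options and utility values are hypothetical, and a bandit is a deliberate simplification of the paper's DRL algorithms):

```python
import random

def epsilon_greedy(values, eps=0.1):
    """Explore a random allocation with probability eps; otherwise exploit the
    option with the best estimated utility."""
    if random.random() < eps:
        return random.randrange(len(values))
    return max(range(len(values)), key=lambda a: values[a])

random.seed(42)
cpu_shares = [0.25, 0.5, 1.0]        # candidate CPU shares for a network-function container
true_utility = [0.3, 0.9, 0.6]       # hypothetical utilization efficiency of each share
values, counts = [0.0] * 3, [0] * 3
for _ in range(500):
    a = epsilon_greedy(values)
    r = true_utility[a] + random.gauss(0.0, 0.05)   # noisy observed reward
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]        # incremental mean estimate
best = max(range(3), key=lambda a: values[a])
print(cpu_shares[best])
```

The learner converges on the mid-sized share here because over-provisioning (1.0) wastes resources while under-provisioning (0.25) hurts performance, mirroring the utilization-efficiency trade-off the abstract targets.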