Pub Date: 2024-01-30 | DOI: 10.1109/TGCN.2024.3360242
Chang Liu;Jun-Bo Wang;Cheng Zeng;Yijian Chen;Hongkang Yu;Yijin Pan
Multi-access edge computing (MEC) and wireless power transfer (WPT) have emerged as promising paradigms to address the bottlenecks of computing power and battery capacity in mobile devices. In this paper, we investigate the integrated scheduling of WPT and task offloading in a rechargeable multi-access edge computing network (RMECN). Specifically, we explore the tradeoff between energy efficiency, buffer stability, and battery-level stability in the RMECN to obtain a reasonable scheduling policy. In addition, we adopt a dynamic Li-ion battery model to describe the charge/discharge characteristics. Given the stochastic nature of channel states and task arrivals, we formulate a stochastic optimization problem that minimizes system energy consumption while ensuring buffer and battery-level stability. The optimization variables jointly comprise the offloading decisions, local central processing unit (CPU) frequency, transmission power, and charge/discharge current. To solve this stochastic non-convex problem, we first transform it into an online optimization problem using Lyapunov optimization theory. Then, we propose a distributed algorithm based on game theory to avoid the excessive computation and time consumption of traditional centralized optimization algorithms. Numerical results demonstrate that the proposed tradeoff scheme and the corresponding algorithm effectively reduce the system’s energy consumption while keeping the buffer and battery level stable.
{"title":"Joint Optimization of Transmission and Computation Resources for Rechargeable Multi-Access Edge Computing Networks","authors":"Chang Liu;Jun-Bo Wang;Cheng Zeng;Yijian Chen;Hongkang Yu;Yijin Pan","doi":"10.1109/TGCN.2024.3360242","DOIUrl":"https://doi.org/10.1109/TGCN.2024.3360242","url":null,"abstract":"Multi-access edge computing (MEC) and wireless power transfer (WPT) have emerged as promising paradigms to address the bottlenecks of computing power and battery capacity of mobile devices. In this paper, we investigate the integrated scheduling of WPT and task offloading in a rechargeable multi-access edge computing network (RMECN). Specifically, we focus on exploring the tradeoff between energy efficiency, buffer stability, and battery level stability in the RMECN to obtain reasonable scheduling. In addition, we adopt a dynamic Li-ion battery model to describe the charge/discharge characteristics. Given the stochastic nature of channel states and task arrivals, we formulate a stochastic optimization problem that minimizes system energy consumption while ensuring buffer and battery level stability. In this problem, we jointly consider offloading decisions, local central processing unit (CPU) frequency, transmission power, and current of charge/discharge as optimization variables. To solve this stochastic non-convex problem, we first transform it into an online optimization problem using the Lyapunov optimization theory. Then, we propose a distributed algorithm based on game theory to overcome the excessive computation and time consumption of traditional centralized optimization algorithms. The numerical results demonstrate that the proposed tradeoff scheme and corresponding algorithm can effectively reduce the system’s energy consumption while ensuring the stability of buffer and battery level.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"8 3","pages":"1259-1272"},"PeriodicalIF":5.3,"publicationDate":"2024-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142123029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-29 | DOI: 10.1109/TGCN.2024.3359208
Siting Lv;Xiaohui Li;Jiawen Liu;Mingli Shi
This paper focuses on a deep learning (DL) framework for Sub-6G aided millimeter-wave (mmWave) communication systems, aiming to reduce the overhead of mmWave systems. The proposed framework consists of two cascaded networks, named HestNet and HBFNet, for mmWave channel estimation and hybrid beamforming (HBF) design, respectively. The number of parameters to be estimated is reduced by estimating the channel covariance matrix (CCM) instead of the instantaneous channel. However, a new challenge arises: high-dimensional data must be estimated from low-dimensional data, since the dimension of the Sub-6G channel data is much smaller than that of the mmWave channel data. A data deformation approach is therefore introduced into the framework to match the size of the Sub-6G channel data with that of the mmWave data. Simulation results show that using statistical channel information derived from Sub-6G channel information to aid mmWave communication is reasonable and effective, achieving good estimation performance and spectral efficiency. Moreover, the two-stage cascaded network architecture proposed in this paper is also more robust to channel estimation errors.
{"title":"Sub-6G Aided Millimeter Wave Hybrid Beamforming: A Two-Stage Deep Learning Framework With Statistical Channel Information","authors":"Siting Lv;Xiaohui Li;Jiawen Liu;Mingli Shi","doi":"10.1109/TGCN.2024.3359208","DOIUrl":"https://doi.org/10.1109/TGCN.2024.3359208","url":null,"abstract":"This paper focuses on a deep learning (DL) framework for the Sub-6G aided millimeter-wave (mmWave) communication system, aiming to reduce the overhead of mmWave systems. The proposed framework consists of two-stage cascaded networks, named HestNet and HBFNet, for mmWave channel estimation and hybrid beamforming (HBF) design, respectively. The number of parameters for channel estimation is reduced by using channel covariance matrix (CCM) estimation instead. However, a new challenge of estimating high-dimensional data from low-dimensional data should be considered since the dimension of Sub-6G channel data is much smaller than that of mmWave. Subsequently, a data deformation approach is introduced into the framework to match the size of Sub-6G channel data with that of mmWave. The simulation results show that the application of statistical channel information based on Sub-6G channel information to aid mmWave communication is reasonable and effective, it achieves good estimation performance and spectral efficiency. Moreover, the two-stage cascaded network architecture proposed in this paper is also more robust to channel estimation errors.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"8 3","pages":"1245-1258"},"PeriodicalIF":5.3,"publicationDate":"2024-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142123063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-24 | DOI: 10.1109/TGCN.2024.3358230
Hrishikesh Dutta;Amit Kumar Bhuyan;Subir Biswas
Efficient slot allocation and transmit-sleep scheduling are effective access control mechanisms for improving communication performance and network lifetime in resource-constrained wireless networks. In this paper, a decentralized, multi-tier framework is presented for joint slot allocation and transmit-sleep scheduling in wireless network nodes with thin energy budgets. The key learning objectives of this architecture are collision-free transmission scheduling, reduced energy consumption, and improved network performance, achieved through the cooperative and decentralized learning behavior of multiple Reinforcement Learning (RL) agents. The resulting architecture provides throughput-sustainable support for data flows while minimizing energy expenditure and sleep-induced packet losses. To achieve this, the concept of Context is introduced into the RL framework to capture network traffic dynamics. The resulting Contextual Deep Q-Learning (CDQL) model makes the system adaptive to dynamic and heterogeneous network load and improves energy efficiency compared with traditional tabular Q-learning-based approaches. The results demonstrate how this framework can be used to prioritize application-specific requirements, namely energy saving and communication reliability. The trade-offs among packet drop, energy expenditure, and learning convergence are studied, and an application-specific solution is proposed for managing them. The performance is compared against an existing state-of-the-art scheduling approach. Moreover, an analytical model of the system dynamics is developed and validated via simulation for arbitrary mesh topologies and traffic patterns.
{"title":"Contextual Deep Reinforcement Learning for Flow and Energy Management in Wireless Sensor and IoT Networks","authors":"Hrishikesh Dutta;Amit Kumar Bhuyan;Subir Biswas","doi":"10.1109/TGCN.2024.3358230","DOIUrl":"https://doi.org/10.1109/TGCN.2024.3358230","url":null,"abstract":"Efficient slot allocation and transmit-sleep scheduling is an effective access control mechanism for improving communication performance and network lifetime in resource-constrained wireless networks. In this paper, a decentralized and multi-tier framework is presented for joint slot allocation and transmit-sleep scheduling in wireless network nodes with thin energy budget. The key learning objectives of this architecture are: collision-free transmission scheduling, reducing energy consumption, and improving network performance. This is achieved using a cooperative and decentralized learning behavior of multiple Reinforcement Learning (RL) agents. The resulting architecture provides throughput-sustainable support for data flows while minimizing energy expenditure and sleep-induced packet losses. To achieve this, a concept of Context is introduced to the RL framework in order to capture network traffic dynamics. The resulting Contextual Deep Q-Learning (CDQL) model makes the system adaptive to dynamic and heterogeneous network load. It also improves energy efficiency when compared with the traditional tabular Q-learning-based approaches. The results demonstrate how this framework can be used for prioritizing application-specific requirements, namely, energy saving and communication reliability. The trade-offs among packet drop, energy expenditure, and learning convergence are studied, and an application-specific solution is proposed for managing them. The performance is compared against an existing state-of-the-art scheduling approach. Moreover, an analytical model of the system dynamics is developed and validated using simulation for arbitrary mesh topologies and traffic patterns.","PeriodicalId":13052,"journal":{"name":"IEEE Transactions on Green Communications and Networking","volume":"8 3","pages":"1233-1244"},"PeriodicalIF":5.3,"publicationDate":"2024-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142121601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}