Pub Date: 2023-08-24 | DOI: 10.1109/TSUSC.2023.3308081
Xu Yang;Xuechao Yang;Junwei Luo;Xun Yi;Ibrahim Khalil;Shangqi Lai;Wei Wu;Albert Y. Zomaya
Reputation systems are widely used to provide a trustworthy environment and improve the sustainability of online discussions. They help users understand and evaluate the quality of information by collecting and aggregating feedback from different users. However, a common issue in most reputation systems is how to maintain users’ reputation and protect their anonymity simultaneously. In this paper, we introduce a new practical anonymous reputation system based on SGX. The establishment of an anonymous reputation system has a positive effect on sustainable trust in reputation-based online applications. Our system combines reputation and anonymity by utilizing Intel SGX and a Bloom filter. The Path ORAM algorithm is also implemented to resist side-channel attacks. The experiments demonstrate that our system achieves high performance in terms of computation and storage costs. Compared to two state-of-the-art anonymous reputation systems, our system improves computation performance by at least three orders of magnitude.
{"title":"Towards Sustainable Trust: A Practical SGX Aided Anonymous Reputation System","authors":"Xu Yang;Xuechao Yang;Junwei Luo;Xun Yi;Ibrahim Khalil;Shangqi Lai;Wei Wu;Albert Y. Zomaya","doi":"10.1109/TSUSC.2023.3308081","DOIUrl":"10.1109/TSUSC.2023.3308081","url":null,"abstract":"Reputation systems are widely used to provide a trustworthy environment and improve the sustainability of online discussions. They help users understand and evaluate the quality of information by collecting and counting feedback from different users. However, a common issue in most reputation systems is how to maintain users’ reputation and protect their anonymity simultaneously. In this paper, we introduce a new practical anonymous reputation system based on SGX. The establishment of an anonymous reputation system has a positive effect on sustainable trust in reputation-based online applications. Our system achieves the combination of reputation and anonymity by utilizing Intel SGX and the Bloom filter. The Path ORAM algorithm is also implemented to resist side-channel attacks. The experiments demonstrate that our system achieves high performance in terms of computation and storage costs. When compared to two state-of-the-art anonymous reputation systems, our system has better computation performance with at least three orders of magnitude.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 1","pages":"88-99"},"PeriodicalIF":3.9,"publicationDate":"2023-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91098155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating larger shares of renewables in data centers’ electrical mix is mandatory to reduce their carbon footprint. However, as they are intermittent and fluctuating, renewable energies alone cannot provide a 24/7 supply and should be combined with a secondary source. Finding the infrastructure configuration that is optimal for both renewable production and financial costs remains difficult. In this article, we examine three scenarios with on-site renewable energy sources combined respectively with the electrical grid, with batteries alone, and with batteries plus hydrogen storage systems. The objectives are, first, to optimally size the electrical infrastructure using combinations of standard microgrid approaches; second, to quantify the level of grid utilization when data centers consume electricity from, or export it to, the grid, in order to determine the level of effort required from the grid operator; and finally, to analyze the cost of the 100% autonomy provided by the battery-based configurations and to discuss their economic viability. Our results show that in the grid-dependent mode, 63.1% of the generated electricity has to be injected into the grid and retrieved later. Among the autonomous configurations, the cheapest one, which includes hydrogen storage, leads to a unit cost significantly higher than that of the electricity supplied by national power systems in many countries.
{"title":"Renewable Energy in Data Centers: The Dilemma of Electrical Grid Dependency and Autonomy Costs","authors":"Wedan Emmanuel Gnibga;Anne Blavette;Anne-Cécile Orgerie","doi":"10.1109/TSUSC.2023.3307790","DOIUrl":"10.1109/TSUSC.2023.3307790","url":null,"abstract":"Integrating larger shares of renewables in data centers’ electrical mix is mandatory to reduce their carbon footprint. However, as they are intermittent and fluctuating, renewable energies alone cannot provide a 24/7 supply and should be combined with a secondary source. Finding the optimal infrastructure configuration for both renewable production and financial costs remains difficult. In this article, we examine three scenarios with on-site renewable energy sources combined respectively with the electrical grid, batteries alone and batteries with hydrogen storage systems. The objectives are first, to size optimally the electric infrastructure using combinations of standard microgrids approaches, second to quantify the level of grid utilization when data centers consume/ export electricity from/to the grid, to determine the level of effort required from the grid operator, and finally to analyze the cost of 100% autonomy provided by the battery-based configurations and to discuss their economical viability. Our results show that in the grid-dependent mode, 63.1% of the generated electricity has to be injected into the grid and retrieved later. In the autonomous configurations, the cheapest one including hydrogen storage leads to a unit cost significantly more expensive than the electricity supplied from a national power system in many countries.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 3","pages":"315-328"},"PeriodicalIF":3.9,"publicationDate":"2023-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87444789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As electric vehicle (EV) ownership becomes more commonplace, partly due to government incentives, there is also a need to design solutions such as energy allocation strategies that more effectively support sustainable vehicle-to-grid (V2G) applications. Therefore, this work proposes an energy allocation strategy designed to minimize the electricity cost while improving the operating revenue. Specifically, V2G is abstracted as a three-domain network architecture to facilitate flexible, intelligent, and scalable energy allocation decision-making. Furthermore, this work combines virtual network embedding (VNE) and deep reinforcement learning (DRL) algorithms, and proposes a DRL-based agent model that adaptively perceives environmental features and extracts a feature matrix as input. In particular, the agent consists of a four-layer architecture for node and link embedding, and jointly optimizes the decision-making through a reward mechanism and gradient back-propagation. Finally, the effectiveness of the proposed strategy is demonstrated through simulation case studies. Specifically, compared to the benchmarks used, it improves the virtual network request (VNR) acceptance ratio, long-term average revenue, and long-term average revenue-cost ratio by an average of 3.17%, 191.36, and 2.04%, respectively. To the best of our knowledge, this is one of the first attempts to combine VNE and DRL to provide an energy allocation strategy for V2G.
{"title":"Energy Allocation for Vehicle-to-Grid Settings: A Low-Cost Proposal Combining DRL and VNE","authors":"Peiying Zhang;Ning Chen;Neeraj Kumar;Laith Abualigah;Mohsen Guizani;Youxiang Duan;Jian Wang;Sheng Wu","doi":"10.1109/TSUSC.2023.3307551","DOIUrl":"10.1109/TSUSC.2023.3307551","url":null,"abstract":"As electric vehicle (EV) ownership becomes more commonplace, partly due to government incentives, there is a need also to design solutions such as energy allocation strategies to more effectively support sustainable vehicle-to-grid (V2G) applications. Therefore, this work proposes an energy allocation strategy, designed to minimize the electricity cost while improving the operating revenue. Specifically, V2G is abstracted as a three-domain network architecture to facilitate flexible, intelligent, and scalable energy allocation decision-making. Furthermore, this work combines virtual network embedding (VNE) and deep reinforcement learning (DRL) algorithms, where a DRL-based agent model is proposed, to adaptively perceives environmental features and extracts the feature matrix as input. In particular, the agent consists of a four-layer architecture for node and link embedding, and jointly optimizes the decision-making through a reward mechanism and gradient back-propagation. Finally, the effectiveness of the proposed strategy is demonstrated through simulation case studies. Specifically, compared to the used benchmarks, it improves the VNR acceptance ratio, Long-term average revenue, and Long-term average revenue-cost ratio indicators by an average of 3.17%, 191.36, and 2.04%, respectively. To the best of our knowledge, this is one of the first attempts combining VNE and DRL to provide an energy allocation strategy for V2G.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 1","pages":"75-87"},"PeriodicalIF":3.9,"publicationDate":"2023-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77214489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-08-10 | DOI: 10.1109/TSUSC.2023.3303898
Long Cheng;Yue Wang;Feng Cheng;Cheng Liu;Zhiming Zhao;Ying Wang
With characteristics such as elasticity and scalability, cloud computing has become the most promising technology for online business nowadays. However, efficiently performing real-time job scheduling in the cloud still poses significant challenges. The reason is that such jobs are highly dynamic and complex, and it is hard to allocate them to computing resources in an optimal way that meets the requirements of both service providers and users. In recent years, various works have demonstrated that deep reinforcement learning (DRL) can handle real-time cloud job scheduling well. However, to our knowledge, none of them has considered extra optimization opportunities for already-allocated jobs in their scheduling frameworks. Given this fact, in this work we introduce a novel DRL-based preemptive method to further improve on current approaches. Specifically, we improve the training of the scheduling policy with effective job preemption mechanisms, and on that basis optimize job execution cost while meeting users’ expected response times. We present the detailed design of our method, and our evaluations demonstrate that our approach achieves better performance than other scheduling algorithms, including a baseline DRL approach, under different real-time workloads.
{"title":"A Deep Reinforcement Learning-Based Preemptive Approach for Cost-Aware Cloud Job Scheduling","authors":"Long Cheng;Yue Wang;Feng Cheng;Cheng Liu;Zhiming Zhao;Ying Wang","doi":"10.1109/TSUSC.2023.3303898","DOIUrl":"10.1109/TSUSC.2023.3303898","url":null,"abstract":"With some specific characteristics such as elastics and scalability, cloud computing has become the most promising technology for online business nowadays. However, how to efficiently perform real-time job scheduling in cloud still poses significant challenges. The reason is that those jobs are highly dynamic and complex, and it is always hard to allocate them to computing resources in an optimal way, such as to meet the requirements from both service providers and users. In recent years, various works demonstrate that deep reinforcement learning (DRL) can handle real-time cloud jobs well in scheduling. However, to our knowledge, none of them has ever considered extra optimization opportunities for the allocated jobs in their scheduling frameworks. Given this fact, in this work, we introduce a novel DRL-based preemptive method for further improve the performance of the current studies. Specifically, we try to improve the training of scheduling policy with effective job preemptive mechanisms, and on that basis to optimize job execution cost while meeting users’ expected response time. We introduce the detailed design of our method, and our evaluations demonstrate that our approach can achieve better performance than other scheduling algorithms under different real-time workloads, including the DRL approach.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 3","pages":"422-432"},"PeriodicalIF":3.9,"publicationDate":"2023-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91223232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Owing to the exponential proliferation of internet services and the sophistication of intrusions, traditional intrusion detection algorithms are unable to handle complex attacks due to their limited representation capabilities and the unbalanced nature of Internet of Things (IoT)-related data in terms of both telemetry and network traffic. Drawing inspiration from deep learning achievements in feature extraction and representation learning, in this study we propose an accurate hybrid ensemble deep learning framework (HEDLF) to protect against obfuscated cyber-attacks on IoT networks. To address complex features and alleviate the imbalance problem, the proposed HEDLF includes three key components: 1) a hierarchical feature representation technique based on deep learning, which aims to extract specific information by supervising the loss of gradient information; 2) a balanced rotated feature extractor that simultaneously encourages the individual accuracy and diversity of the ensemble classifier; and 3) a meta-classifier acting as an aggregation method, which leverages a semisparse group regularizer to analyze the base classifiers’ outputs. Additionally, these improvements take class imbalance into account. The experimental results show that, when compared against state-of-the-art techniques in terms of accuracy, precision, recall, and F1-score, the proposed HEDLF achieves promising results on both telemetry and network traffic data.
{"title":"An Intrusion Detection and Identification System for Internet of Things Networks Using a Hybrid Ensemble Deep Learning Framework","authors":"Yanika Kongsorot;Pakarat Musikawan;Phet Aimtongkham;Ilsun You;Abderrahim Benslimane;Chakchai So-In","doi":"10.1109/TSUSC.2023.3303422","DOIUrl":"10.1109/TSUSC.2023.3303422","url":null,"abstract":"Owing to the exponential proliferation of internet services and the sophistication of intrusions, traditional intrusion detection algorithms are unable to handle complex invasions due to their limited representation capabilities and the unbalanced nature of Internet of Things (IoT)-related data in terms of both telemetry and network traffic. Drawing inspiration from deep learning achievements in feature extraction and representation learning, in this study, we propose an accurate hybrid ensemble deep learning framework (HEDLF) to protect against obfuscated cyber-attacks on IoT networks. To address complex features and alleviate the imbalance problem, the proposed HEDLF includes three key components: 1) a hierarchical feature representation technique based on deep learning, which aims to extract specific information by supervising the loss of gradient information; 2) a balanced rotated feature extractor that simultaneously encourages the individual accuracy and diversity of the ensemble classifier; and 3) a meta-classifier acting as an aggregation method, which leverages a semisparse group regularizer to analyze the base classifiers’ outputs. Additionally, these improvements take class imbalance into account. The experimental results show that when compared against state-of-the-art techniques in terms of accuracy, precision, recall, and F1-score, the proposed HEDLF can achieve promising results on both telemetry and network traffic data.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"8 4","pages":"596-613"},"PeriodicalIF":3.9,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77175733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-08-08 | DOI: 10.1109/TSUSC.2023.3303180
Tao Zhang
Privacy is important to the financial industry, and likewise to blockchain-based cryptocurrencies. Bitcoin can provide only weak identity privacy. To overcome the privacy challenges of Bitcoin, several privacy-focused cryptocurrencies have been proposed, such as Dash, Monero, Zcash, Grin, and Verge. Private addresses, confidential transactions, and network anonymization services are adopted to improve privacy in these privacy-focused cryptocurrencies. We propose four privacy metrics for blockchain-based cryptocurrencies: identity anonymity, transaction confidentiality, transaction unlinkability, and network anonymity. We then make a comparative analysis of the privacy of Bitcoin, Dash, Monero, Verge, Zcash, and Grin using these metrics. Finally, open challenges and future directions for blockchain-based privacy cryptocurrencies are discussed. In the future, multi-level privacy enhancement schemes can be combined in privacy cryptocurrencies to improve privacy, performance, and scalability.
{"title":"Privacy Evaluation of Blockchain Based Privacy Cryptocurrencies: A Comparative Analysis of Dash, Monero, Verge, Zcash, and Grin","authors":"Tao Zhang","doi":"10.1109/TSUSC.2023.3303180","DOIUrl":"10.1109/TSUSC.2023.3303180","url":null,"abstract":"Privacy is important to financial industry, so as to blockchain based cryptocurrencies. Bitcoin can provide only weak identity privacy. To overcome privacy challenges of Bitcoin, some privacy focused cryptocurrencies are proposed, such as Dash, Monero, Zcash, Grin and Verge. Private address, confidential transaction, and network anonymization service are adopted to improve privacy in these privacy focused cryptocurrencies. We propose four privacy metrics for blockchain based cryptocurrencies as identity anonymity, transaction confidentiality, transaction unlinkability, and network anonymity. Then make a comparative analysis on privacy of Bitcoin, Dash, Monero, Verge, Zcash, and Grin from these privacy metrics. Finally, open challenges and future directions on blockchain based privacy cryptocurrencies are discussed. In the future, multi-level privacy enhancement schemes can be combined in privacy cryptocurrencies to improve privacy, performance and scalability.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"8 4","pages":"574-582"},"PeriodicalIF":3.9,"publicationDate":"2023-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81786582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-08-08 | DOI: 10.1109/TSUSC.2023.3303270
Emanuele Lattanzi;Chiara Contoli;Valerio Freschi
The design of IoT systems supporting deep learning capabilities is today mainly based on data transmission to the cloud back-end. Recently, edge computing solutions, which keep most computing and communication as close as possible to user devices, have emerged as possible alternatives to reduce energy consumption, limit latency, and safeguard privacy. Early-exit models have been proposed as a way to combine models of different depths into a single architecture. The aim of this article is to investigate the energy expenditure of a distributed IoT system based on early-exit architectures, taking human activity recognition as a case study. We propose a simulation study based on an analytical model and hardware characterization to estimate the trade-off between the accuracy and energy of early-exit-based configurations. Experimental results highlight nontrivial relationships between architectures, computing platforms, and communication links. For instance, we found that early-exit strategies do not ensure energy reductions with respect to a cloud-based solution if the same accuracy levels are kept; nonetheless, by tolerating a 1.5% decrease in accuracy, it is possible to achieve a reduction of around 40% of the total energy consumption.
{"title":"A Study on the Energy Sustainability of Early Exit Networks for Human Activity Recognition","authors":"Emanuele Lattanzi;Chiara Contoli;Valerio Freschi","doi":"10.1109/TSUSC.2023.3303270","DOIUrl":"10.1109/TSUSC.2023.3303270","url":null,"abstract":"The design of IoT systems supporting deep learning capabilities is mainly based today on data transmission to the cloud back-end. Recently, edge computing solutions, which keep most computing and communication as close as possible to user devices have emerged as possible alternatives to reduce energy consumption, limit latency, and safeguard privacy. Early-exit models have been proposed as a way to combine models with different depths into a single architecture. The aim of this article is to investigate the energy expenditure of a distributed IoT system based on early exit architectures, by taking human activity recognition as a case study. We propose a simulation study based on an analytical model and hardware characterization to estimate the trade-off between the accuracy and energy of early exit-based configurations. Experimental results highlight nontrivial relationships between architectures, computing platforms, and communication link. For instance, we found that early-exit strategies do not ensure energy reductions with respect to a cloud-based solution if the same accuracy levels are kept; nonetheless, by tolerating a 1.5% decrease in accuracy, it is possible to achieve a reduction of around 40% of the total energy consumption.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 1","pages":"61-74"},"PeriodicalIF":3.9,"publicationDate":"2023-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89794687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A wide variety of Mobile Devices (MDs) are adopted in Internet of Things (IoT) environments, resulting in a dramatic increase in the volume of task data and in greenhouse gas emissions. However, due to the limited battery power and computing resources of MDs, it is critical to process more data with less energy. This article studies a Wireless Power Transfer-based Mobile Edge Computing (WPT-MEC) network assisted by an Intelligent Reflecting Surface (IRS) to enhance communication performance while extending the battery life of the MDs. In order to maximize the Computation Energy Efficiency (CEE) of the system and reduce the carbon footprint of the MEC server, we jointly optimize the CPU frequencies of the MDs and the MEC server, the transmit power of the Power Beacon (PB), the processing time of the MEC server, the offloading time and energy harvesting time of the MDs, the local processing time and offloading power of the MDs, and the phase-shift coefficient matrix of the IRS. Moreover, we transform this joint optimization problem into a fractional programming problem. We then propose the Dinkelbach Iterative Algorithm with Gradient Updates (DIA-GU) to solve it effectively. With the help of convex optimization theory, we obtain closed-form solutions that reveal the correlations between the different variables. Compared to other algorithms, the DIA-GU algorithm not only exhibits superior performance in enhancing the system's CEE but also achieves significant reductions in carbon emissions.
{"title":"Computation Energy Efficiency Maximization for Intelligent Reflective Surface-Aided Wireless Powered Mobile Edge Computing","authors":"Junhui Du;Minxian Xu;Sukhpal Singh Gill;Huaming Wu","doi":"10.1109/TSUSC.2023.3298822","DOIUrl":"10.1109/TSUSC.2023.3298822","url":null,"abstract":"A wide variety of Mobile Devices (MDs) are adopted in Internet of Things (IoT) environments, resulting in a dramatic increase in the volume of task data and greenhouse gas emissions. However, due to the limited battery power and computing resources of MD, it is critical to process more data with less energy. This article studies the Wireless Power Transfer-based Mobile Edge Computing (WPT-MEC) network system assisted by Intelligent Reflective Surface (IRS) to enhance communication performance while improving the battery life of MD. In order to maximize the Computation Energy Efficiency (CEE) of the system and reduce the carbon footprint of the MEC server, we jointly optimize the CPU frequencies of MDs and MEC server, the transmit power of Power Beacon (PB), the processing time of MEC server, the offloading time and the energy harvesting time of MDs, the local processing time and the offloading power of MD and the phase shift coefficient matrix of Intelligent Reflecting Surface (IRS). Moreover, we transform this joint optimization problem into a fractional programming problem. We then propose the Dinkelbach Iterative Algorithm with Gradient Updates (DIA-GU) to solve this problem effectively. With the help of convex optimization theory, we can obtain closed-form solutions, revealing the correlation between different variables. Compared to other algorithms, the DIA-GU algorithm not only exhibits superior performance in enhancing the system's CEE but also demonstrates significant reductions in carbon emissions.","PeriodicalId":13268,"journal":{"name":"IEEE Transactions on Sustainable Computing","volume":"9 3","pages":"371-385"},"PeriodicalIF":3.9,"publicationDate":"2023-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78245071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless Power Transfer (WPT) is a promising technology that can potentially mitigate the energy provisioning problem for sensor networks. In order to efficiently replenish energy for these battery-powered devices, designing appropriate scheduling and charging path planning algorithms is essential and challenging. While previous studies have tackled this challenge, the conjoint influence of network topology, charging path planning, and energy threshold distribution in Wireless Rechargeable Sensor Networks (WRSNs) remains largely unexplored. We mitigate the aforementioned problem by proposing novel algorithmic solutions for efficient sector-based on-demand charging scheduling and path planning. Specifically, we first propose a hexagonal cluster-based deployment of nodes such that finding a Hamiltonian path (an NP-complete problem in general) becomes feasible. Second, each cluster is divided into multiple sectors and a charging path planning algorithm is implemented to yield a Hamiltonian path, aimed at improving the Mobile Charging Vehicle (MCV) efficiency and charging throughput. Third, we propose an efficient algorithm to calculate the importance