Partitioned scheduling in mixed-criticality systems with thermal-constrained and semi-clairvoyance
Pub Date: 2025-10-04 | DOI: 10.1016/j.suscom.2025.101217
Yi-Wen Zhang, Jin-Peng Ma
With the exponential growth of power density in modern high-performance processors, energy consumption and chip temperatures have both risen significantly. Reducing energy consumption and temperature have therefore become two important issues in mixed-criticality system (MCS) design. This paper focuses on semi-clairvoyant scheduling for MCS on multiprocessor platforms. In semi-clairvoyant scheduling, high-criticality jobs reveal upon arrival whether their execution time will surpass their low-criticality worst-case execution time. Firstly, we derive temperature constraints for the MCS task set based on steady-state thermal analysis. Secondly, we propose a new thermal-aware partitioned semi-clairvoyant scheduling algorithm, called TAPMC, that aims to minimize normalized energy consumption under a threshold temperature constraint. Finally, we evaluate TAPMC experimentally against benchmark algorithms, and the results show that TAPMC surpasses the other algorithms in normalized energy consumption.
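For a concrete picture of temperature-constrained partitioning, the sketch below (plain Python, not the authors' TAPMC algorithm) assigns tasks to processors first-fit-decreasing and rejects any assignment whose steady-state temperature would exceed the threshold. The thermal and power constants (T_AMB, R_TH, P_DYN, T_MAX) and the linear power-utilization model are illustrative assumptions.

```python
# Minimal sketch (not the paper's TAPMC algorithm): first-fit-decreasing partitioning
# of tasks onto processors, rejecting any assignment whose steady-state temperature
# would exceed a threshold. All constants below are illustrative assumptions.

T_AMB = 45.0      # ambient temperature (Celsius), assumed
R_TH = 0.6        # thermal resistance (Celsius per watt), assumed
P_DYN = 90.0      # dynamic power at full utilization (watts), assumed
T_MAX = 85.0      # threshold temperature (Celsius), assumed

def steady_state_temp(utilization):
    """Steady-state temperature under a linear power-utilization model."""
    power = P_DYN * utilization
    return T_AMB + R_TH * power

def partition(tasks, n_procs):
    """tasks: list of (name, utilization); returns proc -> assigned tasks, or None."""
    load = [0.0] * n_procs
    mapping = {p: [] for p in range(n_procs)}
    for name, u in sorted(tasks, key=lambda t: -t[1]):   # first-fit decreasing
        for p in range(n_procs):
            new_load = load[p] + u
            if new_load <= 1.0 and steady_state_temp(new_load) <= T_MAX:
                load[p] = new_load
                mapping[p].append(name)
                break
        else:
            return None  # unschedulable under the temperature constraint
    return mapping

print(partition([("t1", 0.4), ("t2", 0.3), ("t3", 0.5), ("t4", 0.2)], n_procs=2))
```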
{"title":"Partitioned scheduling in mixed-criticality systems with thermal-constrained and semi-clairvoyance","authors":"Yi-Wen Zhang, Jin-Peng Ma","doi":"10.1016/j.suscom.2025.101217","DOIUrl":"10.1016/j.suscom.2025.101217","url":null,"abstract":"<div><div>With the exponential growth of power density in modern high-performance processors, it has not only led to significant energy but also resulted in increased chip temperatures. Therefore, reducing energy consumption and temperature have become two important issues in mixed-criticality systems (MCS) design. This paper focused on semi-clairvoyant scheduling in MCS with multiprocessor platforms. In semi-clairvoyant scheduling, high-criticality jobs are aware of whether their execution time will surpass their Worst-Case Execution Time in the low-criticality mode upon their arrival. Firstly, we give temperature constraints for the MCS task set based on steady-state thermal analysis. Secondly, we propose a new thermal-aware partitioned semi-clairvoyant scheduling algorithm called (TAPMC), aiming to minimize the normalized energy consumption under threshold temperature constraints. Finally, we evaluated TAPMC experimentally compared to other benchmark algorithms, and the experimental results illustrate that the TAPMC algorithm surpasses other algorithms in normalized energy consumption.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101217"},"PeriodicalIF":5.7,"publicationDate":"2025-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intelligent reinforcement learning for enhanced energy efficiency in hybrid electric vehicles
Pub Date: 2025-10-04 | DOI: 10.1016/j.suscom.2025.101219
Shilpa Ghode, Mayuri Digalwar
Energy Management in Hybrid Electric Vehicles (EMinHEVs) refers to optimizing energy flow within a vehicle’s powertrain to enhance efficiency and range. This process involves complex tasks such as power analysis, component characterization, and hyperparameter reconfiguration, which directly impact the performance of energy management algorithms. However, existing optimization models struggle with scalability and inter-component correlations, limiting their effectiveness. This paper introduces a novel model-based hybrid framework combining Deep Dyna Reinforcement Learning (D2RL) with Genetic Optimization to address these challenges. Unlike conventional model-free approaches, the D2RL leverages a learned internal model to simulate future states, enabling more efficient decision-making and parameter tuning. The framework dynamically refines critical engine parameters — including speed, power, and torque — for both the generator and motor. Initially, D2RL estimates optimal parameter sets, which are then fine-tuned using a Genetic Optimizer. This optimizer incorporates an augmented reward function to iteratively enhance energy efficiency and vehicle performance. The proposed method outperforms state-of-the-art techniques, including Optimal Logical Control, Adaptive Equivalent Consumption Minimization Strategy, and Learnable Partheno-Genetic Algorithm. Experimental results demonstrate a 3.5% reduction in engine costs, an 8.3% improvement in fuel efficiency, optimized torque characteristics, and minimized current requirements. These findings establish our approach as a scalable and effective solution for intelligent energy management in hybrid electric vehicles, offering a significant advancement in model-based optimization strategies.
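A toy sketch of the genetic refinement stage is given below; it is not the authors' implementation. A small GA perturbs an RL-proposed parameter vector (speed, power, and torque set-points) and ranks candidates with a placeholder "augmented reward" that trades a stand-in fuel penalty against a torque bonus; the bounds and the reward function are assumptions made for illustration only.

```python
import random

# Toy sketch of the genetic refinement stage (not the authors' implementation):
# a small GA tunes a parameter vector (engine speed, power, torque set-points)
# against a placeholder "augmented reward". Bounds and reward are illustrative.

BOUNDS = [(1000, 4000), (10, 120), (50, 300)]   # speed (rpm), power (kW), torque (Nm)

def augmented_reward(params):
    speed, power, torque = params
    fuel_penalty = 0.002 * power + 0.0005 * speed      # stand-in fuel model
    perf_bonus = 0.004 * torque
    return perf_bonus - fuel_penalty

def mutate(params, rate=0.1):
    return [min(hi, max(lo, p + random.gauss(0, rate * (hi - lo))))
            for p, (lo, hi) in zip(params, BOUNDS)]

def refine(seed, pop_size=20, generations=50):
    """seed: parameter vector proposed by the RL stage."""
    population = [mutate(seed, 0.3) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=augmented_reward, reverse=True)
        parents = population[: pop_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=augmented_reward)

print(refine(seed=[2500, 60, 150]))
```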
{"title":"Intelligent reinforcement learning for enhanced energy efficiency in hybrid electric vehicles","authors":"Shilpa Ghode , Mayuri Digalwar","doi":"10.1016/j.suscom.2025.101219","DOIUrl":"10.1016/j.suscom.2025.101219","url":null,"abstract":"<div><div>Energy Management in Hybrid Electric Vehicles (EMinHEVs) refers to optimizing energy flow within a vehicle’s powertrain to enhance efficiency and range. This process involves complex tasks such as power analysis, component characterization, and hyperparameter reconfiguration, which directly impact the performance of energy management algorithms. However, existing optimization models struggle with scalability and inter-component correlations, limiting their effectiveness. This paper introduces a novel model-based hybrid framework combining Deep Dyna Reinforcement Learning (D2RL) with Genetic Optimization to address these challenges. Unlike conventional model-free approaches, the D2RL leverages a learned internal model to simulate future states, enabling more efficient decision-making and parameter tuning. The framework dynamically refines critical engine parameters — including speed, power, and torque — for both the generator and motor. Initially, D2RL estimates optimal parameter sets, which are then fine-tuned using a Genetic Optimizer. This optimizer incorporates an augmented reward function to iteratively enhance energy efficiency and vehicle performance. The proposed method outperforms state-of-the-art techniques, including Optimal Logical Control, Adaptive Equivalent Consumption Minimization Strategy, and Learnable Partheno-Genetic Algorithm. Experimental results demonstrate a 3.5% reduction in engine costs, an 8.3% improvement in fuel efficiency, optimized torque characteristics, and minimized current requirements. These findings establish our approach as a scalable and effective solution for intelligent energy management in hybrid electric vehicles, offering a significant advancement in model-based optimization strategies.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101219"},"PeriodicalIF":5.7,"publicationDate":"2025-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trade-offs between power consumption and response time in deep learning systems: A queueing model perspective
Pub Date: 2025-10-04 | DOI: 10.1016/j.suscom.2025.101220
Yuan Yao, Bin Zhu, Yang Xiao, Hao Liu
Deep learning has revolutionized numerous fields, yet the computational resources required for training these models are substantial, leading to high energy consumption and associated costs. This paper explores the trade-off between energy usage and system performance, specifically focusing on the average waiting time of tasks in environments that manage multiple types of jobs with varying levels of priority. Recognizing that not all training tasks have the same urgency, we introduce a framework for optimizing GPU energy consumption by adjusting power limits based on job priority. Using matrix geometric approximations, we develop an algorithm to calculate the mean sojourn time and average power consumption for such systems. Through a series of experiments and simulations, we validate the model’s accuracy and demonstrate the existence of a power-performance trade-off. Our findings provide valuable guidance for practitioners seeking to balance the computational efficiency of deep learning workflows with the need for energy conservation, offering potential for both cost reduction and sustainability in large-scale AI systems.
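The trade-off can be illustrated with a deliberately simplified single-class M/M/1 stand-in (not the paper's matrix-geometric multi-priority model): the GPU power cap is assumed to scale the service rate sublinearly, and the mean sojourn time follows 1/(mu - lambda). All rates and the power-to-throughput mapping below are assumptions for illustration.

```python
# Simplified illustration of the power/response-time trade-off: a single-class
# M/M/1 stand-in, not the paper's matrix-geometric multi-priority model.
# The power-to-throughput mapping is an assumption for illustration.

ARRIVAL_RATE = 8.0          # jobs per second
BASE_RATE = 12.0            # service rate at the 300 W reference cap
REF_POWER = 300.0

def service_rate(power_cap):
    # assume throughput scales sublinearly with the GPU power cap
    return BASE_RATE * (power_cap / REF_POWER) ** 0.7

def mean_sojourn_time(power_cap):
    mu = service_rate(power_cap)
    if mu <= ARRIVAL_RATE:
        return float("inf")   # unstable queue
    return 1.0 / (mu - ARRIVAL_RATE)

for cap in (200, 250, 300, 350):
    print(f"cap={cap:3d} W  mean sojourn ~ {mean_sojourn_time(cap):.3f} s")
```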
{"title":"Trade-offs between power consumption and response time in deep learning systems: A queueing model perspective","authors":"Yuan Yao, Bin Zhu, Yang Xiao, Hao Liu","doi":"10.1016/j.suscom.2025.101220","DOIUrl":"10.1016/j.suscom.2025.101220","url":null,"abstract":"<div><div>Deep learning has revolutionized numerous fields, yet the computational resources required for training these models are substantial, leading to high energy consumption and associated costs. This paper explores the trade-off between energy usage and system performance, specifically focusing on the average waiting time of tasks in environments that manage multiple types of jobs with varying levels of priority. Recognizing that not all training tasks have the same urgency, we introduce a framework for optimizing GPU energy consumption by adjusting power limits based on job priority. Using matrix geometric approximations, we develop an algorithm to calculate the mean sojourn time and average power consumption for such systems. Through a series of experiments and simulations, we validate the model’s accuracy and demonstrate the existence of a power-performance trade-off. Our findings provide valuable guidance for practitioners seeking to balance the computational efficiency of deep learning workflows with the need for energy conservation, offering potential for both cost reduction and sustainability in large-scale AI systems.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101220"},"PeriodicalIF":5.7,"publicationDate":"2025-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A resilient IoT-enabled framework using hybrid decision tree and wavelet transform for secure and sustainable photovoltaic energy management
Pub Date: 2025-10-01 | DOI: 10.1016/j.suscom.2025.101221
Mahmoud Elsisi, Mohammed Amer, Mahmoud N. Ali, Chun-Lien Su
The increasing integration of photovoltaic (PV) systems into smart grids necessitates resilient and secure monitoring frameworks to mitigate the impact of cyber threats such as false data injection (FDI) attacks. This study presents an Internet of Things (IoT)-enabled architecture that leverages a hybrid decision tree model combined with the continuous wavelet transform (DT-CWT) for real-time anomaly detection and performance monitoring in PV systems. The CWT performs time-frequency decomposition, and the extracted scalograms are fed into a lightweight DT model. Designed for computational efficiency and low memory overhead, the proposed framework is optimized for deployment in resource-constrained edge environments. Experimental results demonstrate that the DT-CWT-based hybrid model achieves a detection accuracy of 97.89 % with a processing latency of 1.32 ms on edge devices while enhancing operational resilience, outperforming traditional machine learning baselines (Linear Discriminant Analysis (LDA), Gaussian Naïve Bayes (GNB), Support Vector Classifier (SVC), Random Forest (RF), and a plain DT) under adversarial conditions. This approach ensures data integrity, strengthens cybersecurity, and supports intelligent energy management, contributing to the realization of resilient and sustainable power grids aligned with Industry 4.0 and global sustainability goals.
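A minimal sketch of the DT-CWT idea follows, using PyWavelets and scikit-learn on synthetic PV traces (not the authors' dataset, wavelet settings, or tuning): each trace is turned into a Morlet-wavelet scalogram, reduced to per-scale mean energies, and classified by a shallow decision tree that separates normal traces from ones with an injected bias.

```python
import numpy as np
import pywt
from sklearn.tree import DecisionTreeClassifier

# Minimal sketch of the DT-CWT pipeline on synthetic PV current traces (not the
# authors' dataset or hyperparameters): a continuous wavelet transform yields a
# scalogram, and a lightweight decision tree classifies normal vs. FDI traces.

rng = np.random.default_rng(0)

def make_trace(attacked):
    t = np.linspace(0, 1, 256)
    x = np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal(t.size)
    if attacked:
        x[100:140] += 0.8          # injected bias, stand-in for an FDI event
    return x

def scalogram_features(x, scales=np.arange(1, 33)):
    coeffs, _ = pywt.cwt(x, scales, "morl")
    return np.abs(coeffs).mean(axis=1)   # mean energy per scale

X = np.array([scalogram_features(make_trace(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])

clf = DecisionTreeClassifier(max_depth=4).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```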
{"title":"A resilient IoT-enabled framework using hybrid decision tree and wavelet transform for secure and sustainable photovoltaic energy management","authors":"Mahmoud Elsisi , Mohammed Amer , Mahmoud N. Ali , Chun-Lien Su","doi":"10.1016/j.suscom.2025.101221","DOIUrl":"10.1016/j.suscom.2025.101221","url":null,"abstract":"<div><div>The increasing integration of photovoltaic (PV) systems into smart grids necessitates resilient and secure monitoring frameworks to mitigate the impact of cyber threats such as false data injection (FDI) attacks. This study presents an Internet of Things (IoT)-enabled architecture that leverages a hybrid decision tree model combined with continuous wavelet transform (DT-CWT) for real-time anomaly detection and performance monitoring in PV systems. The CWT is used for time-frequency decomposition and feeding the extracted scalograms into a lightweight DT model. Designed with computational efficiency and low memory overhead, the proposed framework is optimized for deployment in resource-constrained edge environments. Experimental results demonstrate that the DT-CWT-based hybrid model significantly enhances detection accuracy by 97.89 % with a processing latency of 1.32 ms on edge devices and operational resilience, outperforming traditional machine learning baselines (e.g., Linear Discriminant Analysis (LDA), Gaussian Naïve Bayes (GNB), Support Vector Classifier (SVC), and Random Forest (RF), and DT) under adversarial conditions. This approach ensures data integrity, strengthens cybersecurity, and supports intelligent energy management, contributing to the realization of resilient and sustainable power grids aligned with Industry 4.0 and global sustainability goals.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101221"},"PeriodicalIF":5.7,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SDN-Based NFV deployment for multi-objective resource allocation in edge computing: A deep reinforcement learning for iot workload scheduling
Pub Date: 2025-09-30 | DOI: 10.1016/j.suscom.2025.101218
Mehdi Hosseinzadeh, Amir Haider, Amir Masoud Rahmani, Farhad Soleimanian Gharehchopogh, Shakiba Rajabi, Parisa Khoshvaght, Thantrira Porntaveetus, Sang-Woong Lee
The rapid growth of Internet of Things (IoT) devices presents significant challenges, particularly regarding resource management in real-time data processing environments. Traditional cloud computing struggles with high delay times and limited bandwidth, affecting user interaction and cognitive load. Edge computing mitigates these issues by decentralizing data processing and bringing resources closer to IoT devices, ultimately influencing human-computer interaction. This paper introduces a framework for resource allocation in edge computing environments, leveraging Software-Defined Networking (SDN) and Network Function Virtualization (NFV) alongside Deep Q-Network (DQN) optimization. The framework aims to enhance user experiences by improving CPU, memory, and storage efficiency while reducing network delays, contributing to a smoother and more efficient interaction with IoT systems. Simulated results demonstrate a 40 % improvement in CPU utilization, 30 % in memory, and 20 % in storage efficiency, which can positively impact IoT devices' perceived effectiveness and usability.
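As a deliberately simplified stand-in for the DQN scheduler (tabular Q-learning instead of a neural network, with an assumed state and reward design), the sketch below learns to place arriving IoT tasks on edge nodes so that load stays balanced. The node count, task sizes, and reward are illustrative assumptions, not the paper's configuration.

```python
import random
from collections import defaultdict

# Simplified stand-in for the DQN scheduler described above: tabular Q-learning
# assigns each arriving IoT task to one of a few edge nodes, rewarding low
# post-assignment load imbalance. State and reward definitions are illustrative.

N_NODES, EPISODES, EPS, ALPHA, GAMMA = 3, 2000, 0.1, 0.5, 0.9
Q = defaultdict(lambda: [0.0] * N_NODES)

def state_of(loads):
    return tuple(min(int(l * 4), 4) for l in loads)   # coarse load buckets

for _ in range(EPISODES):
    loads = [0.0] * N_NODES
    for _ in range(10):                               # 10 tasks per episode
        s = state_of(loads)
        a = random.randrange(N_NODES) if random.random() < EPS else max(
            range(N_NODES), key=lambda i: Q[s][i])
        loads[a] += random.uniform(0.05, 0.15)        # task CPU demand
        reward = -(max(loads) - min(loads))           # penalize imbalance
        s2 = state_of(loads)
        Q[s][a] += ALPHA * (reward + GAMMA * max(Q[s2]) - Q[s][a])

empty = state_of([0.0] * N_NODES)
print("greedy choice from empty system:", max(range(N_NODES), key=lambda i: Q[empty][i]))
```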
{"title":"SDN-Based NFV deployment for multi-objective resource allocation in edge computing: A deep reinforcement learning for iot workload scheduling","authors":"Mehdi Hosseinzadeh , Amir Haider , Amir Masoud Rahmani , Farhad Soleimanian Gharehchopogh , Shakiba Rajabi , Parisa Khoshvaght , Thantrira Porntaveetus , Sang-Woong Lee","doi":"10.1016/j.suscom.2025.101218","DOIUrl":"10.1016/j.suscom.2025.101218","url":null,"abstract":"<div><div>The rapid growth of Internet of Things (IoT) devices presents significant challenges, particularly regarding resource management in real-time data processing environments. Traditional cloud computing struggles with high delay times and limited bandwidth, affecting user interaction and cognitive load. Edge computing mitigates these issues by decentralizing data processing and bringing resources closer to IoT devices, ultimately influencing human-computer interaction. This paper introduces a framework for resource allocation in edge computing environments, leveraging Software-Defined Networking (SDN) and Network Function Virtualization (NFV) alongside Deep Q-Network (DQN) optimization. The framework aims to enhance user experiences by improving CPU, memory, and storage efficiency while reducing network delays, contributing to a smoother and more efficient interaction with IoT systems. Simulated results demonstrate a 40 % improvement in CPU utilization, 30 % in memory, and 20 % in storage efficiency, which can positively impact IoT devices' perceived effectiveness and usability.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101218"},"PeriodicalIF":5.7,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145220312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy-efficient Load Balanced Edge Computing model for IoT using FL-HMM and BOA optimization
Pub Date: 2025-09-30 | DOI: 10.1016/j.suscom.2025.101215
Xiaochang Zheng, Ruixiang Guo, Shujing Lian
In next-generation wireless networks, consumers will have access to ubiquitous, low-latency computing services through mobile edge computing (MEC) devices deployed at the network's periphery. Taking into account design constraints on radio-access coverage and CS stability, we investigate the network's latency performance, namely the latency of computation and communication. We model a spatial random network with properties such as randomly dispersed nodes, parallel processing, non-orthogonal multiple access, and computing jobs generated at random. Emerging Internet of Things (IoT) applications put a premium on very fast response times, and more and more of them rely on edge computing to meet these demands; nevertheless, latency problems remain, such as the highly delay-sensitive requirements of emergent traffic. In this paper, we design a Load Balanced Edge Computing (LBEC) model for the IoT. The contribution is threefold. First, IoT devices are clustered based on load status in order to balance load at the network layer; clusters are formed with a K-hop neighbor approach. Next, cluster-level load balancing is achieved by maintaining cluster reformation through a Fuzzy Logic based Hidden Markov Model (FL-HMM). Finally, edge-level load balancing is attained through an offloading procedure driven by the proposed Bobcat Optimization Algorithm (BOA). Experimental results show that the proposed LBEC improves each metric, including response time, offloading time, and throughput, by up to 5 %.
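The K-hop cluster-formation step can be sketched as follows (a toy illustration, not the paper's full LBEC pipeline): the lightest-loaded unassigned device becomes a cluster head and absorbs every unassigned node within K hops of it. The topology, load values, and lightest-node head-selection rule are assumptions for illustration.

```python
from collections import deque

# Illustrative sketch of K-hop cluster formation (not the full LBEC pipeline):
# each round, the lightest-loaded unassigned device becomes a cluster head and
# absorbs every unassigned node within K hops. Topology and loads are assumed.

def k_hop_neighbors(adj, src, k):
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == k:
            continue
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

def form_clusters(adj, load, k=2):
    unassigned, clusters = set(adj), {}
    while unassigned:
        head = min(unassigned, key=load.get)          # lightest node leads
        members = k_hop_neighbors(adj, head, k) & unassigned
        clusters[head] = members
        unassigned -= members
    return clusters

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
load = {0: 0.2, 1: 0.5, 2: 0.1, 3: 0.7, 4: 0.3}
print(form_clusters(adj, load, k=1))
```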
{"title":"Energy-efficient Load Balanced Edge Computing model for IoT using FL-HMM and BOA optimization","authors":"Xiaochang Zheng , Ruixiang Guo , Shujing Lian","doi":"10.1016/j.suscom.2025.101215","DOIUrl":"10.1016/j.suscom.2025.101215","url":null,"abstract":"<div><div>Consumers will have access to ubiquitous, low-latency computing services through the deployment of mobile edge computing (MEC) devices situated at the network's peripheral in next-generation wireless networks. Taking into account the design-based constraints on radio-access coverage and CS stability, we investigate the network's latency performance, namely the latency of computation and communication. Here, we want to model a spatial random network that has properties such as randomly dispersed nodes, parallel processing, non-orthogonal multiple access, and computing jobs that are produced at random. The emerging Internet of Things apps are putting a premium on very fast response times, and more and more people are turning to the edge computing system to handle these demands. Regardless, problems with latency (such as very sensitive delay required by emergent traffic). In this paper, we designed a Load Balanced Edge Computing (LBEC) model for Internet of Things (IoT). The overall contributions lies in three fold: First, the IoT devices are clustered based on load status in order to balance load in the network layer. For cluster formation, we presented K-Hop neighbor approach. In next, the cluster level load balancing is achieved by maintaining cluster reformation through Fuzzy Logic based Hidden Markov Model (FL-HMM). Finally, edge-level load balancing is attained through offloading procedure. We proposed Bobcat Optimization Algorithm (BOA). Final experimental results show that the proposed LBEC achieves better performance up to 5 % in each parameter such as response time, offloading time and throughput.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101215"},"PeriodicalIF":5.7,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A two-stage spatio-temporal flexibility-based energy optimization of internet data centers in active distribution networks based on robust control and transformer machine learning strategy
Pub Date: 2025-09-28 | DOI: 10.1016/j.suscom.2025.101214
Ashkan Safari, Kamran Taghizad Tavana, Mehrdad Tarafdar Hagh, Ali Esmaeel Nezhad
Internet data centers (IDCs) are critical infrastructures supporting the digital economy, necessitating a stable and resilient energy supply to ensure continuous operation and meet increasing computational demands. This study develops an advanced optimization framework that improves IDC energy efficiency by leveraging their spatio-temporal flexibility for intelligent participation in power system operations. The proposed framework uses an energy portfolio comprising combined heat and power (CHP) units, fuel cells (FCs), locally controllable generators (LCGs), and renewable energy sources (RESs) to reduce reliance on the main grid while maintaining operational efficiency. To address supply and demand uncertainties, robust optimization (RO) is applied. Furthermore, extreme gradient boosting (XGBoost) is used for feature selection and engineering, identifying the key parameters that most affect IDC behavior. These features are then fed into a Transformer-based machine learning (ML) model, which captures complex spatio-temporal dependencies and provides accurate forecasts. The predictions are then incorporated into the RO-based decision-making process to support real-time energy optimization. The proposed framework is validated on the IEEE 33-bus standard distribution network, simulating realistic IDC operation scenarios. Results show the superior performance of the proposed strategy, achieving at least a 35.3 % improvement in mean absolute error (MAE), reduced to 16.22 kWh, and a 16.7 % improvement in root mean square error (RMSE), reduced to 33.56 kWh, compared to conventional ML models. Additionally, the proposed model is evaluated with further KPIs: root mean square relative error (RMSRE = 0.35), mean square relative error (MSRE = 0.12), mean absolute relative error (MARE = 0.16), normalized RMSE (nRMSE = 0.14), and normalized MAE (nMAE = 0.08). These findings confirm the robustness and effectiveness of the proposed hybrid framework in enhancing IDC operational efficiency.
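The XGBoost feature-selection step might look like the sketch below on synthetic data (not the paper's IDC measurements): a gradient-boosted regressor ranks candidate drivers of data-center energy demand, and only the top-ranked features would be passed on to the Transformer forecaster. The feature names and the toy target relationship are assumptions.

```python
import numpy as np
from xgboost import XGBRegressor

# Minimal sketch of the feature-selection step on synthetic data (not the
# paper's IDC measurements): XGBoost ranks candidate drivers of data-center
# energy demand; only the top-ranked features would feed the forecaster.

rng = np.random.default_rng(1)
n = 500
features = {
    "outside_temp": rng.normal(25, 5, n),
    "workload_gpu": rng.uniform(0, 1, n),
    "workload_cpu": rng.uniform(0, 1, n),
    "hour_of_day": rng.integers(0, 24, n).astype(float),
    "noise": rng.normal(0, 1, n),
}
X = np.column_stack(list(features.values()))
# assumed ground-truth relationship for the toy target (energy demand, kWh)
y = (40 + 30 * features["workload_gpu"] + 10 * features["workload_cpu"]
     + 0.5 * features["outside_temp"] + rng.normal(0, 1, n))

model = XGBRegressor(n_estimators=200, max_depth=3).fit(X, y)
ranking = sorted(zip(features, model.feature_importances_), key=lambda p: -p[1])
print("top features:", ranking[:3])
```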
{"title":"A two-stage spatio-temporal flexibility-based energy optimization of internet data centers in active distribution networks based on robust control and transformer machine learning strategy","authors":"Ashkan Safari , Kamran Taghizad Tavana , Mehrdad Tarafdar Hagh , Ali Esmaeel Nezhad","doi":"10.1016/j.suscom.2025.101214","DOIUrl":"10.1016/j.suscom.2025.101214","url":null,"abstract":"<div><div>Internet data centers (IDCs) are critical infrastructures supporting the digital economy, necessitating stable and resilient energy supply to ensure continuous operation and meet increasing computational demands. This study develops an advanced optimization framework. The framework improves IDC energy efficiency by leveraging their spatio-temporal flexibility for intelligent participation in power system operations. The proposed framework uses an energy portfolio comprising combined heat and power (CHP) units, fuel cells (FCs), locally controllable generators (LCGs), and renewable energy sources (RESs), to reduce reliance on the main grid while maintaining operational efficiency. To address supply/demand uncertainties, robust optimization (RO) is applied. Furthermore, extreme gradient boosting (XGBoost) is used for feature selection and engineering, identifying key parameters mostly effecting the IDCs behavior. These features are then fed into a Transformer-based machine learning (ML) model, which captures complex spatio-temporal dependencies and provides accurate forecasts. The predictions are then incorporated into the RO-based decision-making process to support real-time energy optimization. The proposed framework is validated on the IEEE 33-bus standard distribution network, simulating realistic IDC operation scenarios. Results show the higher performance of the proposed strategy, achieving at least 35.3 % improvement in mean absolute error (MAE), reduced to 16.22 kWh, and 16.7 % improvement in root mean square error (RMSE), reduced to 33.56 kWh, compared to conventional ML models. Additionally, the proposed model is evaluated by the other KPIs of root mean square relative error (RMSRE=0.35), mean square relative error (MSRE=0.12), mean absolute relative error (MARE=0.16), normalized RMSE (nRMSE=0.14), and normalized MAE (nMAE=0.08). These findings confirm the robustness and effectiveness of the proposed hybrid framework in enhancing IDC operational efficiency.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101214"},"PeriodicalIF":5.7,"publicationDate":"2025-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145220314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-objective optimization of regional energy systems with exergy efficiency and user satisfaction dynamics
Pub Date: 2025-09-23 | DOI: 10.1016/j.suscom.2025.101213
Xuecheng Wu, Qiongbing Xiong, Cizhen Yu
The evolving energy landscape is increasingly integrating diverse energy sources (electricity, gas, heat, and cooling), reflecting a strategic shift driven by smart technologies and rising renewable adoption. However, the variability of renewable supply requires enhanced flexibility in demand-side management. This study presents a novel approach to optimizing regional integrated energy systems through a two-layer closed-loop model that incorporates exergy efficiency and user satisfaction dynamics. The model addresses the limitations of traditional energy systems, which often operate within the constraints of a single energy resource and fail to fully integrate renewable energies. The proposed model optimizes energy production, conversion, transmission, and consumption using a multi-objective framework that includes economic, environmental, and exergy efficiency considerations. The proposed optimization approach significantly improves the performance of integrated energy systems: energy efficiency is enhanced by 8.36 %, while exergy efficiency shows a notable increase of 1.61 %. Emissions are reduced by approximately 16.3 %, demonstrating the environmental benefits of the model. Though operational costs rise slightly, the trade-off favors sustainability with substantial gains in energy and environmental outcomes. The modified Multi-Objective Particle Swarm Optimization (MOPSO) algorithm outperforms traditional methods such as NSGA-II and standard PSO, achieving a higher hypervolume value that indicates better convergence and solution diversity. This makes MOPSO a robust tool for solving multi-objective optimization problems in energy management.
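At the core of MOPSO-style methods is a non-dominated (Pareto) archive. The short sketch below shows the dominance test and archive update for a two-objective minimization of cost and emissions; the objective values are illustrative, not results from the paper.

```python
# Small sketch of the non-dominated archive update at the core of MOPSO-style
# methods (illustrative values; minimization of [cost, emissions]).

def dominates(a, b):
    """True if a is at least as good as b in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    if any(dominates(kept, candidate) for kept in archive):
        return archive                                    # candidate is dominated
    return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]

archive = []
for point in [(120.0, 8.4), (110.0, 9.0), (115.0, 8.0), (130.0, 7.5), (112.0, 8.9)]:
    archive = update_archive(archive, point)
print("Pareto front approximation:", archive)
```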
{"title":"Multi-objective optimization of regional energy systems with exergy efficiency and user satisfaction dynamics","authors":"Xuecheng Wu, Qiongbing Xiong, Cizhen Yu","doi":"10.1016/j.suscom.2025.101213","DOIUrl":"10.1016/j.suscom.2025.101213","url":null,"abstract":"<div><div>The evolving energy landscape is increasingly integrating diverse energy sources, electricity, gas, heat, and cooling, reflecting a strategic shift driven by smart technologies and rising renewable adoption. However, the variability of renewable supply requires enhanced flexibility in demand-side management. This study presents a novel approach to optimizing regional integrated energy systems through a two-layer closed-loop model that incorporates exergy efficiency and user satisfaction dynamics. The model addresses the limitations of traditional energy systems, which often operate within the constraints of singular energy resources and fail to fully integrate renewable energies. The proposed model optimizes energy production, conversion, transmission, and consumption by using a multi-objective framework that includes economic, environmental, and exergy efficiency considerations. The proposed optimization approach significantly improves the performance of integrated energy systems. The energy efficiency is enhanced by 8.36 %, while exergy efficiency shows a notable increase of 1.61 %. Emissions are reduced by approximately 16.3 %, demonstrating the environmental benefits of the model. Though operational costs rise slightly, the trade-off favors sustainability with substantial gains in energy and environmental outcomes. The modified Multi-Objective Particle Swarm Optimization (MOPSO) algorithm outperforms traditional methods like NSGA-II and Standard PSO, achieving a higher Hypervolume value, indicating better convergence and solution diversity. This makes MOPSO a robust tool for solving multi-objective optimization problems in energy management.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101213"},"PeriodicalIF":5.7,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145158340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An enhanced hybrid optimization model for renewable energy storage: Integrating GWO and WOA, with Lévy mechanisms
Pub Date: 2025-09-22 | DOI: 10.1016/j.suscom.2025.101207
Ercan Erkalkan
This study addresses renewable-energy storage scheduling, a high-dimensional, multimodal optimization task, by proposing an enhanced Grey Wolf–Whale Optimization Algorithm (EGW–WOA). The method fuses GWO's hierarchical leadership with WOA's spiral exploitation and augments them with Lévy flights and progress-triggered chaotic re-initialization. Across 100 Monte-Carlo trials, EGW–WOA reduced the 24 h operating cost to 2.94 × 10^5 ± 7.97 × 10^4, improving over WOA by 16.62%, GA by 10.15%, FPA by 63.6%, and HS by 80.76%, with a 100% feasibility rate. It achieved the lowest dispersion (Std = 7.97 × 10^4; Max–Min spread = 3.82 × 10^5), shaved peak-demand charges by approximately 9%, and limited depth-of-discharge swings to below 35%, projecting a 12%–18% life extension. A 50-iteration run completed in 38.6 s on a 3.4 GHz CPU, over 20× faster than a comparable MILP baseline, demonstrating suitability for near-real-time PV–wind microgrid control. Within the scope of Sustainable Computing: Informatics and Systems, this work delivers a reproducible, open-source optimization engine with non-parametric statistical validation and edge-suitable runtimes, linking algorithmic advances to system-level sustainability metrics (LCOS, demand charges). The results show how algorithm–system co-design can lower operating cost and risk while preserving battery health in cyber–physical energy systems.
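The Lévy-flight ingredient can be sketched with Mantegna's method, a common way to draw heavy-tailed steps; the beta value and step scale below are typical choices, not necessarily the paper's settings, and the 24-hour set-point vector is a placeholder.

```python
import math
import random

# Sketch of a Lévy-flight step generator (Mantegna's method), the kind of
# perturbation the hybrid algorithm adds on top of GWO/WOA position updates.
# beta = 1.5 is a typical choice, not necessarily the paper's setting.

def levy_step(beta=1.5):
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

# perturb a candidate storage schedule (one charge/discharge set-point per hour)
schedule = [0.0] * 24
step_scale = 0.05
perturbed = [x + step_scale * levy_step() for x in schedule]
print(perturbed[:6])
```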
{"title":"An enhanced hybrid optimization model for renewable energy storage: Integrating GWO and WOA, with Lévy mechanisms","authors":"Ercan Erkalkan","doi":"10.1016/j.suscom.2025.101207","DOIUrl":"10.1016/j.suscom.2025.101207","url":null,"abstract":"<div><div>This study addresses renewable-energy storage scheduling — a high-dimensional, multimodal optimization task — by proposing an enhanced Grey Wolf–Whale Optimization Algorithm (EGW–WOA). The method fuses GWO’s hierarchical leadership with WOA’s spiral exploitation and augments them with Lévy flights and progress-triggered chaotic re-initialization. Across 100 Monte-Carlo trials, EGW–WOAreduced 24<!--> <!-->h operating cost to <span><math><mrow><mn>2</mn><mo>.</mo><mn>94</mn><mo>×</mo><mn>1</mn><msup><mrow><mn>0</mn></mrow><mrow><mn>5</mn></mrow></msup><mo>±</mo><mn>7</mn><mo>.</mo><mn>97</mn><mo>×</mo><mn>1</mn><msup><mrow><mn>0</mn></mrow><mrow><mn>4</mn></mrow></msup></mrow></math></span>, improving over WOA by 16.62%, GA by 10.15%, FPA by 63.6%, and HS by 80.76%, with a 100% feasibility rate. It achieved the lowest dispersion (Std <span><math><mrow><mo>=</mo><mn>7</mn><mo>.</mo><mn>97</mn><mo>×</mo><mn>1</mn><msup><mrow><mn>0</mn></mrow><mrow><mn>4</mn></mrow></msup></mrow></math></span>; Max–Min spread <span><math><mrow><mo>=</mo><mn>3</mn><mo>.</mo><mn>82</mn><mo>×</mo><mn>1</mn><msup><mrow><mn>0</mn></mrow><mrow><mn>5</mn></mrow></msup></mrow></math></span>), shaved peak-demand charges by <span><math><mo>≈</mo></math></span>9%, and limited depth-of-discharge swings to <span><math><mrow><mo><</mo><mn>35</mn></mrow></math></span>%, projecting a 12%–18% life extension. A 50-iteration run completed in 38.6<!--> <!-->s on a 3.4<!--> <!-->GHz CPU — over <span><math><mrow><mn>20</mn><mo>×</mo></mrow></math></span> faster than a comparable MILP baseline — demonstrating suitability for near-real-time PV–wind microgrid control. Within the scope of <em>Sustainable Computing: Informatics and Systems</em>, this work delivers a reproducible, open-source optimization engine with non-parametric statistical validation and edge-suitable runtimes, linking algorithmic advances to system-level sustainability metrics (LCOS, demand charges). The results show how algorithm–system co-design can lower operating cost and risk while preserving battery health in cyber–physical energy systems.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101207"},"PeriodicalIF":5.7,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145158339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secured and effective task scheduling in cloud computing using Levy Flight - Secretary Bird Optimization and Hash-based Message Authentication Code – Secure Hash Authentication 256
Pub Date: 2025-09-21 | DOI: 10.1016/j.suscom.2025.101211
Nida Kousar Gouse, Gopala Krishnan Chandra Sekaran
Cloud Computing (CC) provides on-demand access to dynamic computing resources and has become a widely adopted computing technology. Effective Task Scheduling (TS) is an essential aspect of CC, crucial for optimizing task distribution over available resources to achieve high performance. Assigning tasks in cloud environments is a complex process influenced by multiple factors such as network bandwidth availability, makespan, and cost. This study proposes a scheme combining the Hash-based Message Authentication Code with SHA-256 (HMAC-SHA256) and the Advanced Encryption Standard (AES) to ensure enhanced security in the task scheduling process within the CC environment. The HMAC-SHA256 algorithm is utilized for key generation, providing integrity verification and data authentication. The AES algorithm is employed to encrypt task data, and the Levy Flight - Secretary Bird Optimization (LF-SBO) algorithm is then applied to schedule tasks optimally in the cloud. The proposed HMAC-SHA256 – AES and LF-SBO algorithms demand lower energy requirements of 121.6 J for 10 tasks, 180.48 J for 25 tasks, 310.21 J for 50 tasks, 400.15 J for 75 tasks, and 520.34 J for 100 tasks, outperforming existing Particle Swarm Optimization (PSO).
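The security layer might be sketched as follows; the LF-SBO scheduling part is omitted, and the key-derivation construction shown is an assumption rather than the paper's exact design. An HMAC-SHA256 tag over a master secret and a task identifier serves as a per-task 256-bit AES key, and AES-GCM encrypts and authenticates the task payload, using Python's hmac module and the cryptography package.

```python
import hashlib
import hmac
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Illustrative sketch of the security layer (scheduling omitted): an HMAC-SHA256
# tag over a master secret and task identifier acts as a per-task 256-bit AES key,
# and AES-GCM encrypts the task payload. The key derivation is an assumption,
# not the paper's exact construction.

MASTER_SECRET = os.urandom(32)

def derive_task_key(task_id: str) -> bytes:
    return hmac.new(MASTER_SECRET, task_id.encode(), hashlib.sha256).digest()

def encrypt_task(task_id: str, payload: bytes):
    key = derive_task_key(task_id)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, payload, task_id.encode())
    return nonce, ciphertext

nonce, blob = encrypt_task("task-042", b"input data for the scheduled task")
print(len(blob), "bytes of authenticated ciphertext")
```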
{"title":"Secured and effective task scheduling in cloud computing using Levy Flight - Secretary Bird Optimization and Hash-based Message Authentication Code – Secure Hash Authentication 256","authors":"Nida Kousar Gouse, Gopala Krishnan Chandra Sekaran","doi":"10.1016/j.suscom.2025.101211","DOIUrl":"10.1016/j.suscom.2025.101211","url":null,"abstract":"<div><div>Dynamic computing resources are accessible through Cloud Computing (CC), which has gained popularity as a computing technology. Effective Task Scheduling (TS) is an essential aspect of CC, crucial in optimizing task distribution over available resources for high performance. Assigning tasks in cloud environments is a complex process influenced by multiple factors such as network bandwidth availability, makespan and cost considerations. This study proposes a Hash-based Message Authentication Code – Secure Hash Authentication 256 (HMAC-SHA256) and Advanced Encryption Standard (AES) to ensure enhanced security in the task scheduling process within the CC environment. The HMAC-SHA256 algorithm is utilized for key generation, providing integrity verification and data authentication. The AES algorithm is employed to encrypt task data, then the Levy Flight - Secretary Bird Optimization (LF-SBO) algorithm is implemented to schedule optimal tasks in the cloud. The proposed HMAC-SHA256 – AES and LF-SBO algorithms demand lower energy requirements of 121.6 J for 10 tasks, 180.48 J for 25 tasks, 310.21 J for 50 tasks, 400.15 J for 75 tasks, and 520.34 J for 100 tasks, outperforming existing Particle Swarm Optimization (PSO).</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101211"},"PeriodicalIF":5.7,"publicationDate":"2025-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145118045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}