Pub Date: 2025-10-15 | DOI: 10.1016/j.suscom.2025.101232
Arman Gheysari, Hamid R. Zarandi
Proof of Work (PoW) blockchain networks face significant challenges in balancing security and performance. Attacks such as selfish mining and eclipse attacks pose serious threats to the sustainability of these networks. This paper presents a method for optimizing consensus-algorithm and network parameters using a Genetic Algorithm (GA), with the goal of enhancing performance while maintaining security. We target three key parameters: block size, block interval, and block propagation mechanism, minimizing both the median block propagation time and the stale block rate while preserving the attack resilience of PoW blockchain networks. The work provides a systematic approach for configuring the network parameters of PoW blockchains, enhancing performance without compromising security or increasing vulnerability to common attack vectors. To identify practical configurations, we employ a simulation-based method within a given network simulation environment. The approach is iterative: the GA selects the best-performing solutions based on their fitness with respect to propagation delays and attack vulnerabilities. As a result, the method improves the overall performance of PoW blockchain networks without raising security concerns.
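The GA loop described above, evolving block size, block interval, and propagation mechanism against a fitness score, can be sketched as follows. The surrogate fitness function is a made-up stand-in for the authors' network simulator, and the parameter ranges and mechanism names are illustrative assumptions:

```python
import random

random.seed(0)

# Hypothetical search space: the paper optimizes block size, block interval,
# and the block propagation mechanism.  Ranges and mechanism names here are
# invented for illustration, not taken from the paper.
MECHANISMS = ["flooding", "compact-blocks", "gossip"]

def random_config():
    return {"block_size_mb": random.uniform(0.5, 8.0),
            "block_interval_s": random.uniform(30.0, 600.0),
            "mechanism": random.choice(MECHANISMS)}

def fitness(cfg):
    # Surrogate for the network simulator: propagation time grows with block
    # size, and the stale-block rate grows with propagation time relative to
    # the block interval.  Lower is better.
    speed = {"flooding": 1.0, "compact-blocks": 0.6, "gossip": 0.8}[cfg["mechanism"]]
    prop_time_s = cfg["block_size_mb"] * 2.0 * speed
    stale_rate = prop_time_s / cfg["block_interval_s"]
    return prop_time_s + 100.0 * stale_rate

def evolve(pop_size=30, generations=40, elite=5, mut_rate=0.2):
    pop = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:elite]                                   # elitist selection
        children = []
        while len(children) < pop_size - elite:
            a, b = random.sample(parents, 2)
            child = {k: random.choice([a[k], b[k]]) for k in a}  # uniform crossover
            if random.random() < mut_rate:                       # mutation
                child["block_size_mb"] *= random.uniform(0.8, 1.2)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

In the paper the fitness would come from simulated median propagation time, stale block rate, and attack vulnerability; only the GA scaffolding (selection, crossover, mutation) carries over from this sketch.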
{"title":"Security-aware optimization of PoW-based blockchain performance using a genetic algorithm approach","authors":"Arman Gheysari, Hamid R. Zarandi","doi":"10.1016/j.suscom.2025.101232","DOIUrl":"10.1016/j.suscom.2025.101232","url":null,"abstract":"<div><div>Proof of Work (PoW) Blockchain networks face significant challenges in balancing security and performance. Various attacks, such as selfish mining and Eclipse attacks, pose serious threats to the sustainability of these networks. This paper presents an optimization method of configuring consensus algorithm and network parameters using a Genetic Algorithm (GA). Our goal is to enhance performance while maintaining security. We specifically target key parameters for optimization: block size, block interval, and block propagation mechanism. The goal is to minimize both median block propagation time and stale block rate, and it preserves the attack resilience of PoW blockchain networks. The presented work provides a systematic approach for configuring network of PoW blockchain parameters. It offers a solution to enhance performance without compromising security or increasing vulnerability to common attack vectors. To identify practical configurations, we employ a simulation-based method within a given network simulation environment. The approach is generally quite iterative, with GA selecting the best-performing solutions based on their fitness regarding propagation delays and attack vulnerabilities. 
As a result, the method achieves an overall enhancement in the performance of PoW blockchain networks without increasing security concerns.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101232"},"PeriodicalIF":5.7,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145320428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-15 | DOI: 10.1016/j.suscom.2025.101233
Pingliang Ding , Qijun Du
The growing complexity and volume of data in Chinese critical care settings demand intelligent, scalable, and explainable systems for mortality prediction. Conventional approaches, such as rule-based models and standalone machine learning models, are largely ineffective at combining the multimodal characteristics of ICU data in Chinese hospitals, especially when only structured clinical variables or time-series vital signs are considered. To address these limitations, MediCloudX presents a hybrid Deep Learning (DL) model that applies TabNet to structured electronic health records (EHRs) and Informer to long-term ICU time-series data. Combining the two enables higher prediction accuracy by selecting interpretable features from the structured data and extracting long-term dependencies from the ICU signals. Hyperparameter optimization with the Reptile Search Algorithm (RSA) improves model performance with minimal human intervention. On a Chinese ICU dataset, MediCloudX achieved an accuracy of 98.0 %, sensitivity of 100 %, specificity of 96.0 %, and F1-score of 98.04 %, surpassing state-of-the-art models such as CatBoost (AUC = 0.889) and LSTM-augmented scoring systems (AUC ≈ 0.898). The cloud-native architecture of MediCloudX provides elastic scaling, low inference latency, and secure data handling, making it suitable for real-time ICU applications in China. The system is explainable and resource-efficient, with strong prospects for deployment in intelligent hospitals.
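The late-fusion idea, one branch for structured EHR features and one for ICU time series, can be illustrated with stand-in encoders. TabNet and Informer are replaced here by fixed random projections, so the sketch shows only the data flow and shapes, not the actual models; all dimensions are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoders: the paper feeds structured EHR features to TabNet and
# long ICU time series to Informer.  Here each encoder is a fixed random
# projection, purely to illustrate the late-fusion data flow.
def encode_structured(x_tab, w_tab):
    return np.tanh(x_tab @ w_tab)                  # (batch, hidden)

def encode_series(x_ts, w_ts):
    pooled = x_ts.mean(axis=1)                     # mean-pool over time steps
    return np.tanh(pooled @ w_ts)                  # (batch, hidden)

def predict_mortality(x_tab, x_ts, params):
    fused = np.concatenate([encode_structured(x_tab, params["w_tab"]),
                            encode_series(x_ts, params["w_ts"])], axis=1)
    logits = fused @ params["w_out"]
    return 1.0 / (1.0 + np.exp(-logits))           # sigmoid risk score in (0, 1)

n, d_tab, t_steps, d_ts, hidden = 4, 10, 48, 6, 8  # invented sizes
params = {"w_tab": rng.normal(size=(d_tab, hidden)),
          "w_ts": rng.normal(size=(d_ts, hidden)),
          "w_out": rng.normal(size=(2 * hidden,))}
risk = predict_mortality(rng.normal(size=(n, d_tab)),
                         rng.normal(size=(n, t_steps, d_ts)), params)
```

A trained version would learn the projection weights end-to-end; the point here is that the two modalities are encoded separately and concatenated before the prediction head.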
{"title":"MediCloudX: A scalable and secure cloud-based big data analytics framework for smart healthcare applications","authors":"Pingliang Ding , Qijun Du","doi":"10.1016/j.suscom.2025.101233","DOIUrl":"10.1016/j.suscom.2025.101233","url":null,"abstract":"<div><div>The growing complexity and volume of data in the Chinese critical care setting necessitates the need to predict mortalities with intelligent and scalable and explainable systems. Conventional approaches, like rule-based models and independent machine learning models, are largely ineffective at combining the multimodal characteristics of ICU data in Chinese hospitals, especially when only structured clinical variables or time-series vital data are considered. To resolve them, Medi CloudX presents a hybrid Deep Learning (DL) model based on TabNet when working with structured electronic health records (EHRs) and Informer when dealing with long-term time-series data on ICUs. This is a combination of the two which enables a higher accuracy of prediction by selecting interpretable features among structured data and extracting long-term dependencies in ICU signals. The Reptile Search Algorithm (RSA) search hyperparameter optimization improves the performance of models with minimum human intervention. MediCloudX on a dataset of Chinese ICU scored an accuracy of 98.0 %, sensitivity of 100, specificity of 96.0, and F1-score of 98.04, surpassing state-of-the-art models such as CatBoost (AUC = 0.889), and LSTM-augmented scoring systems (AUC ≈ 0.898). The cloud-native structure of MediCloudX guarantees scale elasticity, minimal inference latency, and safe data handling, which are suitable to real-time applications in the ICU in China. 
This smart and high-achieving system is explainable and efficient in resource utilization, and it has great prospects of implementation in intelligent hospitals.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101233"},"PeriodicalIF":5.7,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145362783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The increasing adoption of Internet of Things (IoT) systems demands secure, energy-efficient, and scalable solutions capable of supporting mission-critical operations. Traditional blockchain-based Distributed Ledger Technologies (DLTs), however, face limitations such as high energy consumption, poor scalability, and transaction fees, making them ill-suited to IoT environments. This paper presents a structured review of IOTA’s Tangle, a lightweight, feeless, and scalable DLT designed specifically for decentralized IoT architectures. The study categorizes recent IOTA-based approaches into four key domains: security, privacy, scalability, and energy efficiency. The surveyed literature is systematically classified and analyzed, highlighting the core challenges addressed by each approach. Comparative evaluation reveals the strengths and limitations of current methods in meeting IoT requirements. The findings suggest that while IOTA offers several advantages over traditional blockchains, integrating hybrid and comprehensive solutions remains a promising direction for future research. The paper concludes by outlining open challenges and opportunities for advancing IOTA-enabled IoT systems.
{"title":"Toward a secure and scalable IoT: A survey of IOTA-based distributed ledger technologies","authors":"Tariq Alsboui , Hussain Al-Aqrabi , Ahmad Manasrah , Mahmoud Artemi","doi":"10.1016/j.suscom.2025.101225","DOIUrl":"10.1016/j.suscom.2025.101225","url":null,"abstract":"<div><div>The increasing adoption of Internet of Things (IoT) systems demands secure, energy-efficient, and scalable solutions capable of supporting mission-critical operations. Traditional blockchain-based Distributed Ledger Technologies (DLTs), however, face limitations such as high energy consumption, poor scalability, and transaction fees, making them less ideal for IoT environments. This paper presents a structured review of IOTA’s Tangle, a lightweight, feeless, and scalable DLT designed specifically for decentralized IoT architectures. The study categorizes recent IOTA-based approaches into four key domains: security, privacy, scalability, and energy efficiency. The surveyed literature is systematically classified and analyzed, highlighting the core challenges addressed by each approach. Comparative evaluation reveals the strengths and limitations of current methods in meeting IoT requirements. The findings suggest that while IOTA offers several advantages over traditional blockchains, integrating hybrid and comprehensive solutions remains a promising direction for future research. 
The paper concludes by outlining open challenges and opportunities for advancing IOTA-enabled IoT systems.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101225"},"PeriodicalIF":5.7,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145362782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-14 | DOI: 10.1016/j.suscom.2025.101231
K. Manojkumar , Alok Singh Sengar , A.P. Jyothi , Syed Mohd Faisal
Wireless Sensor Networks (WSNs) consist of independent sensor nodes that collect environmental data for a variety of applications. WSNs face two significant challenges: limited energy resources and frequent topology changes caused by mobile sinks. These issues disrupt routing and significantly reduce network longevity, and conventional protocols suffer from high energy consumption and increased packet loss. To counteract these issues, Energy-Efficient Routing and Predictive Sink Mobility (EERPSM) is proposed for WSNs. By combining clustering, optimized routing, and sink-mobility prediction, the Reflection Equivariant Quantum Neural Network with Starfish Optimization Algorithm (REQNN-SFOA) framework improves performance and reduces energy consumption. The framework first clusters sensor nodes using the Newton-Raphson-Based Optimizer (NRBO). The Addax Optimization Algorithm (AOA) then selects cluster heads, and the Billiards-Inspired Optimization Algorithm (BIOA) determines the shortest, least energy-consuming path to the sink. Sink mobility is predicted with a Reflection Equivariant Quantum Neural Network (REQNN), whose weight parameters are optimized by the Starfish Optimization Algorithm (SOA). Simulation results indicate that the proposed framework achieves a reliability of more than 99.9 % and an efficiency of 99.78 %. These improvements enhance data delivery, reduce energy consumption, and extend network lifetime. The approach effectively addresses clustering, optimized routing, and predictive mobility handling, yielding a robust solution for real-time, energy-efficient communication in mobile WSNs.
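The routing stage, finding the least energy-consuming path from a cluster head to the sink, can be illustrated with a deterministic shortest-path search. The paper uses the Billiards-Inspired Optimization Algorithm for this step; Dijkstra's algorithm below optimizes the same per-hop energy objective exactly, and the node names and energy costs are invented:

```python
import heapq

def min_energy_path(graph, src, sink):
    """Dijkstra over per-hop transmission energy.

    `graph` maps node -> {neighbor: energy_cost}.  A deterministic
    stand-in for the paper's metaheuristic (BIOA) path search; both
    minimize total per-hop energy to the sink.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == sink:
            break
        if d > dist.get(u, float("inf")):
            continue                       # stale queue entry
        for v, cost in graph[u].items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [sink], sink              # backtrack from sink to source
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[sink]

# Invented cluster-head topology with per-hop energy costs (arbitrary units).
graph = {"CH1": {"CH2": 1.2, "CH3": 2.5},
         "CH2": {"CH3": 0.6, "sink": 2.0},
         "CH3": {"sink": 0.4},
         "sink": {}}
path, energy = min_energy_path(graph, "CH1", "sink")
# path == ["CH1", "CH2", "CH3", "sink"], energy == 1.2 + 0.6 + 0.4
```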
{"title":"Energy-efficient routing and predictive sink mobility in mobile wireless sensor networks using reflection equivariant quantum neural network and star fish optimization algorithms","authors":"K. Manojkumar , Alok Singh Sengar , A.P. Jyothi , Syed Mohd Faisal","doi":"10.1016/j.suscom.2025.101231","DOIUrl":"10.1016/j.suscom.2025.101231","url":null,"abstract":"<div><div>Independent sensor nodes that collect environmental data for various uses make up Wireless Sensor Networks (WSNs). By combining clustering, optimized routing, and sink mobility, the Reflection Equivariant Quantum Neural Network using Star Fish Optimization Algorithm (REQNN-SFOA) framework improves performance and reduces energy consumption in WSNs. WSNs face two significant challenges: limited energy resources and frequent topology changes caused by mobile sinks. These issues disrupt routing and significantly reduce network longevity. Conventional protocols struggle with higher energy consumption and increased packet loss. To counteract these issues, Energy-Efficient Routing and Predictive Sink Mobility (EERPSM) is proposed for WSN. The framework first clusters sensor nodes using the Newton-Raphson-based Optimizer (NRBO). Then, the Addax Optimization Algorithm (AOA) selects the cluster heads, and the Billiards Inspired Optimization Algorithm (BIOA) determines the shortest, least energy-consuming path to the sink. Sink mobility is predicted based on a Reflection Equivariant Quantum Neural Network (REQNN). The Starfish Optimization Algorithm (SOA) is used to optimize the weight parameter. Simulation results indicate that the proposed framework achieves a reliability of more than 99.9 % and an efficiency of 99.78 %. These improvements enhance data delivery, reduce energy consumption, and extend network lifetime. 
The proposed approach effectively addresses clustering, optimized routing, and predictive mobility handling, resulting in a robust solution for instantaneous and energy-efficient communication in mobile WSNs.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101231"},"PeriodicalIF":5.7,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145362781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the evolving landscape of smart grids, accurate real-time load forecasting is essential. This study presents a methodology combining Bidirectional Long Short-Term Memory (Bi-LSTM) and Gated Recurrent Unit (GRU) networks to capture the complex temporal relationships typical of energy consumption data. Its importance lies in its ability to improve smart grid operation and thereby help utilities make well-informed decisions. The proposed hybrid model attains, on average, an overall forecasting accuracy of 95 %, exceeding the state of the art. This underscores how essential accurate load forecasting is to the proper functioning of smart grid systems. Evaluation with standard performance indicators (Mean Absolute Error (MAE) of 1.8 %, Root Mean Squared Error (RMSE) of 2.1 %, and R-squared of 0.92) demonstrates the effectiveness of the proposed approach and its potential for improving the predictability and stability of the power grid. Beyond improving forecast accuracy, this research establishes Bi-LSTM and GRU networks as central candidates for energy management in the smart grid era.
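The reported evaluation metrics (MAE, RMSE, and R-squared) are standard and can be computed directly; the definitions below are the usual ones, not specific to this paper:

```python
import numpy as np

# Standard regression metrics used in the evaluation: Mean Absolute Error,
# Root Mean Squared Error, and the coefficient of determination (R²).
def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    return float(1.0 - ss_res / ss_tot)
```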
{"title":"Enhanced energy-efficient load prediction in smart grids using bidirectional LSTM and gated recurrent unit networks","authors":"Elango Kannan , Ramesh Jayaraman , Cherukupalli Kumar , Gandhi Raj Rajamani","doi":"10.1016/j.suscom.2025.101230","DOIUrl":"10.1016/j.suscom.2025.101230","url":null,"abstract":"<div><div>In the ever-evolving landscape of smart grids, the importance of accurate real-time load forecasting cannot be overstated. This paradigm shifting study presents a revolutionary methodology of combined Bidirectional Long Short-Term Memory Networks (Bi-LSTM) and Gated Recurrent Unit (GRU) to capture complex temporal relationships typical for energy consumption data. The importance of this concept is based on its ability to improve the functioning of smart grids and, therefore, help utilities to make the correct choices. The proposed hybrid model attains, in average, an overall forecasting prediction accuracy of 95 %; this exceeds the state-of-art. This accomplishment brings into focus how accurate load forecasting is, in essence to the proper functioning of smart grid systems. The detailed calculation and overall evaluation based on the performance indicators such as Mean Absolute Error (MAE), Root Mean Squared Error (RMSE) with the result of MAE= 1.8 %, RMSE= 2.1 %, and R-squared = 0.92 provide not only the proof of the effectiveness of the proposed approach but also the possible significance for improving the predictability and stability of the power grid. 
Beyond its significance for improving the accuracy of forecasts, this research establishes Bi-LSTM and GRU networks as central to the search for the most suitable approaches to energy management in the new era of the smart grid.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101230"},"PeriodicalIF":5.7,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145415917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-10 | DOI: 10.1016/j.suscom.2025.101222
Lalit Agarwal , Bhavnesh Jaint , Anup K. Mandpura
The power grid, a critical infrastructure, relies on Supervisory Control and Data Acquisition (SCADA), a computer-based system for real-time monitoring and control. However, these systems are increasingly targeted by cyberattackers, posing significant risks to grid stability and security. Existing security solutions focus on either attack detection through signature verification or cascading-failure prediction to isolate failed components from the rest of the working system. In this paper, our objective is to detect both new and known attacks and to predict their cascading failures. We accomplish this by introducing a multi-model framework that combines XGBoost, a Transformer, and Graph Neural Networks (GNNs) to identify known and unknown cyberattacks and to forecast their cascading impacts on power grid systems. The XGBoost model detects known attack patterns, including Data Injection, Remote Tripping Command Injection, and Relay Setting Change attacks. The Transformer model identifies deviations from established attack patterns, enabling the discovery of new threats. A GNN-based cascading-failure prediction model represents the power grid as a graph to forecast how failures propagate through interconnected nodes. In rigorous testing on a real-world dataset, our framework shows strong detection performance, effective generalization to new attacks, and reliable cascading-failure prediction, achieving accuracy up to 98.6 % and an F1-score of 0.98 on multisource datasets, outperforming single-model baselines.
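The cascading-failure idea, a local failure overloading neighbors until propagation stops, can be sketched with a simple threshold model. This is a deliberately crude stand-in for the paper's learned GNN predictor; the topology, capacities, and loads below are invented:

```python
def simulate_cascade(capacity, load, edges, seeds):
    """Threshold cascade on an undirected grid graph.

    When a node fails, its load is split evenly among surviving
    neighbors; any neighbor pushed past its capacity fails next.
    A crude rule-based stand-in for the paper's GNN propagation model.
    """
    neighbors = {n: set() for n in capacity}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    failed = set(seeds)
    frontier = list(failed)
    while frontier:
        node = frontier.pop()
        alive = [v for v in neighbors[node] if v not in failed]
        if not alive:
            continue
        share = load[node] / len(alive)     # redistribute the failed load
        for v in alive:
            load[v] += share
            if load[v] > capacity[v]:       # overload -> secondary failure
                failed.add(v)
                frontier.append(v)
    return failed

# Invented 3-bus example: A fails, overloads B, but C absorbs the rest.
failed = simulate_cascade(capacity={"A": 10.0, "B": 10.0, "C": 30.0},
                          load={"A": 9.0, "B": 6.0, "C": 5.0},
                          edges=[("A", "B"), ("B", "C")],
                          seeds={"A"})
```

The GNN in the paper learns propagation behavior from data instead of this fixed redistribution rule, but the graph representation of the grid is the same.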
{"title":"Hybrid AI framework for detecting cyberattacks and predicting cascading failures in power systems","authors":"Lalit Agarwal , Bhavnesh Jaint , Anup K. Mandpura","doi":"10.1016/j.suscom.2025.101222","DOIUrl":"10.1016/j.suscom.2025.101222","url":null,"abstract":"<div><div>The power grid is a critical infrastructure, relies on Supervisory Control and Data Acquisition (SCADA), a computer-based system for real-time monitoring and control of the grid. However, these systems are increasingly being targeted by cyberattackers, posing significant risks to grid stability and security. Existing security solutions focus on either attack detection by verifying their signatures or predicting their cascading failure to isolate the failed component from the rest of the working components. In the current paper, our objective is to detect new or existing attacks and predict their cascading failure. This research accomplish the objective by introducing a new multi-model framework that combines three models, XGBoost, Transformer, and Graph Neural Networks (GNNs), to identify both known and unknown cyberattacks with forecast their cascading impacts on power grid systems. The XGBoost model detects the known attack patterns, which includes Data Injection, Remote Tripping Command Injection, Relay Setting Change Attacks. The Transformer model identifies the deviations from established attack patterns, which result in the discovery of new threats. Our evaluation of grid infrastructure attacks utilizes a GNN-based cascading failure prediction model that represents the power grid as a graph to forecast failure propagation through interconnected nodes. Through rigorous testing using an real world dataset, our framework shows exceptional detection performance while maintaining effective generalization to new attacks and strong cascading failure prediction capabilities. The results showcase accuracy up to 98. 
6% and a score of 0.98 F1 in multisource datasets, outperforming single-model baselines.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101222"},"PeriodicalIF":5.7,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145320427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-10 | DOI: 10.1016/j.suscom.2025.101229
Nerea Benito , Jose Carlos Pérez-Martínez , Juan B. Roldán , Ángela Lao , Antonio Urbina , Lucía Serrano-Luján
Memristor technologies, pivotal in the evolution of energy-efficient digital devices, have the potential to revolutionize fields like non-volatile memories, hardware cryptography, neuromorphic computing and artificial intelligence acceleration. This study applies Life Cycle Assessment (LCA) methodology to analyse the environmental impact of five memristor designs, focusing on materials and manufacturing processes. The analysis adheres to ISO 14040–44 standards and employs the ReCiPe methodology to evaluate 18 environmental impact categories, emphasizing categories such as freshwater ecotoxicity and global warming potential. The results highlight significant variations in environmental impacts across the designs, largely attributed to differences in active layer materials and manufacturing processes. Molybdenum exhibits the highest impact, particularly in freshwater ecotoxicity, while SiO₂ demonstrates the lowest overall impact. Manufacturing processes like sputtering and photolithography carried out at laboratory scale contribute disproportionately to energy consumption and environmental damage, suggesting that upscaling production to industrial efficiencies is mandatory to mitigate these impacts. Furthermore, several materials required for memristor fabrication are listed as critical by the International Energy Agency (IEA), raising concerns about supply security, resource scarcity and environmental sustainability. This analysis serves as a foundational step for optimizing memristor technologies, balancing performance demands with environmental stewardship. To the best of our knowledge, this is the first comprehensive Life Cycle Assessment that compares multiple memristor architectures using real laboratory data and evaluates their environmental impacts. This work provides a methodological foundation for future sustainability assessments in the context of emerging memory technologies.
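Aggregating many impact categories into a comparable single score, as ReCiPe-style LCA workflows do, can be sketched as a normalize-and-weight sum. The categories, normalization factors, weights, and impact values below are invented for illustration; they are not ReCiPe's actual factors or the paper's data:

```python
# Illustrative single-score aggregation across impact categories.
# All numbers below are made up -- not ReCiPe factors, not the paper's data.
NORMALIZATION = {"GWP_kgCO2eq": 8000.0, "freshwater_ecotox_CTUe": 12000.0}
WEIGHTS = {"GWP_kgCO2eq": 0.6, "freshwater_ecotox_CTUe": 0.4}

def single_score(impacts):
    # Normalize each category, then apply its weight and sum.
    return sum(WEIGHTS[c] * impacts[c] / NORMALIZATION[c] for c in impacts)

# Hypothetical per-device impacts for two active-layer materials.
designs = {
    "Mo_active_layer": {"GWP_kgCO2eq": 5.2, "freshwater_ecotox_CTUe": 48.0},
    "SiO2_active_layer": {"GWP_kgCO2eq": 1.1, "freshwater_ecotox_CTUe": 3.5},
}
ranked = sorted(designs, key=lambda d: single_score(designs[d]))
```

In a real assessment the characterized impacts per category come from the inventory and ReCiPe characterization factors; the ranking step itself is this simple.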
{"title":"Life cycle assessment of digital memories: The memristor’s environmental footprint","authors":"Nerea Benito , Jose Carlos Pérez-Martínez , Juan B. Roldán , Ángela Lao , Antonio Urbina , Lucía Serrano-Luján","doi":"10.1016/j.suscom.2025.101229","DOIUrl":"10.1016/j.suscom.2025.101229","url":null,"abstract":"<div><div>Memristor technologies, pivotal in the evolution of energy-efficient digital devices, have the potential to revolutionize fields like non-volatile memories, hardware cryptography, neuromorphic computing and artificial intelligence acceleration. This study applies Life Cycle Assessment (LCA) methodology to analyse the environmental impact of five memristor designs, focusing on materials and manufacturing processes. The analysis adheres to ISO 14040–44 standards and employs the ReCiPe methodology to evaluate 18 environmental impact categories, emphasizing categories such as freshwater ecotoxicity and global warming potential. The results highlight significant variations in environmental impacts across the designs, largely attributed to differences in active layer materials and manufacturing processes. Molybdenum exhibits the highest impact, particularly in freshwater ecotoxicity, while SiO₂ demonstrates the lowest overall impact. Manufacturing processes like sputtering and photolithography carried out at laboratory scale contribute disproportionately to energy consumption and environmental damage, suggesting that upscaling production to industrial efficiencies is mandatory to mitigate these impacts. Furthermore, several materials required for memristor fabrication are listed as critical by the International Energy Agency (IEA), raising concerns about supply security, resource scarcity and environmental sustainability. This analysis serves as a foundational step for optimizing memristor technologies, balancing performance demands with environmental stewardship. 
To the best of our knowledge, this is the first comprehensive Life Cycle Assessment that compares multiple memristor architectures using real laboratory data and evaluates their environmental impacts. This work provides a methodological foundation for future sustainability assessments in the context of emerging memory technologies.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101229"},"PeriodicalIF":5.7,"publicationDate":"2025-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-09 | DOI: 10.1016/j.suscom.2025.101224
Jiaying Wang, Xiaoqian Meng, Xuan Yang, Haibing Yin, Pingkai Fang
The increasing complexity of modern power systems, driven by rising electricity demand and large-scale integration of renewable energy resources, creates significant challenges for real-time scheduling and operational reliability. Conventional deterministic scheduling processes generally cannot accommodate the inherent uncertainty and variability of wind and solar generation and of fluctuating load demand, leading to inefficient resource allocation and increased operational costs. This research proposes a multi-objective power scheduling framework that combines a Stochastic State-Space Model (SSSM) with Reinforcement Learning (RL) for dynamic management of generation, storage, and demand uncertainty. The SSSM captures the stochastic variability of renewable generation, uncertain demand profiles, and exogenous system contingencies. The RL agent continuously learns scheduling strategies that minimize operational costs while improving availability and scheduling performance. Monte Carlo simulations over a 24-hour horizon show that the proposed method reduces operational costs by up to 20 %, improves system availability by 10 %, and achieves scheduling efficiencies above 90 % compared with traditional methods. The approach offers power system operators a feasible way to meet cost, reliability, and sustainability objectives under uncertainty, with direct relevance to real-time operation of smart grids, particularly those with high renewable penetration.
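The RL side of the framework can be illustrated with tabular Q-learning on a toy storage-dispatch problem. The price curve, storage model, and hyperparameters are invented, and the SSSM uncertainty model is omitted entirely; this shows only how an agent learns a cost-minimizing schedule:

```python
import random

random.seed(0)

# Toy dispatch MDP: each hour, charge (+1), hold (0), or discharge (-1)
# a storage unit against a known price curve.  All numbers are made up.
PRICES = [3.0, 1.0, 1.0, 4.0, 5.0, 2.0]   # price per unit of energy, per hour
ACTIONS = [-1, 0, 1]
CAP = 2                                    # storage capacity in units

def step(hour, soc, action):
    soc2 = min(max(soc + action, 0), CAP)
    cost = PRICES[hour] * (soc2 - soc)     # pay to charge, earn to discharge
    return soc2, -cost                     # reward is negative cost

def train(episodes=5000, alpha=0.2, gamma=1.0, eps=0.1):
    q = {}
    for _ in range(episodes):
        soc = 0
        for hour in range(len(PRICES)):
            if random.random() < eps:      # epsilon-greedy exploration
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q.get((hour, soc, x), 0.0))
            soc2, r = step(hour, soc, a)
            if hour + 1 < len(PRICES):
                nxt = max(q.get((hour + 1, soc2, x), 0.0) for x in ACTIONS)
            else:
                nxt = 0.0
            key = (hour, soc, a)
            q[key] = q.get(key, 0.0) + alpha * (r + gamma * nxt - q.get(key, 0.0))
            soc = soc2
    return q

def greedy_profit(q):
    soc, total = 0, 0.0
    for hour in range(len(PRICES)):
        a = max(ACTIONS, key=lambda x: q.get((hour, soc, x), 0.0))
        soc, r = step(hour, soc, a)
        total += r
    return total

q = train()
```

The learned greedy policy buys in cheap hours and sells in expensive ones; the paper's agent faces the same structure but with stochastic renewable generation and demand drawn from the SSSM.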
{"title":"Multi-objective energy-efficient power system scheduling using Stochastic State Space Model and reinforcement learning","authors":"Jiaying Wang, Xiaoqian Meng, Xuan Yang, Haibing Yin, Pingkai Fang","doi":"10.1016/j.suscom.2025.101224","DOIUrl":"10.1016/j.suscom.2025.101224","url":null,"abstract":"<div><div>The increasing complexity of modern power systems, arising from increased electricity demand and large-scale renewable energy resource integration, creates significant challenges for real-time scheduling and operational reliability. Conventional deterministic scheduling processes mostly cannot incorporate the inherent uncertainty and variability associated with the fluctuations of wind and solar generation, as well as fluctuating load demand, which leads to inefficiencies in the amount of necessary resources required and increased operational costs. This research proposes a novel multi-objective power scheduling framework that incorporates a Stochastic State-Space Model (SSSM) and Reinforcement Learning (RL) for dynamic management of generation, storage, and demand uncertainty. The SSSM takes into account the stochastic variability of renewable generation, uncertain demand profiles, and exogenous contingencies of the system. The RL agent is continuously learning the best scheduling strategies as it operates the power system to minimize operational costs while improving availability and maximizing scheduling performance. Simulation results using Monte Carlo testing over a 24-hour horizon demonstrated that the proposed method achieved a reduction of up to 20 % in operational costs, 10 % more system availability, and scheduling efficiencies of over 90 % compared to traditional methods. 
The proposed approach offers a feasible way forward for power systems operators to simultaneously meet the objectives of cost, reliability, and sustainability under a paradigm of uncertainty, while also having relevant application to real-time operation in smart grid systems, particularly in systems with high renewable energy.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101224"},"PeriodicalIF":5.7,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145267157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-08DOI: 10.1016/j.suscom.2025.101228
R. Mahadevan , P. Karpagavalli
This study presents a novel control strategy based on a multilayer discrete noise-eliminating second-order generalized integrator (MDNSOGI). The technique improves power quality in smart grids by regulating a quasi-impedance source inverter (qZSI) coupled to a photovoltaic (PV) system and a Distribution Static Compensator (DSTATCOM). The system addresses unbalanced and distorted voltage conditions, harmonic pollution, and stability under nonlinear and variable load scenarios. To achieve precise compensation, the proposed control strategy uses the MDNSOGI algorithm, which extracts the fundamental voltage components while effectively rejecting noise. MATLAB/Simulink simulations across a variety of case studies show total harmonic distortion (THD) in grid currents below 1.2 %, voltage imbalance reduced to less than 2 %, and improved voltage stability. The system outperforms conventional approaches such as the synchronous reference frame (SRF) and the classical second-order generalized integrator (SOGI), as confirmed by additional metrics including the voltage balancing index and performance under voltage sags. By eliminating derivative terms, the technique also reduces computational complexity, supporting energy-efficient and responsive power management. These findings highlight the potential of the proposed method for reliable, environmentally friendly, high-quality power delivery in advanced grids integrating sustainable energy sources.
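For readers unfamiliar with the building block the MDNSOGI extends, a conventional discrete SOGI can be sketched in a few lines. This is a minimal sketch of the classical SOGI-QSG, not the paper's multilayer variant; the sampling rate, 50 Hz fundamental, and damping gain are assumed illustrative values. It acts as an adaptive band-pass filter that recovers the fundamental from a harmonic-polluted voltage:

```python
# Minimal sketch of a classical discrete SOGI band-pass (forward-Euler):
#   v_a' = k*(v - v_a)*w - w*v_b   (in-phase, band-pass output)
#   v_b' = w * v_a                 (quadrature output)
import math

FS = 10_000            # sampling rate in Hz (assumed)
DT = 1.0 / FS
W = 2 * math.pi * 50   # fundamental angular frequency (50 Hz grid assumed)
K = math.sqrt(2)       # typical SOGI damping gain

def sogi(samples):
    """Return the band-pass (fundamental) output for a sample sequence."""
    v_a = v_b = 0.0
    out = []
    for v in samples:
        da = K * (v - v_a) * W - W * v_b
        db = W * v_a
        v_a += DT * da
        v_b += DT * db
        out.append(v_a)
    return out

# Distorted input: unit fundamental plus 30 % fifth harmonic
t = [n * DT for n in range(FS)]            # 1 s of data
v_in = [math.sin(W * s) + 0.3 * math.sin(5 * W * s) for s in t]
v_f = sogi(v_in)

# After the transient settles, the output RMS approaches 1/sqrt(2),
# i.e. the fifth harmonic is strongly attenuated.
tail = v_f[-FS // 50:]                     # last fundamental cycle
rms = math.sqrt(sum(x * x for x in tail) / len(tail))
```

At five times the fundamental, this filter's gain is roughly 0.28, so the 30 % fifth harmonic is reduced to under 10 % of the fundamental; the paper's discrete noise-eliminating multilayer structure pushes this rejection further while avoiding derivative terms.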
{"title":"Sustainable grid-connected PV system with MDNSOGI-controlled qZSI-DSTATCOM for enhanced power quality","authors":"R. Mahadevan , P. Karpagavalli","doi":"10.1016/j.suscom.2025.101228","DOIUrl":"10.1016/j.suscom.2025.101228","url":null,"abstract":"<div><div>This study presents a novel control strategy that is based on a multilayer discrete noise-eliminating second-order generalized integrator (MDNSOGI). The objective of this technique is to make the power quality in smart grids more efficient by regulating a quasi-impedance source inverter (qZSI), which is coupled to a photovoltaic (PV) system and a Distribution Static Compensator (DSTATCOM). An imbalanced and distorted voltage situation, harmonic pollution, and system stability under nonlinear and variable load scenarios are some of the difficulties that are addressed by the system. For the purpose of achieving exact compensation, the suggested control strategy makes use of the MDNSOGI algorithm, which is capable of successfully extracting basic voltage components while simultaneously rejecting noise. Simulation findings in MATLAB/Simulink across a variety of case studies reveal a total harmonic distortion (THD) in grid currents that is less than 1.2 %, a decrease in voltage imbalance to less than 2 %, and an improvement in voltage stability. The system performs better than traditional approaches, such as the synchronous reference frame (SRF) and the traditional second-order generalized integrator (SOGI). This is shown by looking at other metrics like the voltage balancing index and how well it holds up under voltage sags. Through the elimination of derivative terms, this technique also helps to minimize the complexity of computing processes, which in turn supports energy-efficient and responsive power management. 
In light of these findings, the potential of the methodology that was introduced for the delivery of power in sophisticated grids that are coupled with sustainable electrical sources in a manner that is dependable, environmentally friendly, and of high quality has been brought to light.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101228"},"PeriodicalIF":5.7,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The high energy consumption of cloud data centers (DCs) leads to a substantial carbon footprint. By reducing reliance on carbon-intensive fuels, renewable energy sources (RESs) such as wind power help mitigate greenhouse gas emissions. However, the inherent intermittency and fluctuation of RES generation, coupled with the stochastic nature of workload arrivals, complicate real-time scheduling and thereby significantly limit RES utilization efficiency in DCs. To address these issues, we propose a forecast-driven two-stage workload scheduling scheme that improves both scheduling efficiency and environmental sustainability. Specifically, we design a forecasting framework that integrates long short-term memory (LSTM) variants with a hierarchical decomposition using empirical mode decomposition (EMD) followed by variational mode decomposition (VMD). By precisely eliminating high-frequency noise and separately forecasting frequency components, the framework reduces noise interference and more accurately captures temporal patterns in workload and RES series. In the first stage, based on these forecasting results, effective global optimization is achieved in offline scheduling. In the second stage, scheduling results are dynamically adjusted based on real-time RES supply and workload demand to correct prediction errors. Experiments on real-world datasets validate the effectiveness of the proposed scheme. The forecasting models consistently outperform multiple baselines in prediction accuracy, achieving 3.41–69.46% reductions in mean absolute error compared to the state-of-the-art method. In addition, the proposed scheduling scheme increases RES utilization by 17.73–40.40% and achieves a corresponding 8.55–16.27 ton reduction in carbon emissions compared with the baselines. Furthermore, it shortens real-time scheduling latency by 81.3% relative to the real-time-only variant, underscoring its effectiveness in enabling sustainable and efficient DC operations.
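The two-stage structure can be sketched without the paper's LSTM/EMD/VMD machinery. This is a hedged toy model, not F2S-WSS itself: the slot capacity, total deferrable work, forecast numbers, and forecast-error distribution are all illustrative assumptions. Stage 1 builds an offline plan that places deferrable workload in the slots with the highest forecast wind supply; stage 2 corrects each slot online when the realized supply deviates from the forecast, deferring any shortfall:

```python
# Toy two-stage scheduler (assumed numbers, not the paper's F2S-WSS).
import random

random.seed(1)
SLOTS = 24
CAPACITY = 8.0                 # max workload a DC can run per slot (assumed)
TOTAL_WORK = 120.0             # deferrable work to place over 24 slots

# Assumed wind-supply forecast: windier during the daytime slots
forecast = [4 + (3 if 6 <= h <= 18 else 0) + random.uniform(-1, 1)
            for h in range(SLOTS)]

# Stage 1 (offline): fill the windiest slots first, up to capacity.
plan = [0.0] * SLOTS
remaining = TOTAL_WORK
for h in sorted(range(SLOTS), key=lambda h: forecast[h], reverse=True):
    take = min(CAPACITY, remaining)
    plan[h] = take
    remaining -= take
    if remaining <= 0:
        break

# Stage 2 (online): cap each slot at the realized supply; defer the rest.
realized = [max(0.0, f + random.uniform(-2, 2)) for f in forecast]
carry, executed = 0.0, []
for h in range(SLOTS):
    want = plan[h] + carry
    run = min(want, CAPACITY, realized[h])   # run only what wind can power
    carry = want - run                       # defer shortfall to later slots
    executed.append(run)
```

The invariant is that deferred work is never lost: executed work plus the final carry always equals the original total, while each slot's execution respects both the capacity limit and the realized renewable supply. The paper's contribution is making stage 1 good enough, via accurate decomposed forecasts, that stage 2's corrections (and hence real-time latency) stay small.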
{"title":"F2S-WSS: A forecast-driven two-stage workload scheduling scheme for carbon-aware geo-distributed data centers with wind power integration","authors":"Xueying Zhai , Guojun Zhu , Yunhao Zhang , Xiuping Guo , Yunfeng Peng","doi":"10.1016/j.suscom.2025.101216","DOIUrl":"10.1016/j.suscom.2025.101216","url":null,"abstract":"<div><div>The high energy consumption of cloud data centers (DCs) leads to a substantial carbon footprint. By reducing reliance on carbon-intensive fuels, renewable energy sources (RESs) such as wind power help mitigate greenhouse gas emissions. However, the inherent intermittency and fluctuation of RES generation, coupled with the stochastic nature of workload arrivals, complicate real-time scheduling and thereby significantly limit RES utilization efficiency in DCs. To address these issues, we propose a forecast-driven two-stage workload scheduling scheme that improves both scheduling efficiency and environmental sustainability. Specifically, we design a forecasting framework that integrates long short-term memory (LSTM) variants with a hierarchical decomposition using empirical mode decomposition (EMD) followed by variational mode decomposition (VMD). By precisely eliminating high-frequency noise and separately forecasting frequency components, the framework reduces noise interference and more accurately captures temporal patterns in workload and RES series. In the first stage, based on these forecasting results, effective global optimization is achieved in offline scheduling. In the second stage, scheduling results are dynamically adjusted based on real-time RES supply and workload demand to correct prediction errors. Experiments on real-world datasets validate the effectiveness of the proposed scheme. The forecasting models consistently outperform multiple baselines in prediction accuracy, achieving 3.41-69.46% reductions in mean absolute error compared to the state-of-the-art method. 
In addition, the proposed scheduling scheme increases RES utilization by 17.73–40.40% and achieves a corresponding 8.55-16.27 tons reduction in carbon emissions compared with the baselines. Furthermore, it shortens real-time scheduling latency by 81.3% relative to the real-time-only variant, underscoring its effectiveness in enabling sustainable and efficient DC operations.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101216"},"PeriodicalIF":5.7,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145267158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}