Pub Date: 2025-12-01 | Epub Date: 2025-10-04 | DOI: 10.1016/j.suscom.2025.101220
Yuan Yao, Bin Zhu, Yang Xiao, Hao Liu
Deep learning has revolutionized numerous fields, yet the computational resources required for training these models are substantial, leading to high energy consumption and associated costs. This paper explores the trade-off between energy usage and system performance, specifically focusing on the average waiting time of tasks in environments that manage multiple types of jobs with varying levels of priority. Recognizing that not all training tasks have the same urgency, we introduce a framework for optimizing GPU energy consumption by adjusting power limits based on job priority. Using matrix geometric approximations, we develop an algorithm to calculate the mean sojourn time and average power consumption for such systems. Through a series of experiments and simulations, we validate the model’s accuracy and demonstrate the existence of a power-performance trade-off. Our findings provide valuable guidance for practitioners seeking to balance the computational efficiency of deep learning workflows with the need for energy conservation, offering potential for both cost reduction and sustainability in large-scale AI systems.
Title: Trade-offs between power consumption and response time in deep learning systems: A queueing model perspective
Sustainable Computing: Informatics and Systems, vol. 48, Article 101220
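The paper's matrix-geometric algorithm is not reproduced in the abstract, but the trade-off it describes can be sketched with the closed-form M/M/1 mean sojourn time and a hypothetical cubic power model (a common approximation for frequency scaling); the rate values and the `power` function below are illustrative assumptions, not the authors' model.

```python
def mm1_sojourn(lam: float, mu: float) -> float:
    """Mean sojourn time W = 1/(mu - lam) of an M/M/1 queue (needs mu > lam)."""
    if mu <= lam:
        raise ValueError("unstable queue: service rate must exceed arrival rate")
    return 1.0 / (mu - lam)

def power(mu: float, p_static: float = 50.0, k: float = 2.0) -> float:
    """Hypothetical GPU power model: static draw plus a cubic dynamic term."""
    return p_static + k * mu ** 3

lam = 4.0  # task arrivals per second
for mu in (5.0, 6.0, 8.0):  # higher service rate = higher GPU power limit
    print(f"mu={mu:.0f}/s  sojourn={mm1_sojourn(lam, mu):.3f}s  power={power(mu):.0f}W")
```

Raising the power limit (higher mu) shortens sojourn time but inflates power superlinearly, which is the trade-off the queueing analysis quantifies across priority classes.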
Pub Date: 2025-12-01 | Epub Date: 2025-10-10 | DOI: 10.1016/j.suscom.2025.101229
Nerea Benito , Jose Carlos Pérez-Martínez , Juan B. Roldán , Ángela Lao , Antonio Urbina , Lucía Serrano-Luján
Memristor technologies, pivotal in the evolution of energy-efficient digital devices, have the potential to revolutionize fields like non-volatile memories, hardware cryptography, neuromorphic computing and artificial intelligence acceleration. This study applies Life Cycle Assessment (LCA) methodology to analyse the environmental impact of five memristor designs, focusing on materials and manufacturing processes. The analysis adheres to ISO 14040–44 standards and employs the ReCiPe methodology to evaluate 18 environmental impact categories, emphasizing categories such as freshwater ecotoxicity and global warming potential. The results highlight significant variations in environmental impacts across the designs, largely attributed to differences in active layer materials and manufacturing processes. Molybdenum exhibits the highest impact, particularly in freshwater ecotoxicity, while SiO₂ demonstrates the lowest overall impact. Manufacturing processes like sputtering and photolithography carried out at laboratory scale contribute disproportionately to energy consumption and environmental damage, suggesting that upscaling production to industrial efficiencies is mandatory to mitigate these impacts. Furthermore, several materials required for memristor fabrication are listed as critical by the International Energy Agency (IEA), raising concerns about supply security, resource scarcity and environmental sustainability. This analysis serves as a foundational step for optimizing memristor technologies, balancing performance demands with environmental stewardship. To the best of our knowledge, this is the first comprehensive Life Cycle Assessment that compares multiple memristor architectures using real laboratory data and evaluates their environmental impacts. This work provides a methodological foundation for future sustainability assessments in the context of emerging memory technologies.
Title: Life cycle assessment of digital memories: The memristor’s environmental footprint
Sustainable Computing: Informatics and Systems, vol. 48, Article 101229
Pub Date: 2025-12-01 | Epub Date: 2025-11-10 | DOI: 10.1016/j.suscom.2025.101252
Mohammed Shuaib, Shadab Alam
Decentralized energy trading is transforming how renewable energy is exchanged, while conventional aggregator-based methods hinder speed and reliability. We therefore propose a decentralized IoT-blockchain architecture that combines Convolutional Neural Network (CNN)-based fraud detection with K-Means clustering to match prosumers and consumers. Our framework completes transactions successfully in 93.9 % of cases, compared with traditional aggregator-based trading platforms, which suffer from centralization and transaction delays, and achieves a fraud detection rate of 98.5 %. It also improves energy distribution efficiency by 24.3 % and network resilience by 17.6 %, making peer-to-peer markets both viable and secure. The CNN model identifies anomalies in real time, while the clustering selects the best trade paths based on demand profiles. To verify the responsiveness, scalability, and security of the system, trading and blockchain implementation scenarios were simulated in MATLAB Simulink and Hyperledger Fabric. Compared with traditional aggregator-centered systems, this work provides an intelligent, faster, and safer platform for decentralized energy exchange.
Title: A secure and energy-efficient IoT-blockchain framework for decentralized renewable energy trading
Sustainable Computing: Informatics and Systems, vol. 48, Article 101252
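The abstract states only that K-Means clustering on demand profiles is used to match prosumers with consumers; a minimal from-scratch sketch of that grouping step (the synthetic 24-hour profiles and deterministic initialization are assumptions) might look like:

```python
import numpy as np

def kmeans(X, init, iters=20):
    """Plain Lloyd's algorithm; returns (centroids, labels)."""
    centroids = init.copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(centroids)):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Synthetic 24-hour demand profiles: daytime-peaking vs evening-peaking load.
rng = np.random.default_rng(0)
hours = np.arange(24)
day = np.exp(-(hours - 12) ** 2 / 18.0)
evening = np.exp(-(hours - 19) ** 2 / 8.0)
X = np.vstack([day + 0.05 * rng.standard_normal(24) for _ in range(10)]
              + [evening + 0.05 * rng.standard_normal(24) for _ in range(10)])

# Seed one centroid in each group so the toy run is deterministic.
_, labels = kmeans(X, X[[0, -1]])
print(labels)
```

Members of the same cluster have compatible demand shapes, so prosumer-consumer matching can then be restricted to within-cluster pairs.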
Pub Date: 2025-12-01 | Epub Date: 2025-11-12 | DOI: 10.1016/j.suscom.2025.101238
Yan Lv , Sheng Liu , Li Li , Licheng Sha , Yadi Luo
This paper introduces a novel framework for modeling and optimizing integrated energy systems (IES) by combining an advanced energy hub model with a physics-inspired optimization algorithm. The energy hub model captures partial load characteristics and complex interactions among system components, representing each device as a node to enable detailed decomposition of energy flows across electricity, heat, and cooling carriers. Unlike conventional models that rely on fixed distribution factors, this approach uses load ratios and part-load-dependent efficiency functions as optimization variables, allowing for accurate representation of nonlinear efficiency variations and inter-node effects, such as cascading energy flows. Renewable energy sources are modeled as stochastic inputs, incorporating environmental uncertainties and device-specific characteristics to enhance simulation realism and reliability assessments. To optimize the IES, a modified charge system search algorithm is developed, integrating chaotic mapping for improved global exploration. The algorithm models solutions as charged particles interacting via electrostatic forces, guided by Newtonian mechanics, and dynamically adjusts coefficients to balance exploration and convergence. This physics-based approach improves adaptability and convergence efficiency compared to traditional evolutionary algorithms. The proposed framework offers a flexible and rigorous tool for designing, analyzing, and planning resilient, multi-energy systems under dynamic and uncertain conditions.
Title: Optimizing integrated energy systems: A two-layer framework for cost-effective and sustainable solutions
Sustainable Computing: Informatics and Systems, vol. 48, Article 101238
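The key modeling idea — load ratios with part-load-dependent efficiency treated as optimization variables — can be made concrete with a toy efficiency curve; the quadratic `eta` below is an invented shape for illustration, not the paper's fitted function.

```python
def eta(load_ratio: float) -> float:
    """Hypothetical part-load efficiency: peaks at full load, sags at part load."""
    return 0.30 + 0.15 * load_ratio - 0.05 * (1.0 - load_ratio) ** 2

def node_output(capacity_kw: float, load_ratio: float) -> float:
    """Energy delivered by one hub node operating at the given load ratio."""
    return capacity_kw * load_ratio * eta(load_ratio)

for r in (0.25, 0.5, 1.0):
    print(f"load ratio {r:.2f}: {node_output(1000.0, r):.1f} kW out")
```

Because efficiency varies with the load ratio, splitting demand across nodes is a nonlinear decision — which is why the model treats load ratios as optimization variables rather than fixed distribution factors.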
Pub Date: 2025-12-01 | Epub Date: 2025-11-25 | DOI: 10.1016/j.suscom.2025.101258
Ali M. Baydoun , Ahmed S. Zekri
The energy demand of datacenters has been rising steadily, making them major contributors to global electricity consumption and carbon emissions. This paper proposes HAPSO, a hybrid metaheuristic that integrates Ant Colony Optimization (ACO) for initial virtual machine (VM) placement with discretization-aware Particle Swarm Optimization (PSO) for migration optimization, tailored for energy- and carbon-efficient VM consolidation in green cloud datacenters. In the first stage, ACO performs energy-aware placement of VMs onto physical hosts, emphasizing global search to satisfy resource constraints and minimize power usage. In the second stage, discrete PSO refines the allocation by migrating VMs from overloaded and underutilized hosts, focusing on local optimization to improve consolidation and reduce resource wastage. The novel contributions include: sequential metaheuristic hybridization, a system-informed particle initialization (seeding PSO with ACO output to ensure feasible starting solutions) and a heuristic-guided discretization method (mapping continuous updates into valid VM–host assignments), and a multi-objective fitness function that minimizes active servers and unused capacity to enhance efficiency. We implement HAPSO in CloudSimPlus and evaluate it on workloads ranging from 500 to 14,000 VMs using realistic trace-driven simulations. Results show that HAPSO reduces energy consumption by 6.72 % on average (up to 10.0 %) and carbon emissions by 10.5 %, with savings peaking at 25.8 % in mid-scale workloads, compared to the ACO baseline, while maintaining SLA compliance. Statistical significance is confirmed via Friedman, Kendall’s W, and Wilcoxon signed-rank tests, with large effect sizes. These findings highlight HAPSO’s potential to support greener, sustainable cloud operations.
Title: HAPSO: An ACO-initialized, discretization-aware PSO for energy- and carbon-efficient VM consolidation in green cloud datacenters
Sustainable Computing: Informatics and Systems, vol. 48, Article 101258
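The abstract names a multi-objective fitness that minimizes active servers and unused capacity; a hedged sketch of such a function (the weights, normalization, and MIPS figures are assumptions, not HAPSO's actual parameters) is:

```python
def fitness(assignment, vm_cpu, host_cpu, w1=0.7, w2=0.3):
    """assignment[i] = host index for VM i; lower fitness is better."""
    used = {}
    for vm, host in enumerate(assignment):
        used[host] = used.get(host, 0) + vm_cpu[vm]
    if any(u > host_cpu[h] for h, u in used.items()):
        return float("inf")  # infeasible placement gets a hard penalty
    active_frac = len(used) / len(host_cpu)                       # fewer hosts on
    waste_frac = sum(host_cpu[h] - u for h, u in used.items()) / sum(host_cpu)
    return w1 * active_frac + w2 * waste_frac

vm_cpu = [200, 300, 400, 100]   # MIPS demand per VM
host_cpu = [1000, 1000, 1000]   # MIPS capacity per host
print(fitness([0, 0, 0, 0], vm_cpu, host_cpu))  # consolidated on one full host
print(fitness([0, 1, 2, 0], vm_cpu, host_cpu))  # spread across three hosts
```

Consolidating the four VMs onto one exactly-full host scores better than spreading them, which is the behavior the PSO migration stage exploits.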
Pub Date: 2025-12-01 | Epub Date: 2025-08-29 | DOI: 10.1016/j.suscom.2025.101191
Abdennabi Morchid , Ishaq G. Muhammad Alblushi , Haris M. Khalid , Hassan Qjidaa , Rachid El Alami
Modern agriculture faces significant challenges related to water scarcity and the impacts of climate change. To ensure crop sustainability and food security, irrigation systems must be optimized. Fuzzy logic and the Internet of Things (IoT) are two cutting-edge approaches to intelligent irrigation management that adjust water delivery to plants' real needs, whereas conventional irrigation techniques are wasteful and ineffective. Although fuzzy logic and the IoT offer exciting opportunities, integrating them presents difficulties, especially concerning (1) implementation, (2) cost, and (3) data security. In light of water shortage, food security, and sustainable development issues, this article examines how IoT and fuzzy logic might be used to create smart irrigation systems. It evaluates contemporary methods for optimizing water management using fuzzy logic and the IoT, as well as the effects of climate change on irrigation. While addressing the challenges of installation costs, implementation complexity, communication reliability, and data security, the review highlights the benefits of these technologies, including reduced water consumption, increased agricultural yields, automation, and environmental adaptability. The review's final section addresses directions for future research, including the integration of new, cutting-edge technologies, enhanced decision-making models, and the adoption of sustainable solutions for more resilient and effective agriculture. Given water constraints and climate change, this study underscores the importance of intelligent irrigation systems and showcases creative methods to maximize water management and raise agricultural productivity by fusing IoT with fuzzy logic.
Title: Integrating IoT and fuzzy logic for intelligent irrigation in sustainable agriculture for improving water scarcity: Benefits and challenges
Sustainable Computing: Informatics and Systems, vol. 48, Article 101191
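To make the review's central mechanism concrete, here is a minimal Mamdani-style fuzzy rule of the kind such irrigation controllers use; the membership shapes and the 30-minute consequent are illustrative assumptions, not taken from any surveyed system.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def watering_minutes(soil_moisture_pct, temp_c):
    dry = tri(soil_moisture_pct, -1, 0, 40)
    hot = tri(temp_c, 25, 40, 55)
    wet = tri(soil_moisture_pct, 30, 100, 101)
    # Rule 1: dry AND hot -> water 30 min.  Rule 2: wet -> water 0 min.
    w1, w2 = min(dry, hot), wet
    if w1 + w2 == 0:
        return 0.0
    return (w1 * 30 + w2 * 0) / (w1 + w2)  # weighted-average defuzzification

print(watering_minutes(10, 38))  # dry soil, hot day
print(watering_minutes(90, 20))  # wet soil, mild day
```

Dry-and-hot conditions yield a long watering cycle, while wet soil suppresses irrigation regardless of temperature — the adaptive behavior the review credits with reducing water consumption.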
Pub Date: 2025-12-01 | Epub Date: 2025-09-12 | DOI: 10.1016/j.suscom.2025.101202
Salim El khediri , Pascal Lorenz
Cluster-based routing has been effective at addressing the unique problems of Wireless Sensor Networks (WSNs), such as managing energy consumption and forwarding data in large, resource-limited environments. Inspired by how the Aurora Borealis changes over time, this paper proposes the Aurora-Based Clustering Protocol (ABCP), which relies on virtual electrical drift and quantum tunneling to select flexible clusters and their heads. In ABCP, each sensor node is represented as a charged particle whose virtual charge is derived from its remaining energy and nearby data volume. Nodes are linked by streamlines created with magnetic-inspired methods, and cluster heads are selected stochastically using a fitness model that favors both load balance and central locations. The protocol supports changing network arrangements and organizes paths so that communication remains efficient wherever and whenever users move. ABCP was tested in multiple simulations with a network of 300 nodes, reflecting realistic WSN deployments. Against standard approaches such as LEACH, BeeCluster, iABC, and PSO-based schemes, ABCP saves up to 28.7% more energy and extends the network’s lifetime by at least 17.4% under varying and densely packed node conditions.
Title: Energy-efficient communication in WSNs using ABCP: An Aurora and quantum tunneling approach
Sustainable Computing: Informatics and Systems, vol. 48, Article 101202
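ABCP's exact charge and fitness formulas are not given in the abstract, so the weights below are invented for illustration; the sketch shows the stated idea — a virtual charge built from residual energy and nearby data volume, discounted by distance so that central, well-charged nodes win cluster-head election.

```python
import math

def virtual_charge(residual_j, data_kb, w_e=0.6, w_d=0.4):
    """Assumed linear mix of residual energy and buffered data volume."""
    return w_e * residual_j + w_d * data_kb

def ch_fitness(node, neighbours):
    """Higher is better: charge discounted by mean distance to neighbours."""
    mean_d = sum(math.dist(node["pos"], n["pos"]) for n in neighbours) / len(neighbours)
    return virtual_charge(node["energy"], node["data"]) / (1.0 + mean_d)

nodes = [
    {"pos": (0.0, 0.0), "energy": 2.0, "data": 1.0},
    {"pos": (1.0, 0.0), "energy": 1.5, "data": 1.5},
    {"pos": (0.5, 0.5), "energy": 1.8, "data": 2.0},  # central and well charged
]
best = max(nodes, key=lambda n: ch_fitness(n, [m for m in nodes if m is not n]))
print(best["pos"])
```

The central node wins despite not having the highest residual energy, matching the abstract's goal of balancing charge with central placement.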
Pub Date: 2025-12-01 | Epub Date: 2025-09-13 | DOI: 10.1016/j.suscom.2025.101200
Guilin He, Min Lei, Lei Han, Peifa Shan, Ruipeng Chen
Adopting the dynamic digital twin (DDT) model in smart grid distribution networks is a revolutionary step toward advanced dynamic energy management and control. However, even the most advanced systems fail to describe static architectural configurations adequately, do not offer sufficient automation, and cannot handle dynamic interactions or topological hierarchy. To overcome these restrictions, this research presents a new framework for building DDT models based on Graph Neural Networks (GNNs). GNNs outperform other deep learning models at handling graph-structured data, making them well suited to modeling the nodes and edges of smart grids. The adopted model improves the critical technical parameters, achieving a Voltage Regulation Efficiency of 92 % and a Network Efficiency of 95 %, indicating optimal power distribution and operational reliability. These findings are echoed by a Voltage Profile Deviation of 0.015 p.u. and a Power Loss Reduction of 18.3 %, suggesting that the proposed method offers better voltage-profile stability and lower energy losses than existing static models. The usefulness and applicability of the framework are demonstrated through experiments in MATLAB Simulink and Python-based libraries such as PyTorch Geometric. This study provides a novel approach to issues in applied research and lays the basis for further advancements in realistic digital twin applications for smart grids.
{"title":"Energy-efficient smart grid operations through dynamic digital twin models and deep learning","authors":"Guilin He, Min Lei, Lei Han, Peifa Shan, Ruipeng Chen","doi":"10.1016/j.suscom.2025.101200","DOIUrl":"10.1016/j.suscom.2025.101200","url":null,"abstract":"<div><div>Adopting the dynamic digital twin (DDT) model in smart grid distribution networks is a revolutionary breakthrough toward advanced dynamic energy management and control. However, even the most advanced systems fail to describe static architectural configuration adequately or they do not offer sufficient automation in this process, they are unable to handle dynamic interactions or topological hierarchy. To overcome such restrictions, this research presents a new framework for building DDT models based on Graph Neural Networks (GNNs). GNNs outperform other deep learning models when it comes to modeling graph-structured data which has application in modeling nodes and edges of smart grids. The adopted model expands the critical technical parameters' achievements and indicates a high Voltage Regulation Efficiency of 92 % and Network Efficiency belonging to 95 %; therefore, the distribution of power and operation reliability is considered optimal. The advantage of these findings is also echoed by the Voltage Profile Deviation of 0.015 p.u. and the Power Loss Reduction of 18.3 % which suggest that the proposed method offers better voltage profile stability and less energy losses than existing static models. The usefulness and applicability of the framework can be shown by performing experiments in MATLAB Simulink and Python-based libraries such as PyTorch Geometric. 
This study provides a novel approach to address issues in applied research and provides the basis for further advancements in realistic digital twin applications concerning smart grids.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101200"},"PeriodicalIF":5.7,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
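The abstract above rests on GNNs aggregating information along the nodes and edges of a grid graph. As a minimal, hypothetical sketch (not the authors' model), one graph-convolution step over a toy four-bus network can be written in plain NumPy; the symmetric normalization follows the standard GCN formulation, and the bus features and topology here are invented for illustration:

```python
import numpy as np

# Toy bus-line graph: 4 buses, edges as (from, to) pairs.
edges = [(0, 1), (1, 2), (1, 3)]

# One feature per bus, e.g. a normalized voltage reading (invented values).
x = np.array([[1.00], [0.98], [0.95], [0.97]])

def adjacency_with_self_loops(n, edges):
    """Adjacency matrix A + I, as used in the standard GCN layer."""
    a = np.eye(n)
    for i, j in edges:
        a[i, j] = a[j, i] = 1.0
    return a

def gcn_layer(x, a, w):
    """One graph-convolution step: symmetric-normalized neighbour
    averaging (D^-1/2 A D^-1/2) followed by a linear transform W."""
    d = a.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt
    return a_hat @ x @ w

a = adjacency_with_self_loops(4, edges)
w = np.array([[1.0]])   # identity weight: the step reduces to smoothing
h = gcn_layer(x, a, w)  # each bus now mixes in its neighbours' readings
print(np.round(h.ravel(), 4))
```

In a trained model `w` would be learned and stacked layers would propagate information over longer bus paths; PyTorch Geometric's `GCNConv` implements the same normalization.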
Pub Date : 2025-12-01Epub Date: 2025-10-17DOI: 10.1016/j.suscom.2025.101234
Mukhlesur Rahman , Md. Abul Kalam , Matta Mani Sankar
This paper presents a novel approach for the optimal allocation of renewable distributed generation systems (RDGs) in distribution networks (DNs) using the mountain gazelle optimization (MGO) algorithm. Suboptimal allocation of RDGs in DNs can lead to counterproductive results, including increased power losses and protection issues. It is therefore crucial to determine the optimal size and placement of RDGs to enhance the overall performance of the DNs. Addressing the challenges posed by evolving energy landscapes, this work underscores the importance of proactively planning and integrating wind turbine and solar photovoltaic-based RDG units. A crucial facet of this methodology is the incorporation of uncertainties associated with wind and solar power generation, coupled with a realistic time-varying load model comprising residential, commercial, and industrial demand profiles. Inspired by the adaptive behaviour of mountain gazelles, the MGO algorithm effectively determines the optimal locations and sizes of RDG units, achieving a better balance between exploration and exploitation than various meta-heuristic algorithms and thus yielding better solutions. Simulation results on 33-bus and 69-bus test systems validate the effectiveness of the proposed approach: energy loss is reduced by 78.47 % in the 33-bus system and by 92.09 % in the 69-bus system. These results highlight MGO's potential as a robust and effective solution for RDG allocation in DNs, outperforming existing optimization techniques.
{"title":"A novel approach for optimal allocation of renewable distributed generation systems in distribution networks employing mountain gazelle optimization algorithm","authors":"Mukhlesur Rahman , Md. Abul Kalam , Matta Mani Sankar","doi":"10.1016/j.suscom.2025.101234","DOIUrl":"10.1016/j.suscom.2025.101234","url":null,"abstract":"<div><div>This paper presents a novel approach for the optimal allocation of renewable distributed generation systems (RDGs) in distribution network system (DNs) using the mountain gazelle optimization (MGO) algorithm. Suboptimal allocation of RDGs in DNs can lead to counterproductive results, including increased power losses and protection issues. Therefore, it is crucial to determine the optimal size and placement of RDGs to enhance overall performance of the DNs. Addressing potential challenges posed by evolving energy landscapes, this research work underscores the importance of proactively planning and integrating wind turbine and solar photovoltaic-based RDGs units. A crucial facet of this methodology is the incorporation of uncertainties associated with wind and solar power generation, coupled with a realistic load model that varies with time and comprises residential, commercial, and industrial demand profiles. Inspired by the adaptive behaviour of mountain gazelles, the MGO algorithm effectively determines the optimal locations and sizes of RDG units. The MGO algorithm achieves a superior balance between exploration and exploitation compared to various meta-heuristic algorithms, resulting in more optimal solutions. Simulation results on 33-bus and 69-bus test systems validate the effectiveness of the proposed approach. In the 33-bus system, energy loss is reduced by 78.47 % and in the 69-bus system, energy loss is reduced by 92.09 %. 
These results highlight MGO’s potential as a robust and effective solution for RDGs allocation in DNs, outperforming existing optimization techniques.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101234"},"PeriodicalIF":5.7,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145362780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
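The loss reductions reported above come from siting and sizing DG units so that less load current flows through feeder branches. The sketch below illustrates only that objective on a toy radial feeder; it is not the MGO algorithm, a plain random search stands in for the metaheuristic, and the five-bus feeder, loads, and resistance are invented for illustration:

```python
import random

# Toy radial feeder: 5 buses in a line, each drawing 1.0 p.u. of load,
# supplied from bus 0; identical branch resistance R on every segment.
N_BUS, LOAD, R = 5, 1.0, 0.01

def feeder_loss(dg_bus, dg_size):
    """Sum of I^2*R losses per segment when one DG of dg_size p.u. is
    connected at dg_bus (dg_bus = 0 means no DG downstream of anything)."""
    loss = 0.0
    for seg in range(1, N_BUS):        # segment `seg` feeds buses seg..N-1
        flow = LOAD * (N_BUS - seg)    # downstream load served
        if dg_bus >= seg:              # DG injects downstream of this segment
            flow -= dg_size
        loss += R * flow ** 2
    return loss

def random_search(trials=2000, seed=1):
    """Pick the (bus, size) pair with the lowest loss among random trials."""
    random.seed(seed)
    best = (float("inf"), None, None)
    for _ in range(trials):
        bus = random.randrange(1, N_BUS)
        size = random.uniform(0.0, LOAD * N_BUS)
        loss = feeder_loss(bus, size)
        if loss < best[0]:
            best = (loss, bus, size)
    return best

base = feeder_loss(0, 0.0)             # no-DG baseline
loss, bus, size = random_search()
print(f"base loss {base:.4f} -> best loss {loss:.4f} at bus {bus}, size {size:.2f}")
```

An algorithm like MGO replaces the blind sampling with guided exploration/exploitation, and a realistic study replaces `feeder_loss` with a full power-flow evaluation under wind/solar uncertainty.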
Cloud computing task scheduling is not only the foundation for ensuring the efficient operation of cloud platforms but also an important means of improving service quality and reducing costs. As cloud computing technology continues to develop, the demands on intelligent and automated task scheduling keep increasing. To address the need for more efficient and flexible computation, an enhanced honey badger algorithm (HBA) utilizing two-dimensional and three-dimensional fractals is introduced. The digging phase of the honey badger's foraging strategy is improved using the mathematical expressions of two- and three-dimensional fractals in rectangular and polar coordinates, which enhances the algorithm's performance while speeding up its convergence. The best-performing variant, HBACBKS-Z, was selected through verification on benchmark functions. The task-scheduling optimization problem in cloud computing systems is divided into large-scale and small-scale task scheduling, and experiments were conducted in both cases using HBACBKS-Z and other traditional swarm intelligence optimization algorithms. The results show that HBACBKS-Z has significant advantages in terms of total cost, time cost, load cost, and price cost, and can effectively solve the task-scheduling optimization problem of cloud computing systems of various sizes.
{"title":"Task scheduling in cloud computing system by improved honey badger optimization algorithm with two dimensional and three dimensional fractals","authors":"Yu-Feng Sun, Si-Wen Zhang, Jie-Sheng Wang, Shi-Hui Zhang, Yu-Cai Wang, Xiao-Fei Sui","doi":"10.1016/j.suscom.2025.101201","DOIUrl":"10.1016/j.suscom.2025.101201","url":null,"abstract":"<div><div>Cloud computing task scheduling is not only the foundation for ensuring the efficient operation of the cloud platform, but also an important means of improving service quality and reducing costs. With the continuous development of cloud computing technology, the requirements for intelligent and automated task scheduling are also increasing. To address the demand for more efficient and flexible computations, an enhanced honey badger algorithm (HBA) utilizing two dimensional and three dimensional fractals is introduced. The digging phase of the honey badger's foraging strategy is improved by using the mathematical expressions of two dimensional and three dimensional fractals in rectangular and polar coordinates, which enhances the algorithm's performance while speeding up its convergence. The optimal solution HBACBKS-Z was selected by verification on the benchmark functions. The optimization problem of task scheduling in cloud computing systems is divided into large-scale task scheduling and small-scale task scheduling. Experiments were conducted in these two cases by using HBACBKS-Z and other traditional swarm intelligence optimization algorithms. 
It has been proved that HBACBKS-Z has significant advantages in terms of total cost, time cost, load cost and price cost, and can effectively solve the task scheduling optimization problem of cloud computing systems of various sizes.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"48 ","pages":"Article 101201"},"PeriodicalIF":5.7,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145004812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
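The cost criteria named above (total, time, load, and price cost) can be made concrete with a hypothetical task-to-VM mapping problem. Everything below is invented for illustration: the weights, task lengths, VM speeds, and prices are not from the paper, and a plain hill climber stands in for the honey badger metaheuristic:

```python
import random

# Illustrative workload: task lengths (MI), VM speeds (MIPS), price per MI.
TASKS = [400, 250, 600, 150, 350, 500]
SPEED = [100, 200, 150]
PRICE = [0.02, 0.05, 0.03]

def cost(assign, w_time=1.0, w_load=1.0, w_price=1.0):
    """Weighted total cost of mapping task i -> VM assign[i]:
    makespan (time), load imbalance (load), and execution price."""
    busy = [0.0] * len(SPEED)
    price = 0.0
    for t, vm in zip(TASKS, assign):
        busy[vm] += t / SPEED[vm]       # execution time contributed to that VM
        price += t * PRICE[vm]
    makespan = max(busy)
    imbalance = makespan - min(busy)    # crude load-balance penalty
    return w_time * makespan + w_load * imbalance + w_price * price

def hill_climb(steps=500, seed=0):
    """Greedy single-task reassignment; a swarm method explores instead
    with a population of candidate mappings."""
    random.seed(seed)
    assign = [random.randrange(len(SPEED)) for _ in TASKS]
    best = cost(assign)
    for _ in range(steps):
        i = random.randrange(len(TASKS))
        old = assign[i]
        assign[i] = random.randrange(len(SPEED))
        c = cost(assign)
        if c <= best:
            best = c                    # keep the move
        else:
            assign[i] = old             # revert the move
    return assign, best

assign, best = hill_climb()
print("mapping:", assign, "cost: %.2f" % best)
```

The point of a metaheuristic such as HBA is to escape the local optima this greedy loop can get stuck in while optimizing the same kind of multi-term cost.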