The revolutionary influence of the Internet of Things (IoT) paradigm has greatly enhanced the service-delivery aspects of electricity consumption, enabling smart energy distribution and trustworthy electric appliances. This research presents a novel IoT-based technique for monitoring electricity usage in smart homes. Poor electricity distribution, together with inefficient allocation of power resources, has greatly impacted daily life. The research assesses the spatial–temporal efficiency with which power grid operators distribute electrical energy resources, computing a spatial–temporal utilization measure for each residence in a geographical region. To help power grid managers optimize the spatial–temporal allocation of energy resources, a two-level threshold-based decision-tree model is also presented. For performance assessment, four smart homes were tracked for two months in a simulated environment. Statistical results for Delay (119.61 s), Reliability (82.23%), Stability (71.12%), Classification Effectiveness (Precision 95.56%, Sensitivity 95.96%, Specificity 95.25%), and Decision-making Efficiency (92.21%) show that the presented approach significantly outperforms state-of-the-art data analysis techniques.
{"title":"Bi-level decision tree-based smart electricity analysis framework for sustainable city","authors":"Tariq Ahamed Ahanger , Munish Bhatia , Abdullah Albanyan , Abdulrahman Alabduljabbar","doi":"10.1016/j.suscom.2024.101069","DOIUrl":"10.1016/j.suscom.2024.101069","url":null,"abstract":"<div><div>The revolutionizing influence of the Internet of Things (IoT) paradigm has greatly enhanced the service-delivery aspects of electricity consumption, allowing for smart energy distribution and trustworthy electric appliances. The current research presents a novel technique for detecting electricity usage in smart homes using IoT technology. Poor electricity distribution has greatly impacted daily life along with inefficient power resource allocation. This research assesses the spatial–temporal efficiency with which power grid operators distribute electrical energy resources. The efficient distribution of energy resources is achieved by calculating the spatial–temporal utilization measure for each residence of a geographical region. Also, to help power grid managers optimize the spatial–temporal allocation of energy resources, a two-level threshold-based decision-tree model is presented. For performance assessment, four smart homes are tracked for 2 months in a simulated environment. Statistical results acquired for Delay (119.61s), Reliability (82.23%), Stability (71.12%), Classification Effectiveness (Precision (95.56%), Sensitivity (95.96%), and Specificity (95.25%)), and Decision-making Efficiency (92.21%) show that the presented approach significantly outperforms state-of-the-art data analysis techniques.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101069"},"PeriodicalIF":3.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143135637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Amir Masoud Rahmani, Amir Haider, Parisa Khoshvaght, Farhad Soleimanian Gharehchopogh, Komeil Moghaddasi, Shakiba Rajabi, Mehdi Hosseinzadeh
The Internet of Things (IoT) significantly impacts various industries, enabling better connectivity and real-time data exchange for applications ranging from smart cities to healthcare. As IoT networks grow more complex, integrating cloud, fog, and edge computing becomes essential for managing the increased data and processing demands. Cloud computing provides extensive storage and powerful computation but can suffer delays due to the distance data must travel. Fog computing reduces these delays by processing data closer to its source, while edge computing reduces them further by processing data directly on IoT devices. Effective management of these computing layers requires strategic task offloading: moving tasks to the most appropriate layer to balance latency, energy consumption, and operational efficiency. Several strategies have been developed to optimize network communication and task offloading, with metaheuristic algorithms emerging as particularly promising. Inspired by natural processes, these algorithms are adept at searching complex solution spaces for near-optimal, dynamic offloading decisions. This review provides a detailed analysis of how metaheuristic algorithms optimize task offloading, evaluating their effectiveness in improving system performance, managing resources, and reducing costs. It also identifies current challenges in the area and suggests future research directions.
{"title":"Optimizing task offloading with metaheuristic algorithms across cloud, fog, and edge computing networks: A comprehensive survey and state-of-the-art schemes","authors":"Amir Masoud Rahmani , Amir Haider , Parisa Khoshvaght , Farhad Soleimanian Gharehchopogh , Komeil Moghaddasi , Shakiba Rajabi , Mehdi Hosseinzadeh","doi":"10.1016/j.suscom.2024.101080","DOIUrl":"10.1016/j.suscom.2024.101080","url":null,"abstract":"<div><div>The Internet of Things (IoT) significantly impacts various industries, enabling better connectivity and real-time data exchange for applications ranging from smart cities to healthcare. Integrating cloud, fog, and edge computing is essential for managing increased data and processing needs as IoT networks become complex. Cloud computing provides extensive storage and powerful computing capabilities but can experience delays due to the distance data must travel. Fog computing addresses these delays by processing data closer to its source, while edge computing reduces them even further by processing data directly on IoT devices. Effective management of these computing layers requires strategic task offloading, which involves moving tasks to the most appropriate computing layer to balance latency, energy consumption, and operational efficiency. Several strategies have been developed to optimize network communication and task offloading, with metaheuristic algorithms emerging as promising approaches. Inspired by natural processes, these algorithms are skilled at searching complex spaces to find near-optimal solutions for efficient and dynamic task offloading. This review provides a detailed analysis of how metaheuristic algorithms optimize task offloading. It evaluates their effectiveness in improving system performance, managing resources, and reducing costs. The review also identifies the current challenges in this area and suggests future research directions to advance this field.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101080"},"PeriodicalIF":3.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Di Yuan, Yue Wang
This paper addresses the challenges of resource allocation and inventory management in supply chain systems by constructing an intelligent supply chain optimization model based on Deep Q-Networks (ISCO-DQ), with an emphasis on eco-efficiency. The study first builds a supply chain model that incorporates supplier-customer relationships, guided by green-computing principles to minimize environmental impact. The model applies Markov Decision Processes to frame sustainable supplier inventory control, focusing on reducing waste and optimizing resource usage. Leveraging the function-approximation capabilities of Deep Q-Networks, the model achieves intelligent resource allocation while prioritizing energy-efficient practices in inventory management. Experimental results indicate that, under normally distributed customer demand, the ISCO-DQ inventory control model converges to approximately −41,400 and −181,300 after around 100 and 300 cycles, respectively. Compared to traditional single-period stochastic and fixed-order-quantity inventory control models, the total cost of the ISCO-DQ model is reduced by an average of 6.7% and 16%, respectively, while carbon emissions associated with overproduction and excess inventory are minimized. The model also mitigates costs arising from demand uncertainty by adapting quickly to fluctuations and optimizing inventory strategies, thereby fostering a circular economy. These results demonstrate that the ISCO-DQ model effectively addresses the inefficiency, inflexibility, and suboptimal resource allocation of conventional supply chain management, ultimately promoting sustainable development and environmental stewardship for enterprises.
{"title":"Sustainable supply chain management: A green computing approach using deep Q-networks","authors":"Di Yuan, Yue Wang","doi":"10.1016/j.suscom.2024.101063","DOIUrl":"10.1016/j.suscom.2024.101063","url":null,"abstract":"<div><div>This paper addresses the challenges of resource allocation and inventory management in supply chain systems by constructing an intelligent supply chain optimization model based on Deep Q-Networks (ISCO-DQ), emphasizing eco-efficiency. Initially, the study builds a supply chain model that incorporates supplier-customer relationships, guided by the principles of green computing to minimize environmental impact. The model applies Markov Decision Processes to develop a framework for sustainable supplier inventory control, focusing on reducing waste and optimizing resource usage. Utilizing the function approximation capabilities of Deep Q-Networks, the model not only achieves intelligent resource allocation but also prioritizes energy-efficient practices in inventory management. Experimental results indicate that the ISCO-DQ inventory control model converges to approximately −41,400 and −181,300 after around 100 and 300 cycles, respectively, under customer demand distributions that follow normal distributions. Furthermore, compared to traditional single-period stochastic and fixed-order quantity inventory control models, the total cost of the ISCO-DQ model is reduced by an average of 6.7 % and 16 %, respectively, while minimizing carbon emissions associated with overproduction and excess inventory. Additionally, the ISCO-DQ model significantly mitigates costs arising from demand uncertainty by quickly adapting to fluctuations and optimizing inventory strategies, thereby fostering a circular economy. This demonstrates that the ISCO-DQ inventory control model effectively addresses inefficiencies, inflexibility, and suboptimal resource allocation in conventional supply chain management, ultimately promoting sustainable development and environmental stewardship for enterprises.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101063"},"PeriodicalIF":3.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143135636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yang Wang, Pai Pang, Buyang Qi, Xianan Wang, Zhenghui Zhao
This paper addresses the challenges posed by reduced power system inertia under large-scale renewable energy integration and the threats from increasingly frequent extreme weather events. It proposes a strategy to enhance power system resilience by incorporating inertia participation during such events. The strategy derives critical inertia demand formulas based on two key factors under extreme weather, establishing a linearized inertia assessment model. Considering the vulnerability of power lines to extreme weather, we also propose the Resilience Reserve Factor (RRF), which employs three resilience evaluation indexes to delineate the system's demand for inertia supply, efficiently targeting vulnerable areas for inertia reinforcement and thereby comprehensively enhancing grid resilience. Finally, based on the critical inertia demand constraint, we establish a two-stage pre-scheduling strategy that combines day-ahead planning with real-time correction while accounting for assessment accuracy. This approach transforms the inertia assessment problem into a resilience optimization problem, yielding, after iteration, the scheduling status of each generator unit and the inertia replenishment results during extreme weather. The strategy is validated through simulations on a modified IEEE 39-bus system, and a frequency response model is employed to investigate the spatial distribution characteristics of inertia. The results indicate that the optimization strategy enables efficient scheduling of resources before and after extreme weather events; in addition to improving the economic performance of the power system, it significantly enhances resilience by reinforcing both global and localized support during critical disaster-resistance phases.
{"title":"A two-stage optimal pre-scheduling strategy for power system inertia assessment and replenishment under extreme weather events","authors":"Yang Wang , Pai Pang , Buyang Qi , Xianan Wang , Zhenghui Zhao","doi":"10.1016/j.suscom.2024.101079","DOIUrl":"10.1016/j.suscom.2024.101079","url":null,"abstract":"<div><div>This paper addresses the challenges posed by reduced power system inertia due to the large-scale renewable energy integration and the threats from frequent extreme weather events. It proposes a strategy to enhance power system resilience by incorporating inertia participation during such events. The strategy derives critical inertia demand formulas based on two key factors under extreme weather, establishing a linearized inertia assessment model. Additionally, considering the vulnerability of power lines to extreme weather events, we propose the Resilience Reserve Factor (RRF). It employs three resilience evaluation indexes to delineate the system's demand for inertia supply, efficiently targeting vulnerable areas for inertia reinforcement, thereby comprehensively enhancing the resilience of the power grid. Lastly, based on the critical inertia demand constraint criterion, we establish a two-stage pre-scheduling strategy incorporating both day-ahead planning and real-time correction while considering assessment accuracy. This approach transforms the inertia assessment problem into a resilience optimization problem, yielding the scheduling status of each generator unit and inertia replenishment results during extreme weather after iteration. The optimized strategy is validated through simulations on the improved IEEE39 buses system. Furthermore, this study employs a frequency response model to investigate the spatial distribution characteristics of inertia. The results indicate that this optimization strategy enables efficient scheduling of resources before and after extreme weather events. In addition to improving the economic performance of the power system, it significantly enhances system resilience by reinforcing both global and localized support during critical disaster resistance phases.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101079"},"PeriodicalIF":3.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143135784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fog computing is a distributed computing paradigm that has become essential for Internet of Things (IoT) applications because it can meet their low-latency requirements. However, fog servers can become overburdened when many IoT applications must run on their resources, potentially degrading responsiveness. Real-world challenges such as load instability, makespan, and underutilization of virtual machine (VM) devices have driven an exponential increase in demand for effective task scheduling in IoT-based fog and cloud computing environments, so scheduling IoT applications effectively and flexibly in heterogeneous fog computing systems is crucial. The limited processing resources of fog servers make ideal but computationally costly procedures harder to apply. To address these difficulties, we propose an Arithmetic Optimization Algorithm (AOA) for task scheduling combined with a Markov chain that forecasts the load of VMs serving as fog- and cloud-layer resources. This approach establishes a load-balanced framework that reduces energy usage and delay. Simulation results indicate that the proposed method improves the average makespan, delay, and Performance Improvement Rate (PIR) by 8.29%, 11.72%, and 4.66%, respectively, compared with the crow, firefly, and grey wolf algorithms (GWA).
{"title":"Hybrid Markov chain-based dynamic scheduling to improve load balancing performance in fog-cloud environment","authors":"Navid Khaledian , Shiva Razzaghzadeh , Zeynab Haghbayan , Marcus Völp","doi":"10.1016/j.suscom.2024.101077","DOIUrl":"10.1016/j.suscom.2024.101077","url":null,"abstract":"<div><div>Fog computing is a distributed computing paradigm that has become essential for driving Internet of Things (IoT) applications due to its ability to meet the low latency requirements of increasing IoT applications. However, fog servers can become overburdened as many IoT applications need to run on these resources, potentially leading to decreased responsiveness. Additionally, the need to handle real-world challenges such as load instability, makespan, and underutilization of virtual machine (VM) devices has driven an exponential increase in demand for effective task scheduling in IoT-based fog and cloud computing environments. Therefore, scheduling IoT applications in heterogeneous fog computing systems effectively and flexibly is crucial. The limited processing resources of fog servers make the application of ideal but computationally costly procedures more challenging. To address these difficulties, we propose using an Arithmetic Optimization Algorithm (AOA) for task scheduling and a Markov chain to forecast the load of VMs as fog and cloud layer resources. This approach aims to establish an environmentally load-balanced framework that reduces energy usage and delay. The simulation results indicate that the proposed method can improve the average makespan, delay, and Performance Improvement Rate (PIR) by 8.29 %, 11.72 %, and 4.66 %, respectively, compared to the crow, firefly, and grey wolf algorithms (GWA).</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101077"},"PeriodicalIF":3.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143135638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Increased demand for power in distribution networks (DNs), driven by sectors such as industry, commerce, municipalities, residences, and irrigation, necessitates alternative solutions such as Distributed Generators (DGs), capacitors, and Network Reconfiguration (NR). Addressing this challenge involves optimizing the opening of tie-line switches and determining the optimal placement and capacity of capacitors and DGs, a complex optimization problem with both discrete and continuous variables. To tackle it, an Adaptive Quantum-inspired Evolutionary Algorithm (AQiEA), combining principles from quantum computing and evolutionary algorithms, is employed. This study emphasizes holistic benefits, aiming to maximize the economic gains of the distribution system from installing DGs, capacitors, and NR while minimizing power losses. Two cases are explored. The first analyzes system losses under load variations across seven scenarios, each run for twenty-five independent trials; the computed performance metrics reveal that simultaneous implementation of NR, DGs, and capacitors reduces power losses significantly more than independent implementations. The second case introduces the additional objective of maximizing economic benefits, considering factors such as DG and capacitor location and capacity, line losses, and operational, maintenance, and installation costs. The results tabulated in the paper demonstrate that operating DGs in parallel with capacitors and NR not only minimizes power losses but also maximizes distribution utilities' profits.
{"title":"Implementation of distributed energy resources along with Network Reconfiguration for cost-benefit analysis","authors":"G. Manikanta , Ashish Mani , Anjali Jain , Ramya Kuppusamy , Yuvaraja Teekaraman","doi":"10.1016/j.suscom.2024.101078","DOIUrl":"10.1016/j.suscom.2024.101078","url":null,"abstract":"<div><div>Increased demand of power in distribution networks (DN) driven by various sectors like industrial, commercial, municipal, residential, and irrigation necessitates alternative solutions such as Distributed Generators (DGs), capacitors, and Network Reconfiguration (NR). Addressing this challenge involves optimizing the opening of tie line switches and determining the optimal placement and capacity of capacitors and DGs, which poses a complex optimization problem involving both discrete and continuous variables. To tackle this, an Adaptive Quantum-inspired Evolutionary Algorithm (AQiEA), combining principles from Quantum computing and Evolutionary Algorithms, is employed. This study emphasizes holistic benefits, specifically aiming to maximize economic gains in the distribution system with the installation of DGs, capacitors, and NR along with minimization of power losses. In this paper, two cases are explored. In the first case, seven scenarios’ analyses system losses with load variations, each scenario running twenty-five independent iterations. Performance metrics has been computed to reveal that simultaneous implementation of NR, DGs, and capacitors significantly reduces power losses compared to independent implementations. The second case introduces an additional objective of maximizing economic benefits. This involves considering factors like DG and capacitor location, capacity, line losses, and various costs such as operational, maintenance, and installation costs. The results tabulated in paper demonstrate that operating DGs in parallel with capacitors and NR not only minimizes power losses but also maximizes distribution utilities' profits.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101078"},"PeriodicalIF":3.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Srinivasan, C. Sheeba Joice
The rising popularity of electric vehicles (EVs) stems from their enhanced performance and environmental benefits. A critical challenge lies in optimizing performance and extending battery life, which depends on accurate prediction of State of Charge (SOC) and State of Health (SOH). The Battery Management System (BMS) is essential to an EV's Energy Management System (EMS), yet current methodologies often fail to achieve the required precision, leading to suboptimal BMS designs that can compromise EV efficiency and reliability. To address these challenges, a merged SOC and SOH prediction approach is proposed. To maximize prediction accuracy, a hybrid Deep Learning (DL) model is tuned with bio-inspired optimization algorithms: Elephant Herding Optimization (EHO), Honey Badger Optimization (HBO), and Moth-Flame Optimization (MFO). The architecture comprises two Convolutional Neural Networks (CNNs) and an Autoencoder (AE), integrated with a Bidirectional Long Short-Term Memory (BLSTM) layer and a single Long Short-Term Memory (LSTM) layer for the encoding and decoding tasks. The three optimized hybrid DL models were validated on standard benchmark datasets, including the Oxford Battery Aging, NASA, and CALCE datasets. The merged SOC and SOH predictions were compared with traditional separate SOC prediction techniques and demonstrated superior performance; notably, the HBO-hybrid DL model achieved the highest R-squared (R²) values of 0.991 for SOC and 0.996 for SOH.
{"title":"Bio-inspired optimizer with deep learning model for energy management system in electric vehicles","authors":"C. Srinivasan , C. Sheeba Joice","doi":"10.1016/j.suscom.2025.101082","DOIUrl":"10.1016/j.suscom.2025.101082","url":null,"abstract":"<div><div>The rising popularity of electric vehicles (EVs) stems from their enhanced performance and environmental benefits. A critical challenge exists in optimizing the performance and extending the battery life of EVs, which depends on the accurate prediction of State of Charge (SOC) and State of Health (SOH). The Battery Management Systems (BMS) is essential for an EV’s Energy Management System (EMS). The current methodologies often fail to achieve the required precision, leading to suboptimal BMS that can compromise EV efficiency and reliability. To address these challenges, a merged SOC and SOH prediction approach is proposed. To maximize prediction accuracy, a hybrid Deep Learning (DL) model incorporating bio-inspired optimization algorithms such as Elephant Herding Optimization (EHO), Honey Badger Optimization (HBO), and Moth-Flame Optimization (MFO) is utilized. The architecture comprises two Convolutional Neural Networks (CNN) and an Autoencoder (AE), integrated with a Bidirectional Long Short-Term Memory (BLSTM) layer and a single Long Short-Term Memory (LSTM) layer for encoding and decoding tasks. The three optimized hybrid DL models were validated using standard benchmark datasets such as the Oxford Battery Aging Dataset, NASA, and CALCE. The prediction results of the merged SOC and SOH prediction from the three bio-inspired hybrid DL models were compared with those of the separate SOC prediction technique. The results of the merged SOC and SOH predictions were compared with traditional separate SOC prediction techniques, demonstrating superior performance. Notably, the HBO-Hybrid DL model achieved the highest R-squared (R2) values of 0.991 for SOC and 0.996 for SOH</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101082"},"PeriodicalIF":3.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143135782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Security and energy consumption are key issues in Wireless Sensor Networks (WSNs), which play an important role among networking technologies in handling edge devices on heterogeneous edge computing platforms. For faster processing of sensor nodes in an Industrial Internet of Everything (IIOE), efficient computing techniques for this emerging networking technology are being explored. This study therefore provides a chaotic mud ring-based elliptic curve cryptography (CMR_ECC) encryption solution for WSN security. In the proposed WSN environment, sensor nodes are deployed to collect data and, to enhance network lifetime, are grouped into clusters, with cluster heads selected by a fuzzy logic-based osprey algorithm (FL_OA). After encryption, the optimal keys are selected with the hybrid chaotic mud ring algorithm, and the encrypted data are optimally routed to the various edge servers with a hybrid Chebyshev Gannet Optimization (CGO) approach. Data aggregation is performed with a Q-reinforcement learning approach. The proposed work is implemented in MATLAB; for 500, 750, and 1000 WSN sensor nodes, the proposed technique yields energy consumption values of 0.28780005 mJ, 0.31141 mJ, and 0.339419 mJ, respectively.
{"title":"Secured energy optimization of wireless sensor nodes on edge computing platform using hybrid data aggregation scheme and Q-based reinforcement learning technique","authors":"Rupa Kesavan , Yaashuwanth Calpakkam , Prathibanandhi Kanagaraj , Vijayaraja Loganathan","doi":"10.1016/j.suscom.2024.101072","DOIUrl":"10.1016/j.suscom.2024.101072","url":null,"abstract":"<div><div>Wireless Sensor Network (WSN) security and energy consumption is a potential issue. WSN plays an important role in networking technologies to handle edge devices on a heterogeneous edge computing platform. For faster processing of sensor nodes on an Industrial Internet of Everything (IIOE), an efficient computing technique for an emerging networking technology is being explored. As a result, the proposed study provides a chaotic mud ring-based elliptic curve cryptographic (CMR_ECC)-based encryption solution for WSN security. In the proposed WSN environment, various sensor nodes are deployed to collect data. To enhance the network lifetime, the nodes are combined into clusters, and the selection of cluster heads is performed with a fuzzy logic-based osprey algorithm (FL_OA). After the encryption process, the most optimal key selection process is performed with a hybrid chaotic mud ring algorithm, and the encrypted data are optimally routed to varied edge servers with a hybrid Chebyshev Gannet Optimization (CGO) approach. The data aggregation is performed with a Q-reinforcement learning approach. The proposed work is implemented with MATLAB. For 500, 750, and 1000 WSN sensor nodes, the proposed technique resulted in energy consumption values of 0.28780005 mJ, 0.31141 mJ, and 0.339419 mJ, respectively.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101072"},"PeriodicalIF":3.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sneha Pokharkar, Mahesh D. Goudar, Vrushali Waghmare
Due to high maintenance costs and inaccessibility, regularly replacing batteries is a major difficulty for Wireless Sensor Nodes (WSNs) in remote locations. One option is harvesting energy from resources such as sun, wind, heat, and vibration; among these, solar energy harvesting is the best alternative because of its plentiful availability. The battery is charged by solar energy during the day, and when solar energy is unavailable, the system is powered by the charge stored in the battery. Hence, this paper proposes a highly efficient Solar Energy Harvesting (SEH) system using a Leadership Promoted Wild Horse Optimizer (LPWHO), a conceptual improvement of the standard Wild Horse Optimization (WHO) algorithm. The research focuses on overall harvesting efficiency, which in turn depends on Maximum Power Point Tracking (MPPT). MPPT is used because it extracts maximal power from the solar panels and reduces power loss, improving extraction efficiency when the panel's operating voltage drifts away from its maximum power point. Finally, the superiority of the presented approach is demonstrated with respect to varied measures.
{"title":"An MPPT integrated DC-DC boost converter for solar energy harvester using LPWHO approach","authors":"Sneha Pokharkar , Mahesh D. Goudar , Vrushali Waghmare","doi":"10.1016/j.suscom.2024.101076","DOIUrl":"10.1016/j.suscom.2024.101076","url":null,"abstract":"<div><div>Due to high maintenance costs and inaccessibility, replacing batteries regularly is a major difficulty for Wireless Sensor Nodes (WSNs) in remote locations. Harvesting energy from multiple resources like sun, wind, thermal, and vibration is one option. Because of its plentiful availability, solar energy harvesting is the finest alternative among them. The battery gets charged during the day by solar energy, and while solar energy is unavailable, the system is powered by the charge stored in the battery. Hence, in this paper, a highly efficient Solar Energy Harvesting (SEH) system is proposed using Leadership Promoted Wild Horse Optimizer (LPWHO). LPWHO refers to the conceptual improvement of the standard Wild Horse optimization (WHO) algorithm. This research is going to focus on overall harvesting efficiency which further depends on MPPT. MPPT is used as it extracts maximal power from the solar panels and reduces power loss. The usage of MPPT enhances the extracted power’s efficiency out of the solar panel when its voltages are out of sync. At last, the supremacy of the presented approach is proved with respect to varied measures.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101076"},"PeriodicalIF":3.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud services have become indispensable in critical sectors such as healthcare, drones, digital twins, and autonomous vehicles, providing essential infrastructure for data processing and real-time analytics. These systems operate across multiple layers, including edge, fog, and cloud, and require efficient resource management to ensure reliability and energy efficiency. However, increasing computational demands have led to rising energy consumption and frequent faults in cloud data centers, and inefficient task scheduling exacerbates these issues, causing resource overutilization, execution delays, and redundant processing. Current approaches struggle to optimize energy consumption, execution time, and fault tolerance simultaneously; while some methods offer partial solutions, they suffer from high computational complexity and fail to balance workloads or manage redundancy effectively. A comprehensive task-scheduling solution is therefore needed for mission-critical applications. In this article, we introduce a novel scheduling algorithm based on Mixed Integer Linear Programming (MILP) that optimizes task allocation across edge, fog, and cloud environments. Our solution reduces energy consumption, execution time, and failure rates while ensuring balanced distribution of computational load across virtual machines. It also incorporates a fault tolerance mechanism that reduces the overlap between primary and backup tasks by distributing them across multiple availability zones. A custom-designed heuristic further enhances the scheduler's efficiency, ensuring scalability and practical applicability. The proposed MILP-based scheduler demonstrates significant average improvements over the best state-of-the-art algorithms evaluated: a 9.63% increase in task throughput, 18.20% lower energy consumption, 9.35% shorter execution times, and 11.50% lower failure probabilities across all layers of the distributed cloud system. These results highlight the scheduler's effectiveness in addressing key challenges in energy-efficient, reliable cloud computing for mission-critical applications.
{"title":"Improving energy efficiency and fault tolerance of mission-critical cloud task scheduling: A mixed-integer linear programming approach","authors":"Mohammadreza Saberikia , Hamed Farbeh , Mahdi Fazeli","doi":"10.1016/j.suscom.2024.101068","DOIUrl":"10.1016/j.suscom.2024.101068","url":null,"abstract":"<div><div>Cloud services have become indispensable in critical sectors such as healthcare, drones, digital twins, and autonomous vehicles, providing essential infrastructure for data processing and real-time analytics. These systems operate across multiple layers, including edge, fog, and cloud, requiring efficient resource management to ensure reliability and energy efficiency. However, increasing computational demands have led to rising energy consumption and frequent faults in cloud data centers. Inefficient task scheduling exacerbates these issues, causing resource overutilization, execution delays, and redundant processing. Current approaches struggle to optimize energy consumption, execution time, and fault tolerance simultaneously. While some methods offer partial solutions, they suffer from high computational complexity and fail to effectively balance the workloads or manage redundancy. Therefore, a comprehensive task scheduling solution is needed for mission-critical applications. In this article, we introduce a novel scheduling algorithm based on Mixed Integer Linear Programming (MILP) that optimizes task allocation across edge, fog, and cloud environments. Our solution reduces energy consumption, execution time, and failure rates while ensuring balanced distribution of computational loads across virtual machines. Additionally, it incorporates a fault tolerance mechanism that reduces the overlap between primary and backup tasks by distributing them across multiple availability zones. The scheduler’s efficiency is further enhanced by a custom-designed heuristic, ensuring scalability and practical applicability. The proposed MILP-based scheduler demonstrates significant average improvements over the best state-of-the-art algorithms evaluated. It achieves a 9.63% increase in task throughput, reduces energy consumption by 18.20%, shortens execution times by 9.35%, and lowers failure probabilities by 11.50% across all layers of the distributed cloud system. These results highlight the scheduler’s effectiveness in addressing key challenges in energy-efficient and reliable cloud computing for mission-critical applications.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101068"},"PeriodicalIF":3.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}