Pub Date: 2025-01-01 | DOI: 10.1016/j.suscom.2024.101051
Ankica Barišić, Jácome Cunha, Ivan Ruchkin, Ana Moreira, João Araújo, Moharram Challenger, Dušan Savić, Vasco Amaral
Supporting sustainability through modelling and analysis has become an active area of research in Software Engineering. Therefore, it is important and timely to survey the current state of the art in sustainability in Cyber-Physical Systems (CPS), one of the most rapidly evolving classes of complex software systems. This work presents the findings of a Systematic Mapping Study (SMS) that aims to identify key primary studies reporting on CPS modelling approaches that address sustainability over the last 10 years. Our literature search retrieved 2209 papers, of which 104 primary studies were deemed relevant for a detailed characterisation. These studies were analysed based on nine research questions designed to extract information on sustainability attributes, methods, models/meta-models, metrics, processes, and tools used to improve the sustainability of CPS. These questions also aimed to gather data on domain-specific modelling approaches and relevant application domains. The final results report findings for each of our questions, highlight interesting correlations among them, and identify literature gaps worth investigating in the near future.
{"title":"Modelling sustainability in cyber–physical systems: A systematic mapping study","authors":"Ankica Barišić , Jácome Cunha , Ivan Ruchkin , Ana Moreira , João Araújo , Moharram Challenger , Dušan Savić , Vasco Amaral","doi":"10.1016/j.suscom.2024.101051","DOIUrl":"10.1016/j.suscom.2024.101051","url":null,"abstract":"<div><div>Supporting sustainability through modelling and analysis has become an active area of research in Software Engineering. Therefore, it is important and timely to survey the current state of the art in sustainability in Cyber-Physical Systems (CPS), one of the most rapidly evolving classes of complex software systems. This work presents the findings of a Systematic Mapping Study (SMS) that aims to identify key primary studies reporting on CPS modelling approaches that address sustainability <em>over the last 10 years</em>. Our literature search retrieved 2209 papers, of which 104 primary studies were deemed relevant for a detailed characterisation. These studies were analysed based on nine research questions designed to extract information on sustainability attributes, methods, models/meta-models, metrics, processes, and tools used to improve the sustainability of CPS. These questions also aimed to gather data on domain-specific modelling approaches and relevant application domains. The final results report findings for each of our questions, highlight interesting correlations among them, and identify literature gaps worth investigating in the near future.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101051"},"PeriodicalIF":3.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143135635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-27 | DOI: 10.1016/j.suscom.2024.101075
Kruti Sutariya, C. Menaka, Mohammad Shahid, Sneha Kashyap, Deeksha Choudhary, Sumitra Padmanabhan
The agricultural industry is critical to guaranteeing food security and sustainability, and technological improvements have created new opportunities for enhancing farming operations. Nano-grids, or small-scale decentralized energy systems, are a viable response to agriculture's energy challenges. This study investigates the integration of AI technologies into cloud computing frameworks to empower agricultural nano-grids. We propose the Dragon Fruit Fly Optimization algorithm (D-FF) for energy management in nano-grid operations with sustainable farming technology. The efficacy of the proposed approach is evaluated using simulations and real-world situations in agricultural environments. The results show that the nano-grid supports agricultural activities and improves Accuracy (96 %), F1-Score (93 %), Precision (91 %), and Recall (92 %), with less wasted energy and lower operating expenses. By developing smart agriculture techniques, the results enable more dependable and effective energy management in the agricultural sector.
{"title":"Leveraging AI in cloud computing to enhance nano grid operations and performance in agriculture","authors":"Kruti Sutariya , C. Menaka , Mohammad Shahid , Sneha Kashyap , Deeksha Choudhary , Sumitra Padmanabhan","doi":"10.1016/j.suscom.2024.101075","DOIUrl":"10.1016/j.suscom.2024.101075","url":null,"abstract":"<div><div>The agricultural industry is critical to guaranteeing food security and sustainability, yet technological improvements have created new opportunities for enhancing farming operations. Nano-grids, or small-scale decentralized energy systems, are a viable response to agriculture's energy challenges.This study aims to investigate the integration of AI technologies into cloud computing frameworks to empower agricultural nano-grids. We propose Dragon Fruit Fly Optimization algorithms (D-FF) for energy management in Nano-grids operations with sustainable farming technology.The proposed approach's efficacy is evaluated using simulations and real-world situations in agricultural environments.The results show that the nano-grid supports agricultural activities as well as improves Accuracy (96 %), F1-Score (93 %), Precision (91 %), and Recall (92 %) with less energy wasted along with lower operating expenses.By developing smart agriculture techniques, more dependable and effective energy management in the agricultural sector is made possible by the results.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"46 ","pages":"Article 101075"},"PeriodicalIF":3.8,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143172631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-23 | DOI: 10.1016/j.suscom.2024.101054
Aml G. AbdElkader, Hanaa ZainEldin, Mahmoud M. Saafan
Wind energy is a crucial renewable resource that supports sustainable development and reduces carbon emissions. However, accurate wind power forecasting is challenging due to the inherent variability in wind patterns. This paper addresses these challenges by developing and evaluating several machine learning (ML) and deep learning (DL) models to enhance wind power forecasting accuracy. Traditional ML models, including Random Forest, k-nearest Neighbors, Ridge Regression, LASSO, Support Vector Regression, and Elastic Net, are compared with advanced DL models, such as Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Stacked LSTM, Graph Convolutional Networks (GCN), Temporal Convolutional Networks (TCN), and the Informer network, which is well-suited for long-sequence forecasting and large, sparse datasets. Recognizing the complexities of wind power forecasting, such as the need for high-resolution meteorological data and the limitations of ML models like overfitting and computational complexity, a novel hybrid approach is proposed. This approach uses hybrid RNN-LSTM models optimized through grid search cross-validation (GS-CV). The models were trained and validated on a SCADA dataset from a Turkish wind farm, comprising 50,530 instances. Data preprocessing included cleaning, encoding, and normalization, with 70 % of the dataset allocated for training and 30 % for validation. Model performance was evaluated using key metrics such as R², MSE, MAE, RMSE, and MedAE. The proposed hybrid RNN-LSTM models achieved outstanding results, with the RNN-LSTM model attaining an R² of 99.99 %, significantly outperforming other models. These results demonstrate the effectiveness of the hybrid approach and the Informer network in improving wind power forecasting accuracy, contributing to grid stability, and facilitating the broader adoption of sustainable energy solutions. The proposed model also achieved superior or comparable performance when compared to state-of-the-art methods.
{"title":"Optimizing wind power forecasting with RNN-LSTM models through grid search cross-validation","authors":"Aml G. AbdElkader , Hanaa ZainEldin , Mahmoud M. Saafan","doi":"10.1016/j.suscom.2024.101054","DOIUrl":"10.1016/j.suscom.2024.101054","url":null,"abstract":"<div><div>Wind energy is a crucial renewable resource that supports sustainable development and reduces carbon emissions. However, accurate wind power forecasting is challenging due to the inherent variability in wind patterns. This paper addresses these challenges by developing and evaluating some machine learning (ML) and deep learning (DL) models to enhance wind power forecasting accuracy. Traditional ML models, including Random Forest, k-nearest Neighbors, Ridge Regression, LASSO, Support Vector Regression, and Elastic Net, are compared with advanced DL models, such as Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), Stacked LSTM, Graph Convolutional Networks (GCN), Temporal Convolutional Networks (TCN), and the Informer network, which is well-suited for long-sequence forecasting and large, sparse datasets. Recognizing the complexities of wind power forecasting, such as the need for high-resolution meteorological data and the limitations of ML models like overfitting and computational complexity, a novel hybrid approach is proposed. This approach uses hybrid RNN-LSTM models optimized through GS-CV. The models were trained and validated on a SCADA dataset from a Turkish wind farm, comprising 50,530 instances. Data preprocessing included cleaning, encoding, and normalization, with 70 % of the dataset allocated for training and 30 % for validation. Model performance was evaluated using key metrics such as R², MSE, MAE, RMSE, and MedAE. The proposed hybrid RNN-LSTM Models achieved outstanding results, with the RNN-LSTM model attaining an R² of 99.99 %, significantly outperforming other models. These results demonstrate the effectiveness of the hybrid approach and the Informer network in improving wind power forecasting accuracy, contributing to grid stability, and facilitating the broader adoption of sustainable energy solutions. The proposed model also achieved superior comparable performance when compared to state-of-the-art methods.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"45 ","pages":"Article 101054"},"PeriodicalIF":3.8,"publicationDate":"2024-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142745129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-20 | DOI: 10.1016/j.suscom.2024.101052
Namita K. Shinde, Vinod H. Patil
There are two main design issues in Wireless Sensor Network (WSN) routing: energy optimization and security provision. Due to the energy limitations of wireless sensor devices, the problem of high energy usage must be properly addressed to enhance network efficiency. Several research works have addressed the routing issue in WSN with security concerns and network lifetime enhancement. However, network overhead and routing traffic are obstacles still not tackled by the existing models. Hence, to enhance routing performance, a new cluster-based routing model is introduced in this work that includes two phases: Cluster Head (CH) selection and routing. In the first phase, a hybrid optimization model, the Tasmanian Integrated Coot Optimization Algorithm (TICOA), is proposed for selecting the optimal CH under constraints such as security, energy, trust, delay, and distance. Subsequently, the routing process takes place under trust and link-quality constraints, which ensures the enhancement of the network lifetime of the WSN. Finally, simulation results show the performance of the proposed cluster-based routing in terms of different performance measures. The conventional systems received lower trust ratings, specifically BOA=0.489, BSA=0.475, GA=0.493, TDO=0.418, COOT=0.439, TSGWO=0.427, and P-WWO=0.408, whereas the trust value of the TICOA technique is 0.683.
{"title":"Secured and energy efficient cluster based routing in WSN via hybrid optimization model, TICOA","authors":"Namita K. Shinde, Vinod H. Patil","doi":"10.1016/j.suscom.2024.101052","DOIUrl":"10.1016/j.suscom.2024.101052","url":null,"abstract":"<div><div>There are two main design issues in Wireless Sensor Network (WSN) routing including energy optimization and security provision. Due to the energy limitations of wireless sensor devices, the problem of high usage of energy must be properly addressed to enhance the network efficiency. Several research works have been addressed to solve the routing issue in WSN with security concerns and network life time enhancement. However, the network overhead and routing traffic are some of the obstacles still not tackled by the existing models. Hence, to enhance the routing performance, a new cluster-based routing model is introduced in this work that includes two phases like Cluster Head (CH) selection and Routing. In the first phase, the hybrid optimization model, Tasmanian Integrated Coot Optimization Algorithm (TICOA) is proposed for selecting the optimal CH under the consideration of constraints like security, Energy, Trust, Delay and Distance. Subsequently, the routing process takes place under the constraints of Trust and Link Quality that ensures the enhancement of the network lifetime of WSN. Finally, simulation results show the performance of the proposed work on cluster-based routing in terms of different performance measures. The conventional systems received lower trust ratings, specifically BOA=0.489, BSA=0.475, GA=0.493, TDO=0.418, COOT=0.439, TSGWO=0.427, and P-WWO=0.408, whereas the trust value of the TICOA technique is 0.683.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101052"},"PeriodicalIF":3.8,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-09 | DOI: 10.1016/j.suscom.2024.101053
P. Jagannadha Varma, Srinivasa Rao Bendi
With the rapid development of computing networks, cloud computing (CC) enables the deployment of large-scale applications and meets the increased rate of computational demands. Task scheduling is an essential process in CC: tasks must be effectually scheduled across the Virtual Machines (VMs) to increase resource usage and diminish the makespan. In this paper, a multi-objective optimization called Al-Biruni Earth Namib Beetle Optimization (BENBO), combined with a Bidirectional Long Short-Term Memory (Bi-LSTM) and named BENBO+Bi-LSTM, is developed for task scheduling. The user task is subjected to the multi-objective BENBO, in which parameters like makespan, computational cost, reliability, and predicted energy are used to schedule the task. Simultaneously, the user task is fed to Bi-LSTM-based task scheduling, in which VM parameters like average computation cost, Earliest Starting Time (EST), task priority, and Earliest Finishing Time (EFT), as well as task parameters like bandwidth and memory capacity, are utilized to schedule the task. The task scheduling outcomes from the multi-objective BENBO and the Bi-LSTM are then fused to obtain the final schedule with lower makespan and resource usage. The predicted energy, resource utilization, and makespan are considered to validate the BENBO+Bi-LSTM-based task scheduling, which offered optimal values of 0.669 J, 0.535, and 0.381, respectively.
{"title":"Multiobjective hybrid Al-Biruni Earth Namib Beetle Optimization and deep learning based task scheduling in cloud computing","authors":"P. Jagannadha Varma, Srinivasa Rao Bendi","doi":"10.1016/j.suscom.2024.101053","DOIUrl":"10.1016/j.suscom.2024.101053","url":null,"abstract":"<div><div>With the rapid development of computing networks, cloud computing (CC) enables the deployment of large-scale applications and meets the increased rate of computational demands. Moreover, task scheduling is an essential process in CC. The tasks must be effectually scheduled across the Virtual Machines (VMs) to increase resource usage and diminish the makespan. In this paper, the multi-objective optimization called Al-Biruni Earth Namib Beetle Optimization (BENBO) with the Bidirectional-Long Short-Term Memory (Bi-LSTM) named as BENBO+ Bi-LSTM is developed for Task scheduling. The user task is subjected to the multi-objective BENBO, in which parameters like makespan, computational cost, reliability, and predicted energy are used to schedule the task. Simultaneously, the user task is fed to Bi-LSTM-based task scheduling, in which the VM parameters like average computation cost, Earliest Starting Time (EST), task priority, and Earliest Finishing Time (EFT) as well as the task parameters like bandwidth and memory capacity are utilized to schedule the task. Moreover, the task scheduling outcomes from the multi-objective BENBO and Bi-LSTM are fused for obtaining the final scheduling with less makespan and resource usage. Moreover, the predicted energy, resource utilization and makespan are considered to validate the BENBO+ Bi-LSTM-based task scheduling, which offered the optimal values of 0.669 J, 0.535 and 0.381.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101053"},"PeriodicalIF":3.8,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-05 | DOI: 10.1016/j.suscom.2024.101049
Jose M. Badia, German Leon, Mario Garcia-Valderas, Jose A. Belloch, Almudena Lindoso, Luis Entrena
This study focuses on the low-power Tegra X1 System-on-Chip (SoC) from the Jetson Nano Developer Kit, which is increasingly used in various environments and tasks. As these SoCs grow in prevalence, it becomes crucial to analyse their computational performance, energy consumption, and reliability, especially for safety-critical applications. A key factor examined in this paper is the SoC’s neutron radiation tolerance. This is explored by subjecting a parallel version of matrix multiplication, which has been offloaded to various hardware components via OpenMP, to neutron irradiation. Through this approach, the study establishes a correlation between the SoC’s reliability and its computational and energy performance. The analysis enables the identification of an optimal workload distribution strategy, considering factors such as execution time, energy efficiency, and system reliability. Experimental results reveal that, while the GPU executes matrix multiplication tasks more rapidly and efficiently than the CPU, using both components only marginally reduces execution time. Interestingly, GPU usage significantly increases the SoC’s critical section, leading to an escalated error rate for both Detected Unrecoverable Errors (DUE) and Silent Data Corruptions (SDC), with the CPU showing a higher average number of affected elements per SDC.
{"title":"Analysing the radiation reliability, performance and energy consumption of low-power SoC through heterogeneous parallelism","authors":"Jose M. Badia , German Leon , Mario Garcia-Valderas , Jose A. Belloch , Almudena Lindoso , Luis Entrena","doi":"10.1016/j.suscom.2024.101049","DOIUrl":"10.1016/j.suscom.2024.101049","url":null,"abstract":"<div><div>This study focuses on the low-power Tegra X1 System-on-Chip (SoC) from the Jetson Nano Developer Kit, which is increasingly used in various environments and tasks. As these SoCs grow in prevalence, it becomes crucial to analyse their computational performance, energy consumption, and reliability, especially for safety-critical applications. A key factor examined in this paper is the SoC’s neutron radiation tolerance. This is explored by subjecting a parallel version of matrix multiplication, which has been offloaded to various hardware components via OpenMP, to neutron irradiation. Through this approach, this researcher establishes a correlation between the SoC’s reliability and its computational and energy performance. The analysis enables the identification of an optimal workload distribution strategy, considering factors such as execution time, energy efficiency, and system reliability. Experimental results reveal that, while the GPU executes matrix multiplication tasks more rapidly and efficiently than the CPU, using both components only marginally reduces execution time. Interestingly, GPU usage significantly increases the SoC’s critical section, leading to an escalated error rate for both Detected Unrecoverable Errors (DUE) and Silent Data Corruptions (SDC), with the CPU showing a higher average number of affected elements per SDC.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101049"},"PeriodicalIF":3.8,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-28 | DOI: 10.1016/j.suscom.2024.101048
A. Saravanaselvan, B. Paramasivan
Recently, security-based algorithms for energy-constrained sensor nodes have been developed to consume less energy for computation as well as communication. For mission-critical wireless sensor network (WSN) applications, continuous and secure data collection from WSN nodes is an essential task in the deployed field. Therefore, in this manuscript, a One-Time Pad Cryptographic Algorithm with Huffman Source Coding Based Energy Aware sensor node Design (EA-SND-OTPCA-HSC) is proposed. Before transmission, the distance between the transmitter and receiver is estimated to lessen the communication energy consumption of the sensor node in the mission-critical WSN. A periodic sleep/wake-up scheme with the Huffman source coding algorithm is used to save energy at the node level. Then, in the one-time pad cryptographic algorithm at each sensor node, the Vernam cipher encryption technique is applied to the compacted payload. The proposed technique is implemented, and its efficacy is assessed by analysing payload versus energy consumption for one sensor node, communication energy consumption for one sensor node at different distances, energy consumption for one sensor node under various methods, throughput, delay, and jitter. The proposed method provides 90.12 %, 89.78 %, and 91.78 % lower delay and 88.25 %, 95.34 %, and 94.12 % lower energy consumption compared to the existing EA-SND-Hyb-MG-CUF, EA-SND-PVEH, and EA-SND-PIA techniques, respectively.
{"title":"An one-time pad cryptographic algorithm with Huffman Source Coding based energy aware sensor node design","authors":"A. Saravanaselvan , B. Paramasivan","doi":"10.1016/j.suscom.2024.101048","DOIUrl":"10.1016/j.suscom.2024.101048","url":null,"abstract":"<div><div>Recently, the security-based algorithms for energy-constrained sensor nodes are being developed to consume less energy for computation as well as communication. For the mission critical wireless sensor network (WSN) applications, continuous and secure data collection from WSN nodes is an essential task on the deployed field. Therefore, in this manuscript, One-Time Pad Cryptographic Algorithm with Huffman Source Coding Based Energy Aware sensor node Design is proposed (EA-SND-OTPCA-HSC). Before transmission, the distance among transmitter and receiver is rated available in mission critical WSN for lessen communication energy consume of sensor node. For the mission critical WSN applications, continuous and secure data collection from WSN nodes is an essential task on the deployed field. The periodic sleep/wake up scheme with Huffman source coding algorithm is used to save energy at the node level. Then, one-time pad cryptographic algorithm in each sensor node, the vernam cipher encryption technique is applied to the compact payload. The proposed technique is executed and efficacy of proposed method is assessed using Payload Vs Energy consume for one sensor node, communication energy consume for one sensor node with different distances, energy consume for one sensor node under various methods, Throughput, delay and Jitter are analyzed. Then the proposed method provides 90.12 %, 89.78 % and 91.78 % lower delay and 88.25 %, 95.34 % and 94.12 % lesser energy consumption comparing to the existing EA-SND-Hyb-MG-CUF, EA-SND-PVEH and EA-SND-PIA techniques respectively.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101048"},"PeriodicalIF":3.8,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142662257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-28 | DOI: 10.1016/j.suscom.2024.101050
Vishakha Saurabh Shah, M.S. Ali, Saurabh A. Shah
Power quality (PQ) is one of the most important fields of energy study in the modern period. It is important to detect harmonics in the energy as well as any sharp voltage changes. When there are significant or rapid changes in the electrical load, i.e. load variations, this can lead to several issues affecting power quality, including voltage fluctuations, harmonic distortion, frequency variations, and transient disturbances. Estimating load variation is a difficult task. The main aim of this work is to design and develop an Improved Lion Optimization Algorithm (ILA) to tune a CNN classifier that estimates the type of load variation. Initially, time-series features are extracted from the input data so as to identify the type of load with enhanced accuracy. To estimate load variation, a Convolutional Neural Network (CNN) is used, and its weights are optimally modified using the proposed ILA. The proposed method was simulated in MATLAB, and the results of the ILA-CNN method are evaluated through error analysis based on indices such as MSRE, RMSE, MAPE, RMSRE, MARE, MAE, RMSPE, and MSE. The proposed work examines load variations ranging from 40×10⁶ Ω to 130×10⁶ Ω while considering different learning percentages of 50 %, 60 %, and 70 %. The results demonstrate that at a learning percentage of 50, the MAE of the proposed ILA-CNN method is 7.06 %, 62.98 %, 41.13 %, and 54.63 % better than the CNN, DF+CNN, PSO+CNN, and LA+CNN methods, respectively.
{"title":"An optimized deep learning model for estimating load variation type in power quality disturbances","authors":"Vishakha Saurabh Shah, M.S. Ali, Saurabh A. Shah","doi":"10.1016/j.suscom.2024.101050","DOIUrl":"10.1016/j.suscom.2024.101050","url":null,"abstract":"<div><div>Power quality is one of the most important fields of energy study in the modern period (PQ). It is important to detect harmonics in the energy as well as any sharp voltage changes. When there are significant or rapid changes in the electrical load, i.e. load variations, it can lead to several issues affecting power quality, including voltage fluctuations, harmonic distortion, frequency variations, and transient disturbances. Estimating load variation is a difficult task. The main aim of this work is to design and develop an Improved Lion Optimization algorithm to tune the CNN classifier. It involves the estimation of the type of load variation. Initially, the time series features are taken from the input data in such a way to find the type of load with enhanced accuracy. To estimate load variation, a Convolutional Neural Network (CNN) is used, and its weights are optimally modified using the Improved Lion Algorithm, a proposed optimization algorithm (ILA). The proposed method was simulated in MATLAB and the result of the ILA-CNN method is generated based on error analysis based on the indices, such as MSRE, RMSE, MAPE, RMSRE, MARE, MAE, RMSPE, and MSE. The proposed work examines load variations ranging from 40×10<sup>6</sup><span><math><mi>Ω</mi></math></span>to 130×10<sup>6</sup><span><math><mi>Ω</mi></math></span>while considering different learning rates of 50 %, 60 %, and 70 %. The result demonstrates that at learning percentage 50, the MAE of the proposed ILA-CNN method is 7.06 %, 62.98 %, 41.13 % and 54.63 % better than the CNN, DF+CNN, PSO+CNN and LA+CNN methods.</div></div>","PeriodicalId":48686,"journal":{"name":"Sustainable Computing-Informatics & Systems","volume":"44 ","pages":"Article 101050"},"PeriodicalIF":3.8,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142661866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-28 | DOI: 10.1016/j.suscom.2024.101047
Hossein Bitalebi, Farshad Safaei, Masoumeh Ebrahimi
The memory wall is known as one of the most critical bottlenecks in processors, rooted in the long memory access delay. With the advent of emerging memory-intensive applications such as image processing, the memory wall problem has become even more critical. Near data processing (NDP) has been introduced as a promising solution in which, instead of moving data from the main memory, instructions are offloaded to cores integrated at the main memory level. However, in NDP, the instructions to be offloaded are statically selected at compilation time, prior to run-time. In addition, NDP ignores the benefit of offloading instructions to the intermediate memory hierarchy levels. We propose Nearest Data Processing (NSDP), which introduces a hierarchical processing approach in GPU. In NSDP, each memory hierarchy level is equipped with processing cores capable of executing instructions. By analyzing the instruction status at run-time, NSDP dynamically decides whether an instruction should be offloaded to the next level of the memory hierarchy or be processed at the current level. Depending on the decision, either data is moved upward to the processing core or the instruction is moved downward to the data storage unit. With this approach, the data movement rate has been reduced, on average, by 47 % over the baseline. Consequently, NSDP improves the system performance, on average, by 37 % and reduces the power consumption, on average, by 18 %.
Title: Nearest data processing in GPU (Sustainable Computing: Informatics and Systems, vol. 44, Article 101047)
Pub Date: 2024-10-26 | DOI: 10.1016/j.suscom.2024.101046
Dillip Khamari, Rabindra Kumar Sahu, Sidhartha Panda, Yogendra Arya
The exceptional growth in the penetration of renewable sources, as well as the complex and variable operating conditions of load demand in power systems, may jeopardize their operation without an appropriate automatic generation control (AGC) methodology. Hence, an intelligent, resilient fractional order fuzzy PID (FOFPID) controlled AGC system is presented in this study. The parameters of the controller are tuned utilizing a modified moth swarm algorithm (mMSA) inspired by the movement of moths towards moonlight. At first, the effectiveness of the controller is verified on a nonlinear 5-area thermal power system. The simulation outcomes show that the suggested controller provides the best performance over recently published strategies. In the subsequent step, the methodology is extended to a 5-area system having 10 units of power generation, namely thermal, hydro, wind, diesel, and gas turbine, with 2 units in each area. It is observed that the mMSA-based FOFPID is more effective compared to other approaches. In order to establish the robustness of the controller, a sensitivity examination is executed. Then, experiments are conducted on OPAL-RT based real-time simulation to confirm the feasibility of the method. Finally, the mMSA-based FOFPID controller is observed to be superior to the recently published approaches for the standard 2-area thermal and IEEE 10-generator 39-bus systems.
Title: A mMSA-FOFPID controller for AGC of multi-area power system with multi-type generations (Sustainable Computing: Informatics and Systems, vol. 44, Article 101046)