Handling imbalance dataset issue in insider threat detection using machine learning methods
Pub Date: 2024-10-01 | DOI: 10.1016/j.compeleceng.2024.109726
Insider threats, characterized by their harmful impact and substantial costs, arise from internal factors within organizations. These threats are rare and usually go unnoticed, as the malicious actions are submerged in large volumes of normal activity, causing dataset imbalance and making detection difficult. To address these challenges, this paper proposes a Two-Step Insider Threat Detection (TSITD) approach. First, it preprocesses the CERT r4.2 and r5.2 datasets into day-long sequences. Second, it handles the dataset imbalance and detects threats by forming various combinations of sampling techniques and classifiers, referred to as TSITD models. Compared with baseline models, the TSITD models show a significant improvement in anomaly detection rate and balanced accuracy, and they also achieve higher rankings when evaluated using the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method.
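TOPSIS, used above to rank the detection models, is a standard multi-criteria procedure: vector-normalize the decision matrix, weight it, and score each alternative by its closeness to the ideal solution. A minimal sketch follows; the decision matrix, weights, and criteria are illustrative inventions, not the paper's actual evaluation data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: (n_alternatives, n_criteria); benefit[j] is True if larger is better."""
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    norm = m / np.linalg.norm(m, axis=0)
    v = norm * np.asarray(weights, dtype=float)
    # Ideal best/worst per criterion depend on whether it is a benefit or a cost.
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)  # closeness coefficient: higher = better

# Illustrative example: three models scored on detection rate (benefit)
# and false-positive rate (cost).
scores = topsis([[0.90, 0.10], [0.80, 0.05], [0.95, 0.20]],
                weights=[0.6, 0.4], benefit=[True, False])
ranking = np.argsort(-scores)  # best model first
```

Here the second model wins: its slightly lower detection rate is outweighed by its much lower false-positive rate under these (assumed) weights.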
Trustworthy fog: A reputation-based consensus method for IoT with blockchain and fog computing
Pub Date: 2024-10-01 | DOI: 10.1016/j.compeleceng.2024.109749
This paper proposes Trustworthy Fog, a novel reputation-based consensus method for Internet of Things (IoT) systems that leverages blockchain and fog computing technologies. By integrating fog computing’s near-end data processing capabilities with blockchain’s immutability and transparency, the proposed method addresses challenges related to latency, device load, and the adaptability of traditional consensus algorithms to resource-constrained environments. A reputation management module evaluates device and node behaviors, facilitating rapid authentication and consensus processes. Distinct reputation calculation schemes for physical devices and fog nodes aim to prevent reputation centralization through periodic resets of reputation values. Based on these values, a lightweight consensus algorithm balances computational capacity and reputation to select leader nodes. Simulations demonstrate the method’s effectiveness in dynamically reflecting device trustworthiness and ensuring fair consensus participation. This research advances IoT blockchain technology, offering a robust solution for the scalability and security challenges inherent in IoT networks.
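The leader selection described above balances reputation against computational capacity, with periodic resets to prevent reputation centralization. A toy sketch of such a scheme follows; the scoring weights, update rule, and reset value are assumptions for illustration, not the paper's actual scheme.

```python
class FogNode:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # normalized computational capacity in [0, 1]
        self.reputation = 0.5      # start from a neutral reputation

    def record_behaviour(self, honest, delta=0.1):
        # Reward honest behaviour, penalize misbehaviour; clamp to [0, 1].
        step = delta if honest else -delta
        self.reputation = min(1.0, max(0.0, self.reputation + step))

def select_leader(nodes, w_rep=0.7, w_cap=0.3):
    # Score each node by a weighted mix of reputation and capacity.
    return max(nodes, key=lambda n: w_rep * n.reputation + w_cap * n.capacity)

def periodic_reset(nodes, value=0.5):
    # Reset reputations periodically to prevent reputation centralization.
    for n in nodes:
        n.reputation = value

nodes = [FogNode("a", 0.9), FogNode("b", 0.4), FogNode("c", 0.6)]
nodes[1].record_behaviour(honest=True)
nodes[1].record_behaviour(honest=True)
leader = select_leader(nodes)  # node "a": high capacity edges out node "b"
```

The min/max weighting is the simplest possible combination; a real consensus layer would also need Sybil resistance and on-chain anchoring of the reputation values.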
Reliability and security improvement of distribution system using optimal integration of WTDGs and SMESs considering DSTATCOM functionality based on an enhanced walrus optimization algorithm
Pub Date: 2024-10-01 | DOI: 10.1016/j.compeleceng.2024.109733
This paper aims to improve the customer-oriented reliability indices (CORIs), load-oriented reliability indices (LORIs), and security of the electric distribution system (EDS). This is achieved through the optimal placement and sizing of wind turbine distributed generators (WTDGs) and superconducting magnetic energy storages (SMESs), which incorporate DSTATCOM functionality. The LORIs include energy not supplied (ENS) and average energy not supplied (AENS), while the CORIs consist of the system average interruption frequency index (SAIFI), system average interruption duration index (SAIDI), average service unavailability index (ASUI), and customer average interruption duration index (CAIDI). The network security index (NSI), which assesses the risk of line current flows approaching critical levels, is also examined. A multi-objective function based on optimized weight factors is developed to simultaneously reduce NSI, ASUI, ENS, SAIDI, and SAIFI using an enhanced walrus optimization algorithm (EWaOA) along with sensitivity factor analysis. This optimizer is an improved form of the traditional Walrus Optimization Algorithm (WaOA), designed to better balance the exploration and exploitation stages, thereby avoiding local optima and improving overall performance. The EWaOA's effectiveness is tested on seven benchmark functions and compared with the conventional WaOA and other recent algorithms. The paper also examines the charging and discharging real power of the SMESs, together with their initial state of charge (SOC). The proposed method is applied to the IEEE 33-bus EDS, considering a mixed time-varying voltage-dependent (TVVD) load model. The results indicate that the optimal integration of WTDGs and SMESs with DSTATCOM functionality significantly enhances the reliability and security of the tested EDS.
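The customer-oriented indices named above have standard definitions based on interruption counts and durations per customer served. A small sketch computing them from per-event records follows; the feeder data are invented for illustration.

```python
def reliability_indices(interruptions, total_customers, hours_per_year=8760.0):
    """interruptions: list of (customers_affected, duration_hours) events."""
    total_int = sum(c for c, _ in interruptions)
    total_cust_hours = sum(c * d for c, d in interruptions)
    saifi = total_int / total_customers          # interruptions / customer / year
    saidi = total_cust_hours / total_customers   # outage hours / customer / year
    caidi = saidi / saifi if saifi else 0.0      # average hours per interruption
    asui = saidi / hours_per_year                # unavailability fraction
    return {"SAIFI": saifi, "SAIDI": saidi, "CAIDI": caidi, "ASUI": asui}

# Illustrative feeder: 1000 customers, three interruption events in a year.
idx = reliability_indices([(200, 2.0), (500, 0.5), (100, 4.0)],
                          total_customers=1000)
# SAIFI = 800/1000 = 0.8; SAIDI = 1050/1000 = 1.05 h; CAIDI = 1.3125 h
```

ENS and AENS would additionally need the load (kW) interrupted per event, which these per-customer records do not carry.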
Detection of artificial spots in fundus images using modified U-Net based semantic segmentation
Pub Date: 2024-10-01 | DOI: 10.1016/j.compeleceng.2024.109719
Today, deep learning algorithms play a crucial role in the early-stage diagnosis of several fundus diseases, such as glaucoma, hypertension, and diabetic retinopathy, and this area is the subject of extensive ongoing research. During the acquisition of fundus images, artificial spots (e.g., caused by the device itself or by dust particles in the surroundings) can appear in the captured images. In this paper, artificial spots in fundus images arising from the non-standardized conditions of scanning devices are detected with a newly proposed modified U-Net (mU-Net) semantic segmentation model. Initially, preprocessing methods such as Gaussian blur, thresholding, and the Hough transform are used to create artificial spots, and these preprocessed images are then used to train the proposed model. To make the model more effective, several modifications are made to the basic U-Net: regularization techniques (early stopping, larger weight decay, and the Adam optimizer), a decaying learning-rate scheduler, the categorical cross-entropy loss function, and a revised number of filters. In addition, a feature injecting module (FIM) is inserted between the contraction and expansion sections of the U-Net; the FIM re-injects features of the input image during up-sampling, which improves the detection of artificial spots and enhances the model's performance. The mU-Net is compared with other models, namely the simple U-Net, V-Net, UNet++, ResUnet-a, WideU-Net, and Swin-Unet. A Friedman test conducted on the IOU, DICE, MAE, PSNR, and SSIM scores found that mU-Net balances the evaluation metrics well; this nonparametric test supports reproducibility by establishing the statistical significance of the comparisons. The IOU, DICE, MAE, PSNR, and SSIM scores of the proposed model indicate superior performance compared with the other models.
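The Friedman test used for the model comparison ranks the models within each block (dataset or metric) and tests whether the rank sums differ more than chance would allow. A minimal pure-Python sketch of the statistic follows; it assumes no tied scores within a block, and the sample scores are invented.

```python
def friedman_statistic(scores):
    """scores[i][j]: score of model j on block i (higher = better).
    Returns the Friedman chi-square statistic and per-model rank sums.
    Assumes no ties within a block (no tie correction applied)."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        # Rank models within the block: the best score gets rank 1.
        order = sorted(range(k), key=lambda j: row[j], reverse=True)
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    # Friedman chi-square: 12/(n k (k+1)) * sum(R_j^2) - 3 n (k+1)
    chi2 = 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) \
           - 3.0 * n * (k + 1)
    return chi2, rank_sums

# Three models scored on four blocks; model 0 is consistently best.
chi2, ranks = friedman_statistic([[0.90, 0.80, 0.70],
                                  [0.95, 0.85, 0.60],
                                  [0.88, 0.80, 0.79],
                                  [0.91, 0.70, 0.72]])
# chi2 = 6.5, above the chi-square critical value 5.99 (df=2, alpha=0.05)
```

In practice `scipy.stats.friedmanchisquare` does the same computation with tie handling and also returns the p-value.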
A pioneering approach for early prediction of sudden cardiac death via morphological ECG features measurement and ensemble growing techniques
Pub Date: 2024-09-30 | DOI: 10.1016/j.compeleceng.2024.109740
Sudden cardiac death (SCD) is a devastating cardiovascular condition that occurs within one hour of symptom onset, usually without warning. The primary cause is a disruption in the heart's electrical system, leading to the cessation of blood flow and oxygen delivery to vital organs. Despite medical advancements, SCD prognosis remains poor, necessitating risk identification for lifesaving interventions. Hence, in this study we analyse the morphological changes in electrocardiogram (ECG) signals associated with various cardiac conditions, including SCD and other conditions that can lead to its development. The ECG signals were pre-processed using a two-stage filter technique involving the wavelet transform (WT) and a progressive switching mean filter (PSMF) to eliminate noise and outliers. The denoised signals were then segmented and used to extract temporal and amplitude features related to the P-wave, QRS complex, and T-wave components. These extracted features are further refined and given to a novel Ensemble Growing (EG) technique, which enhances the classification accuracy for different cardiac conditions. Examination of the experimental findings revealed that the temporal features play an important role in the development of SCD. In particular, prolonged durations of t_P-wave, t_QRS complex, t_T-wave, t_PpRp, t_RpSp, t_RpTp, t_PpQp, t_PpSp, t_PpTp, t_QpSp, and t_QpTp are closely associated with SCD. Furthermore, incorporating the significant temporal and amplitude features with the EG technique produced an SCD prediction accuracy of 99.82 % one hour before onset. This method offers advantages including efficient handling of multiple cardiac conditions and real-time prediction, representing a major advance towards proactive cardiac care and early SCD prediction.
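The temporal features listed above are durations between fiducial points of the beat. A sketch converting detected fiducial sample indices into millisecond durations follows; the index values and sampling rate are invented, and a real pipeline would first locate these points on the filtered signal.

```python
def interval_ms(start_idx, end_idx, fs):
    """Duration in milliseconds between two fiducial sample indices."""
    return (end_idx - start_idx) * 1000.0 / fs

FS = 360  # Hz; a common sampling rate for public ECG databases

# Hypothetical fiducial points (sample indices) for one segmented beat.
beat = {"P_on": 100, "P_off": 130, "Q": 150, "R": 165,
        "S": 180, "T_on": 220, "T_off": 290}

features = {
    "t_P-wave": interval_ms(beat["P_on"], beat["P_off"], FS),  # P duration
    "t_QRS":    interval_ms(beat["Q"], beat["S"], FS),         # QRS duration
    "t_T-wave": interval_ms(beat["T_on"], beat["T_off"], FS),  # T duration
    "t_QT":     interval_ms(beat["Q"], beat["T_off"], FS),     # QT-like span
}
```

Prolongation of such intervals relative to a patient baseline is the kind of signal the study associates with elevated SCD risk.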
Data-driven assessment of VI diagrams for inference on pantograph quantities waveform distortion in AC railways
Pub Date: 2024-09-30 | DOI: 10.1016/j.compeleceng.2024.109730
This work proposes an application of unsupervised deep learning (DL) on 2-D images containing VI diagrams of measured railway pantograph quantities to find patterns in operating conditions (OCs) and waveform distortion. Measurement data consist of pantograph voltage and current measurements from a Swiss 15 kV 16.7 Hz commercial locomotive and a French 2x25 kV 50 Hz test-dedicated locomotive, containing more than 4000 records of 5-cycle snippets for each system. A variational autoencoder (VAE), followed by feature clustering, finds patterns in the input data. Each cluster captures patterns from the VI diagrams, which contain information from current and voltage waveshapes and sub-second variations. The time-domain admittance allows inference about the rolling stock (RS) operation and the waveform distortion spectra, including harmonic and supraharmonic characteristics from both the RS and the traction supply. The VAE successfully performs data embedding using only 16 channels in the latent space. The effectiveness of the method is quantified by the mean square reconstruction error (never larger than 1.5 %, and equal to 0.31 % and 0.33 % on average for the Swiss and French cases, respectively). The t-SNE visualization confirms that cluster overlap is negligible, with 2.18 % and 2.50 % of cluster points "misplaced" for the Swiss and French cases, respectively. The computation time for the VAE prediction can be reduced to a few tens of milliseconds, providing a performance reference for future implementations. The proposed VI diagram assessment covers emissions for different OCs, rapid changes in power supply conditions, and background distortion caused by other trains on the same line, including line and impedance changes due to the moving load. Physical justification for the identified clusters is provided through the integration of domain knowledge. A concluding discussion of advantages, limitations, and potential improvements is also included.
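The clustering step on the VAE's latent features can be sketched with a tiny k-means in pure numpy. The 2-D points below are synthetic stand-ins for the 16-channel latent vectors; the paper's actual encoder, cluster count, and data are not reproduced here.

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Plain k-means: alternate nearest-center assignment and center update."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every center, then assign the nearest.
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([x[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two synthetic, well-separated groups of "latent" points.
rng = np.random.default_rng(1)
a = rng.normal([0.0, 0.0], 0.1, size=(50, 2))
b = rng.normal([5.0, 5.0], 0.1, size=(50, 2))
labels, centers = kmeans(np.vstack([a, b]), k=2)
```

With well-separated groups the two recovered centers land near (0, 0) and (5, 5); on real embeddings the cluster count would be chosen by inspection or a validity index, and the t-SNE view mentioned above serves only as a visual check.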
Maximizing virtual power plant profit: A two-level optimization model for energy market participation
Pub Date: 2024-09-30 | DOI: 10.1016/j.compeleceng.2024.109732
Managing dispersed generation via virtual power plants (VPPs) is crucial for maximizing profits in electricity markets. This paper presents a model aimed at maximizing VPP profit through participation in the energy market. The proposed model addresses grid and security constraints of units using deterministic programming, formulated as an equilibrium-constrained, two-level mathematical optimization model. The first level focuses on maximizing VPP profit, while the second optimizes social welfare. Applying duality theory transforms this two-level model into a mixed-integer linear programming model, further refined using Karush–Kuhn–Tucker (KKT) optimality conditions. Given the inherent conflict in these objectives, a novel algorithm employing water flow dynamics is proposed for solving the model. To enhance method performance, the Pareto criterion and fuzzy decision-making are incorporated. Model tests are conducted on a standard IEEE 24-bus grid, demonstrating its efficiency. For the single-objective problem without line congestion, the solving time was 12 s. Introducing line congestion increased the profit by 13.4 %, from $40,413.21 to $45,837.32. In the two-objective problem without congestion, the profit ranged between $36,928.72 and $42,813.28, and emissions ranged from 275.21 to 2,916.32 pounds. With congestion, the profit range increased by a maximum of 8.7 %, and emissions were reduced by up to 4.6 %.
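The fuzzy decision-making step over a Pareto front can be sketched as a max–min membership selection: each objective gets a linear membership function over its observed range, and the compromise solution maximizes the worst membership. The candidate profit/emission pairs below are illustrative points spanning the ranges reported above, not the paper's actual Pareto set.

```python
def fuzzy_select(solutions):
    """solutions: list of (profit, emissions) Pareto points.
    Profit is maximized, emissions minimized; return the max-min point."""
    profits = [p for p, _ in solutions]
    emissions = [e for _, e in solutions]
    p_min, p_max = min(profits), max(profits)
    e_min, e_max = min(emissions), max(emissions)
    best, best_mu = None, -1.0
    for p, e in solutions:
        mu_p = (p - p_min) / (p_max - p_min)   # membership 1 at max profit
        mu_e = (e_max - e) / (e_max - e_min)   # membership 1 at min emissions
        mu = min(mu_p, mu_e)                   # conservative (min) operator
        if mu > best_mu:
            best, best_mu = (p, e), mu
    return best, best_mu

# Illustrative Pareto front between profit ($) and emissions (lb).
front = [(36928.72, 275.21), (40000.00, 1200.00), (42813.28, 2916.32)]
choice, mu = fuzzy_select(front)  # the middle compromise point wins
```

The extreme points each score a membership of zero in one objective, so the min operator always steers the choice toward an interior compromise.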
Exploring the convergence of Metaverse, Blockchain, Artificial Intelligence, and digital twin for pioneering the digitization in the envision smart grid 3.0
Pub Date: 2024-09-30 | DOI: 10.1016/j.compeleceng.2024.109709
The ongoing evolution of the Metaverse, digital twin (DT), artificial intelligence (AI), and Blockchain technologies is fundamentally transforming the utilization of sustainable energy resources (SERs) within smart grids (SGs), ushering in the era of smart grid 3.0 (SG 3.0). This paradigm shift presents unprecedented opportunities to establish robust and sustainable energy trading frameworks between consumers and utilities within the SG 3.0 environment. The integration of DT, AI, and Blockchain technologies in the context of the Metaverse's smart grids, facilitated by optimized communication protocols, represents a transformative approach. AI models play a pivotal role in predictive energy trading analysis, while DT assumes a crucial role in effectively managing diverse SERs within SGs. Simultaneously, Blockchain technology holds the potential to create a trusted and decentralized environment, laying the foundation for an energy trading system within the SG 3.0 universe of the Metaverse. This convergence of DT and Blockchain sets the stage for a futuristic paradigm in SGs, establishing a robust foundation for energy trading and management. This paper surveys the convergence of the Metaverse, Blockchain, AI, and DT, exploring uncharted research paths and new dimensions in this space. Its primary objective is to drive the digitization of SERs within SGs and thereby revolutionize energy systems. The envisioned SG 3.0 represents a significant leap by amalgamating cutting-edge technologies in sustainable energy, paving the way for revolutionary advancements in SGs' digitization, which aligns with and contributes to achieving the sustainable development goals outlined by the United Nations.
Pub Date : 2024-09-30DOI: 10.1016/j.compeleceng.2024.109647
FACTS devices are widely used in power systems because of their capacity to improve system stability. The Static Synchronous Compensator (STATCOM), a shunt-connected member of the FACTS device family, is used for power compensation, power balancing, and enhancing dynamic stability in contemporary power systems. In the proposed work, a novel STATCOM based on a nine-level switched-capacitor multi-level inverter (MLI) is used to mitigate power quality issues. The proposed inverter requires only a small number of switches and a single voltage source. Placing STATCOM-based inverters at unsuitable locations can increase power losses and degrade power quality. To overcome these issues, green anaconda optimization (GAO) is used to place the proposed system at an optimal location. The performance of the proposed inverter is examined using the MATLAB/Simulink tool. Compared with existing conventional topologies, this inverter requires fewer switches and achieves DC-link voltage balance. The STATCOM-based inverter is validated under different fault conditions to demonstrate its effectiveness. The proposed nine-level inverter compensates under degraded grid conditions while improving power quality. The proposed method achieves a substantially lower total harmonic distortion (THD) of 1.04 % while maintaining an efficiency of 99.02 %. Experimental validation is carried out using a dSPACE RTI1104 controller; the experimental results show 1.04 % THD with the same output voltage as the simulation.
{"title":"Novel nine level switched capacitor multi-level inverter based STATCOM for distribution system","authors":"","doi":"10.1016/j.compeleceng.2024.109647","DOIUrl":"10.1016/j.compeleceng.2024.109647","url":null,"abstract":"<div><div>FACTS devices are widely used in power systems because of their capacity to improve system stability. The Static Synchronous Compensator (STATCOM), a shunt-connected member of the FACTS device family, is used for power compensation, power balancing, and enhancing dynamic stability in contemporary power systems. In the proposed work, a novel STATCOM based on a nine-level switched-capacitor multi-level inverter (MLI) is used to mitigate power quality issues. The proposed inverter requires only a small number of switches and a single voltage source. Placing STATCOM-based inverters at unsuitable locations can increase power losses and degrade power quality. To overcome these issues, green anaconda optimization (GAO) is used to place the proposed system at an optimal location. The performance of the proposed inverter is examined using the MATLAB/Simulink tool. Compared with existing conventional topologies, this inverter requires fewer switches and achieves DC-link voltage balance. The STATCOM-based inverter is validated under different fault conditions to demonstrate its effectiveness. The proposed nine-level inverter compensates under degraded grid conditions while improving power quality. The proposed method achieves a substantially lower total harmonic distortion (THD) of 1.04 % while maintaining an efficiency of 99.02 %. Experimental validation is carried out using a dSPACE RTI1104 controller; the experimental results show 1.04 % THD with the same output voltage as the simulation.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142419169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
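The 1.04 % THD figure quoted above is a standard FFT-based measure: the RMS of the harmonic magnitudes relative to the fundamental. A minimal sketch of such an estimator (not the authors' code; the function name and parameters are illustrative):

```python
import numpy as np

def thd_percent(signal, fs, f0, n_harmonics=20):
    """Estimate total harmonic distortion (%) of a sampled waveform via the FFT.

    signal: 1-D array of samples; fs: sample rate in Hz; f0: fundamental in Hz.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n          # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def mag_at(f):
        # Magnitude at the FFT bin nearest frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fundamental = mag_at(f0)
    harmonics = [mag_at(k * f0) for k in range(2, n_harmonics + 1)]
    return 100.0 * np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

# Example: a 50 Hz sine with a 1.04 % third harmonic, sampled over exactly one second
fs = 10_000
t = np.arange(fs) / fs
waveform = np.sin(2 * np.pi * 50 * t) + 0.0104 * np.sin(2 * np.pi * 150 * t)
print(round(thd_percent(waveform, fs, 50), 2))
```

For an accurate reading the capture window should span an integer number of fundamental cycles (as above), otherwise spectral leakage inflates the estimate.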
Pub Date : 2024-09-30DOI: 10.1016/j.compeleceng.2024.109737
With the rapid development of the Internet of Things (IoT), terminal-edge-cloud collaborative computing (TECC), a hierarchical distributed computing model, has become an effective solution for meeting diverse application requirements for high computing power, large storage capacity, and low-latency services. However, TECC still faces many challenges in data security and privacy protection. Integrating blockchain into TECC frameworks can enable trustful data exchange between nodes while ensuring the integrity and availability of data. This paper presents an in-depth survey of blockchain-based TECC technology. In particular, we first briefly introduce the key concepts of the TECC paradigm and blockchain. We then focus on the integration of blockchain into the TECC paradigm. Specifically, the paper divides TECC frameworks into three categories: one-layer, two-layer, and multi-layer. Moreover, it summarizes the core technologies of blockchain-based TECC architectures and of TECC based on lightweight blockchains, and reviews application scenarios from different perspectives. Finally, the paper outlines future directions for blockchain-based TECC.
{"title":"Overview of blockchain-based terminal-edge-cloud collaborative computing paradigm","authors":"","doi":"10.1016/j.compeleceng.2024.109737","DOIUrl":"10.1016/j.compeleceng.2024.109737","url":null,"abstract":"<div><div>With the rapid development of the Internet of Things (IoT), terminal-edge-cloud collaborative computing (TECC), a hierarchical distributed computing model, has become an effective solution for meeting diverse application requirements for high computing power, large storage capacity, and low-latency services. However, TECC still faces many challenges in data security and privacy protection. Integrating blockchain into TECC frameworks can enable trustful data exchange between nodes while ensuring the integrity and availability of data. This paper presents an in-depth survey of blockchain-based TECC technology. In particular, we first briefly introduce the key concepts of the TECC paradigm and blockchain. We then focus on the integration of blockchain into the TECC paradigm. Specifically, the paper divides TECC frameworks into three categories: one-layer, two-layer, and multi-layer. Moreover, it summarizes the core technologies of blockchain-based TECC architectures and of TECC based on lightweight blockchains, and reviews application scenarios from different perspectives. Finally, the paper outlines future directions for blockchain-based TECC.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":null,"pages":null},"PeriodicalIF":4.0,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142444889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
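The terminal/edge/cloud hierarchy the survey describes can be pictured as a per-task placement decision: run a task on the lowest tier whose compute time plus network round trip still meets its latency budget. A minimal illustrative sketch (the tier names, capacities, and delays below are hypothetical, not taken from the survey):

```python
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float       # required compute, in CPU cycles
    deadline_ms: float  # end-to-end latency budget, in milliseconds

# Hypothetical per-tier capacities: (compute speed in cycles/ms, round-trip delay in ms).
# Ordered from lowest (closest to the device) to highest tier.
TIERS = {
    "terminal": (1e6, 0.0),   # on-device: slow, but no network hop
    "edge":     (1e7, 5.0),   # nearby edge server: faster, small delay
    "cloud":    (1e8, 50.0),  # data center: fastest, largest delay
}

def place(task: Task):
    """Return the lowest tier that meets the task's deadline, or None if infeasible."""
    for tier, (speed, rtt) in TIERS.items():
        if task.cycles / speed + rtt <= task.deadline_ms:
            return tier
    return None

print(place(Task(cycles=5e7, deadline_ms=10)))  # offloaded one hop up
```

Real TECC schedulers weigh energy, bandwidth, and (in blockchain-based designs) consensus overhead as well, but the tiered trade-off between compute speed and network delay is the core of the paradigm.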