Sign language recognition (SLR) plays a crucial role in bridging the communication gap between individuals with hearing impairments and the hearing community. This study explores the use of artificial intelligence (AI) in SLR through a comprehensive bibliometric analysis of 2,720 articles published from 1988 to 2024. Utilizing tools like VOSviewer and CiteSpace, the research uncovers the landscape of publication outputs, influential articles, leading authors, as well as the intellectual framework of current topics and emerging trends. The findings indicate that since the inception of SLR research in 1988, there has been a rapid expansion in the field, particularly from 2004 onwards. China and India lead in research productivity. Keyword and co-citation analyses highlight that Hidden Markov Model, Kinect, and Deep Learning have been focal points at various stages of SLR development, while transfer learning, Bidirectional Long Short-Term Memory, attention mechanisms, and Transformer models represent recent emerging trends. This research offers valuable insights for scholars and practitioners interested in AI-based SLR.
Title: Artificial intelligence in sign language recognition: A comprehensive bibliometric and visual analysis
Authors: Yanqiong Zhang, Yu Han, Zhaosong Zhu, Xianwei Jiang, Yudong Zhang
Computers & Electrical Engineering, Volume 120, Article 109854. DOI: 10.1016/j.compeleceng.2024.109854. Published: 2024-11-14.
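The keyword co-occurrence counting that underlies maps like those VOSviewer draws can be sketched in a few lines. The keyword lists below are invented illustrations, not data from the study:

```python
from collections import Counter
from itertools import combinations

# Toy corpus: author keywords per article (hypothetical examples only).
articles = [
    ["hidden markov model", "sign language recognition", "gesture"],
    ["deep learning", "sign language recognition", "kinect"],
    ["deep learning", "transformer", "sign language recognition"],
    ["transformer", "attention mechanism", "sign language recognition"],
]

def cooccurrence(keyword_lists):
    """Count how often each unordered keyword pair appears in the same article."""
    pairs = Counter()
    for kws in keyword_lists:
        # sorted() gives a canonical key order so (a, b) and (b, a) merge.
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

links = cooccurrence(articles)
top_pair, top_count = links.most_common(1)[0]
```

Edge weights like these, accumulated over thousands of records, are what a bibliometric tool clusters and draws as a co-occurrence network.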
Pub Date: 2024-11-14 | DOI: 10.1016/j.compeleceng.2024.109847
Fangyi Zhao
This paper presents a novel intelligent planning approach to optimize microgrid management with multiple stochastic renewable energy sources. The key contribution is a salp swarm algorithm enhanced with chaos theory to avoid local optima and premature convergence. The system incorporates various components (photovoltaic units, wind turbines, fuel cells, micro-turbines, energy storage, and electrolysis) and accounts for smart home participation in energy demand response. Using a scenario-based method, it models uncertainties in wind speed, solar radiation, electricity demand, and price. The paper compares batteries and hydrogen storage tanks as energy storage options and validates the algorithm's effectiveness through four cases that evaluate systems with and without demand response (DR) and hydrogen storage. The results show that integrating DR and hydrogen storage reduces costs by 12.4% and 23.4%, respectively, compared to the reference model, demonstrating significant economic benefits and performance improvements in microgrid management. The paper also presents a comparative analysis of battery and hydrogen storage, highlighting the efficiency and economic benefits of hybrid storage systems. By incorporating stochastic modeling and multi-objective optimization, the proposed approach enhances energy efficiency, reliability, and cost-effectiveness.
Title: Optimizing Microgrid Management with Intelligent Planning: A Chaos Theory-Based Salp Swarm Algorithm for Renewable Energy Integration and Demand Response
Computers & Electrical Engineering, Volume 120, Article 109847.
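The chaos enhancement described in this abstract can be illustrated with a minimal salp swarm optimizer whose leader phase draws from a logistic chaotic map instead of uniform randoms. This is a sketch under assumptions (logistic map, the standard SSA leader/follower updates); the paper's exact variant is not specified here:

```python
import numpy as np

def logistic_map(x, r=4.0):
    """One step of the logistic chaotic map on values in (0, 1)."""
    return r * x * (1.0 - x)

def chaotic_ssa(objective, lb, ub, n_salps=20, n_iter=100, seed=0):
    """Salp swarm optimizer with chaotic draws in the leader update (sketch)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    pos = rng.uniform(lb, ub, size=(n_salps, lb.size))
    chaos = rng.uniform(0.1, 0.9, size=lb.size)   # chaotic sequence state
    best = min(pos, key=objective).copy()
    best_val = objective(best)
    for t in range(n_iter):
        c1 = 2.0 * np.exp(-((4.0 * (t + 1) / n_iter) ** 2))  # standard SSA decay
        chaos = logistic_map(chaos)
        direction = logistic_map(chaos)           # second chaotic draw
        for i in range(n_salps):
            if i == 0:   # leader explores around the best-so-far ("food source")
                step = c1 * ((ub - lb) * chaos + lb)
                pos[i] = np.where(direction < 0.5, best + step, best - step)
            else:        # followers move toward the salp ahead of them
                pos[i] = 0.5 * (pos[i] + pos[i - 1])
        pos = np.clip(pos, lb, ub)
        for p in pos:
            v = objective(p)
            if v < best_val:
                best_val, best = v, p.copy()
    return best, best_val

# Usage: minimize the 2-D sphere function on [-5, 5]^2.
best, val = chaotic_ssa(lambda x: float(np.sum(x * x)), [-5.0, -5.0], [5.0, 5.0])
```

Replacing uniform draws with a chaotic sequence is the generic recipe for the "prevent premature convergence" claim: the sequence is deterministic but non-repeating, which keeps the leader's exploration pattern from settling early.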
In previous studies, the behaviour of each vehicle is assumed to be constant in all situations. This assumption is unrealistic: physical factors such as load conditions and turn times significantly affect vehicle operation, yet they are not considered in the trajectory planning process. The resulting discrepancies between planned and actual travel times make collisions unpredictable. Therefore, an algorithm for determining travel time between two adjacent nodes (A2D2T-A2AN) is proposed to resolve such collisions, and vehicles can be controlled with varying accelerations depending on load conditions. Furthermore, an algorithm to adjust arrival times when a vehicle approaches a node (A2CAT-VT2N) is developed. To verify the efficiency and feasibility of these algorithms, several experiments were conducted in a chessboard-map simulation under different conditions. The results demonstrate that the method is both suitable and effective for real-world applications.
Title: Towards sustainable scheduling of a multi-automated guided vehicle system for collision avoidance
Authors: Thanh Phuong Nguyen, Hung Nguyen, Ha Quang Thinh Ngo
Computers & Electrical Engineering, Volume 120, Article 109824. DOI: 10.1016/j.compeleceng.2024.109824. Published: 2024-11-14.
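The idea that load conditions should change planned travel times can be made concrete with a trapezoidal velocity profile whose acceleration is derated by load. The linear derating rule below is a hypothetical stand-in for illustration, not the paper's A2D2T-A2AN algorithm:

```python
import math

def travel_time(distance, v_max, a_unloaded, load_ratio):
    """Time to cover `distance` starting and ending at rest.

    load_ratio in [0, 1] scales acceleration down linearly: a fully loaded
    vehicle (load_ratio=1) accelerates at half the unloaded rate (assumption).
    """
    a = a_unloaded * (1.0 - 0.5 * load_ratio)
    d_accel = v_max ** 2 / (2 * a)       # distance needed to reach v_max
    if 2 * d_accel >= distance:          # triangular profile: never reach v_max
        return 2 * math.sqrt(distance / a)
    t_accel = v_max / a
    d_cruise = distance - 2 * d_accel
    return 2 * t_accel + d_cruise / v_max

# A loaded vehicle needs more time over the same 10 m edge than an empty one.
t_empty = travel_time(10.0, v_max=2.0, a_unloaded=1.0, load_ratio=0.0)  # 7.0 s
t_full = travel_time(10.0, v_max=2.0, a_unloaded=1.0, load_ratio=1.0)   # 9.0 s
```

A scheduler that books node arrival windows from `t_empty`-style constants, while the vehicle actually moves on the `t_full` profile, accumulates exactly the planned-versus-actual discrepancy the abstract describes.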
Pub Date: 2024-11-13 | DOI: 10.1016/j.compeleceng.2024.109831
Tuncay Eren
Waveform design plays a crucial role in ensuring the seamless operation of next-generation wireless networks. Orthogonal frequency division multiplexing (OFDM)-based waveforms remain of interest and a viable candidate for the physical layer of sixth-generation (6G) networks. The universal filtered multicarrier (UFMC) waveform, a variant of OFDM, exhibits several advantages over traditional OFDM, particularly reduced spectral leakage and increased spectral efficiency. However, it incurs higher computational complexity than OFDM, particularly during the subband-wise convolution stage at the IFFT outputs. To alleviate this complexity in practical applications, the algorithms at both the transmitter and receiver sides must be redesigned. This paper proposes a UFMC transmitter architecture that incorporates two-stage convolutional filtering to reduce complexity. Numerical analyses demonstrate a significant reduction in complexity (over 80% fewer multiplications) together with a slight improvement in bit error rate (BER) performance compared to the conventional scheme. This proposed architecture presents a promising solution to the complexity challenges encountered in UFMC systems, making them more viable for practical implementation in various communication environments.
Title: Novel low-complexity transceiver design for UFMC system with two-stage filtering
Computers & Electrical Engineering, Volume 120, Article 109831.
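The subband-wise convolution stage that dominates conventional UFMC transmitter complexity can be sketched directly. The sizes and the Hamming-window filter below are illustrative choices, and the proposed two-stage design is not reproduced:

```python
import numpy as np

def ufmc_transmit(symbols, n_fft, subband_size, filt):
    """Conventional UFMC transmitter sketch: per-subband IFFT + FIR filtering,
    then superposition of the filtered subband signals."""
    n_subbands = len(symbols) // subband_size
    out = np.zeros(n_fft + len(filt) - 1, dtype=complex)
    for b in range(n_subbands):
        freq = np.zeros(n_fft, dtype=complex)
        s = b * subband_size
        freq[s:s + subband_size] = symbols[s:s + subband_size]
        # One IFFT and one length-L convolution per subband: the costly stage.
        out += np.convolve(np.fft.ifft(freq), filt)
    return out

rng = np.random.default_rng(0)
qam = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)  # QPSK symbols
filt = np.hamming(16)            # stand-in subband filter (assumption)
tx = ufmc_transmit(qam, n_fft=128, subband_size=16, filt=filt)
```

Because IFFT and convolution are both linear, the superposition of filtered subbands equals a single full-band filtering when every subband shares one filter; restructurings of exactly this kind are what low-complexity transmitter designs exploit.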
Pub Date: 2024-11-13 | DOI: 10.1016/j.compeleceng.2024.109829
Zhiyin Chen , Youliang Tian , Feng Zhou , Wei Xiong , Ze Yang , Shuai Wang
Outsourcing computation is a key technology for optimizing resource utilization and handling complex data tasks, especially when local resources are insufficient. However, service providers may compute dishonestly or leak data out of self-interest, while clients may be reluctant to outsource computation because of high processing costs and the risk of malicious server behaviour. To address these issues, we propose a polynomial computation scheme based on game theory that achieves privacy-preserving computation and verifiability. Specifically, we formally construct a traditional two-party computation game model, analyse the benefits and motivations of the participants, and conclude that servers will behave selfishly and break the protocol to maximize their benefits, damaging clients' interests. Next, we propose a rational two-party polynomial computation protocol for efficient privacy-preserving computation between servers, and ensure the correctness of the computation through a sampling verification technique and a deposit mechanism. Finally, game analysis proves that our scheme effectively constrains the selfish behaviour of service providers and conserves clients' verification costs. Simulation experiments show that our scheme reduces the computation cost by more than 30% compared to other schemes.
Title: A rational and reliable model for outsourcing polynomial two-party computation
Computers & Electrical Engineering, Volume 120, Article 109829.
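Sampling-based verification of an outsourced polynomial evaluation can be sketched as the client re-evaluating a few random probe points with Horner's rule. The protocol shape below is hypothetical; the deposit mechanism and the game analysis are omitted:

```python
import random

def eval_poly(coeffs, x, p):
    """Evaluate a polynomial (coeffs[0] + coeffs[1]*x + ...) mod prime p
    with Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def verify_by_sampling(coeffs, claimed, p, n_probes=5, rng=random):
    """claimed: dict {x: y} of results the server returned for all requested
    points. The client spot-checks only n_probes of them."""
    probes = rng.sample(list(claimed), min(n_probes, len(claimed)))
    return all(eval_poly(coeffs, x, p) == claimed[x] for x in probes)
```

The client's cost scales with the number of probes rather than the full workload; a server that cheats on a fraction f of the points is caught with probability roughly 1 - (1 - f)^n_probes, which is the leverage that makes cheating irrational once a deposit is at stake.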
Pub Date: 2024-11-13 | DOI: 10.1016/j.compeleceng.2024.109839
Samparna Parida, Pawan Kumar, Santos Kumar Das
Cooperative non-orthogonal multiple access (CNOMA) stands out as a promising approach for fostering widespread user connectivity and promoting fair allocation of network resources. Self-sustainable, low-complexity, high-speed, ultra-reliable data transmission requires user-cooperative relaying with an optimal, low-complexity user selection scheme. This article models a wireless power transfer (WPT)-enabled, user-assisted CNOMA communication network over a Nakagami-m fading channel and compares the outage performance of user nodes under different existing relay selection techniques. A comparative outage performance analysis of several popular user selection schemes is conducted through simulation to identify the lowest-complexity scheme for the proposed network, since in conventional cooperative communication networks optimal performance cannot be achieved with low-complexity schemes. The article also derives closed-form exact expressions for the outage probability and throughput at both the near and the far user, as well as the energy efficiency of the proposed system under the optimal scheme. The derived expressions are validated through Monte Carlo simulation, and the outage performance under the optimal scheme is analysed while varying different parameters. Finally, the performance of the WPT-assisted CNOMA is compared with that of a Simultaneous Wireless Information and Power Transfer (SWIPT)-assisted CNOMA using various existing user selection schemes to find the best combination of techniques.
Title: Optimal user selection scheme in wireless powered downlink CNOMA communication under Nakagami-m fading channel
Computers & Electrical Engineering, Volume 120, Article 109839.
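The Monte Carlo validation over Nakagami-m fading can be illustrated for a single link: the power gain of a Nakagami-m envelope is Gamma-distributed with shape m and scale Ω/m, and an outage occurs when the received SNR cannot support the target rate. This sketches one link only, not the paper's full CNOMA relay network:

```python
import numpy as np

def outage_probability(snr_db, m, omega, rate_bps_hz, n_trials=200_000, seed=1):
    """Monte Carlo outage estimate for one Nakagami-m fading link."""
    rng = np.random.default_rng(seed)
    snr_lin = 10 ** (snr_db / 10)
    # Nakagami-m power gain ~ Gamma(shape=m, scale=omega/m).
    gain = rng.gamma(shape=m, scale=omega / m, size=n_trials)
    threshold = 2 ** rate_bps_hz - 1      # minimum SNR to support the rate
    return float(np.mean(gain * snr_lin < threshold))

# Outage falls as transmit SNR rises (illustrative parameters, not the paper's).
p_low = outage_probability(snr_db=5, m=2, omega=1.0, rate_bps_hz=1.0)
p_high = outage_probability(snr_db=15, m=2, omega=1.0, rate_bps_hz=1.0)
```

Plotting such estimates against a closed-form expression, here the regularized lower incomplete gamma function P(m, m·threshold/(Ω·SNR)), is the standard way the paper's kind of derivation is validated.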
Accurate and effective wind power forecasting is crucial for wind power dispatch and wind energy development. However, existing methods often lack adaptive updating capabilities and struggle to handle real-time changing data. This paper proposes a new hybrid wind power forecasting model that integrates the Maximal Information Coefficient (MIC), Density-Based Spatial Clustering of Applications with Noise (DBSCAN), an improved Harris Hawks Optimization (IHHO) algorithm, and an Adaptive Deep Learning model with Online Learning and Forgetting mechanisms (ADL-OLF). First, MIC is used to reconstruct input features, enhancing their correlation with the target variable, and DBSCAN is employed to handle outliers in the dataset. The ADL-OLF model enables continuous updating with new data through online learning and forgetting mechanisms. Its deep learning component consists of Bidirectional Long Short-Term Memory (BiLSTM) networks and self-attention mechanisms, which improve the prediction accuracy for sequential data. Finally, IHHO optimizes the parameters of the ADL-OLF model, achieving strong predictive performance and adaptability to real-time changing data. Experimental simulations based on actual wind power data over four seasons from a U.S. wind farm show that the proposed model achieves a coefficient of determination exceeding 0.99. Compared with 12 benchmark models (taking IHHO-ADL-OLF as an example), the Root Mean Square Error (RMSE) is reduced by more than 20%. These results indicate that the model significantly improves the accuracy and robustness of wind power forecasting, providing valuable references for the development and optimization of wind power systems.
Title: An improved hybrid model for wind power forecasting through fusion of deep learning and adaptive online learning
Authors: Xiongfeng Zhao, Hai Peng Liu, Huaiping Jin, Shan Cao, Guangmei Tang
Computers & Electrical Engineering, Volume 120, Article 109768. DOI: 10.1016/j.compeleceng.2024.109768. Published: 2024-11-13.
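The online-learning-with-forgetting idea can be illustrated with a recursive least-squares predictor driven by a forgetting factor, a deliberately simple stand-in for the ADL-OLF model (whose BiLSTM and attention components are not reproduced here):

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with forgetting factor lam: recent samples
    gradually outweigh old ones, so the fit tracks a drifting relationship."""

    def __init__(self, dim, lam=0.98, delta=100.0):
        self.w = np.zeros(dim)
        self.P = delta * np.eye(dim)   # inverse correlation estimate
        self.lam = lam

    def update(self, x, y):
        x = np.asarray(x, float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)   # gain vector
        err = y - self.w @ x
        self.w = self.w + k * err
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return err

# Track a drifting linear relation y = a(t) * x: the filter follows the change.
rng = np.random.default_rng(0)
model = ForgettingRLS(dim=1)
for t in range(400):
    a = 1.0 if t < 200 else 3.0        # abrupt regime change at t = 200
    x = np.array([rng.uniform(0.5, 1.5)])
    model.update(x, a * x[0] + 0.01 * rng.standard_normal())
```

A plain least-squares fit would end near the average of both regimes; the forgetting factor is what lets the estimate converge to the post-change slope, which is the adaptive-updating property the abstract emphasizes.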
Pub Date: 2024-11-09 | DOI: 10.1016/j.compeleceng.2024.109849
Zhicheng Huang , Langyu Xia , Huan Zhang , Fan Liu , Yanming Tu , Zefeng Yang , Wenfu Wei
As electricity is one of the most widely used forms of energy, the safety and stability of power systems are crucial to modern society. Grounding grids dissipate current and reduce touch and step voltages during lightning strikes or fault currents, ensuring the safety of personnel and equipment. However, prolonged burial in soil causes inevitable corrosion, compromising grounding efficacy by increasing resistance and reducing current dissipation; this deterioration can result in unsafe local potential differences. This study uses Laser-Induced Breakdown Spectroscopy (LIBS) to measure the degree of corrosion in grounding grids. Spectral data from samples with varying corrosion extents were collected, with outliers removed using the Local Outlier Factor (LOF) algorithm. Principal Component Analysis (PCA) reduced the data dimensionality, revealing clustering in the spectral data corresponding to corrosion extent. Three machine learning models were compared: Adaptive Boosting Backpropagation Neural Network (Adaboost-BP), Support Vector Machine (SVM), and Random Forest (RF). The RF model showed the highest accuracy in predicting corrosion degree (R² = 0.9845, MSE = 0.0296), outperforming Adaboost-BP and SVM, especially for intermediate corrosion extents.
Title: Evaluation of grounding grid corrosion extent based on laser-induced breakdown spectroscopy (LIBS) combined with machine learning
Computers & Electrical Engineering, Volume 120, Article 109849.
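The PCA step can be sketched on synthetic "spectra" in which a single emission line grows with corrosion degree; the data, the sensitive channel, and the noise level are all invented for illustration, and the downstream regressors are not reproduced:

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the mean-centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T              # projected coordinates
    explained = S[:n_components] ** 2 / np.sum(S ** 2)
    return scores, explained

# 90 synthetic spectra, 200 channels, three corrosion degrees (30 samples each).
rng = np.random.default_rng(0)
degrees = np.repeat([0.0, 1.0, 2.0], 30)
spectra = rng.normal(0.0, 0.05, size=(90, 200))
spectra[:, 50] += degrees        # one corrosion-sensitive emission line
scores, explained = pca(spectra, n_components=2)
```

Because the corrosion-driven variance dominates the noise, the first principal component lines up with the corrosion degree, which is the "clustering in spectral data" effect the abstract reports before regression.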
Pub Date: 2024-11-09 | DOI: 10.1016/j.compeleceng.2024.109785
Jiaxi Liu, Bolin Gao, Wei Zhong, Yanbo Lu, Shuo Han
The Intelligent and Connected Vehicle Cloud Control System is a critical approach for achieving high-level autonomous driving. One of the key challenges at the perception level is utilizing multi-source sensory data to create a real-time digital twin of the transportation system. Collaborative perception technology plays a pivotal role in addressing this challenge. However, most prior research has been conducted offline, where the focus has primarily been on comparing ground truth at the sensing timestamp with the algorithm’s predicted perception values. This approach tends to prioritize computational accuracy, neglecting the fact that the physical world continues to evolve during the processing time, which can result in an accuracy drop. As a result, there is a growing consensus that both latency and accuracy must be considered simultaneously for real-time applications, such as digital twins and beyond. To address this gap, we first analyze the comprehensive time delay problem in vehicle-road collaborative perception algorithms and formally define the real-time perception problem within this context. Next, we propose an adaptive optimization strategy for vehicle-road collaborative perception, which accounts for the complexity of the perception environment and the vehicle-road communication pipeline. Our approach dynamically selects the optimal model parameter set based on the perception scenario and real-time communication conditions. Experimental results demonstrate that our strategy enhances real-time performance by 5.8% compared to the best global single-model algorithm and by up to 27.5% compared to the conservative fixed single-model approach.
{"title":"Adaptive optimization strategy and evaluation of vehicle-road collaborative perception algorithm in real-time settings","authors":"Jiaxi Liu, Bolin Gao, Wei Zhong, Yanbo Lu, Shuo Han","doi":"10.1016/j.compeleceng.2024.109785","DOIUrl":"10.1016/j.compeleceng.2024.109785","url":null,"abstract":"<div><div>The Intelligent and Connected Vehicle Cloud Control System is a critical approach for achieving high-level autonomous driving. One of the key challenges at the perception level is utilizing multi-source sensory data to create a real-time digital twin of the transportation system. Collaborative perception technology plays a pivotal role in addressing this challenge. However, most prior research has been conducted offline, where the focus has primarily been on comparing ground truth at the sensing timestamp with the algorithm’s predicted perception values. This approach tends to prioritize computational accuracy, neglecting the fact that the physical world continues to evolve during the processing time, which can result in an accuracy drop. As a result, there is a growing consensus that both latency and accuracy must be considered simultaneously for real-time applications, such as digital twins and beyond. To address this gap, we first analyze the comprehensive time delay problem in vehicle-road collaborative perception algorithms and formally define the real-time perception problem within this context. Next, we propose an adaptive optimization strategy for vehicle-road collaborative perception, which accounts for the complexity of the perception environment and the vehicle-road communication pipeline. Our approach dynamically selects the optimal model parameter set based on the perception scenario and real-time communication conditions. 
Experimental results demonstrate that our strategy enhances real-time performance by 5.8% compared to the best global single-model algorithm and by up to 27.5% compared to the conservative fixed single-model approach.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"120 ","pages":"Article 109785"},"PeriodicalIF":4.0,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
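The latency-aware model selection described in this abstract can be sketched as a scheduler that discounts each model's offline accuracy by how far the scene drifts during inference plus communication delay, then picks the best-scoring parameter set. The quadratic drift penalty, the model profiles, and every number below are invented for illustration and are not the authors' algorithm.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    accuracy: float   # offline perception accuracy in [0, 1]
    latency_s: float  # on-board inference latency in seconds

def realtime_score(p: ModelProfile, comm_delay_s: float, drift: float = 2.0) -> float:
    """Offline accuracy discounted by scene drift over the total delay.

    The quadratic penalty is an invented stand-in for how much the
    physical world evolves while results are computed and transmitted.
    """
    total = p.latency_s + comm_delay_s
    return p.accuracy - drift * total ** 2

def select_model(profiles, comm_delay_s):
    """Pick the parameter set with the best latency-adjusted score."""
    return max(profiles, key=lambda p: realtime_score(p, comm_delay_s))

PROFILES = [
    ModelProfile("large",  accuracy=0.93, latency_s=0.20),
    ModelProfile("medium", accuracy=0.85, latency_s=0.08),
    ModelProfile("small",  accuracy=0.78, latency_s=0.03),
]

# Fast link: the heavy model's accuracy wins. Congested link: the scene
# drifts too much during its delay, so a lighter model scores higher.
print(select_model(PROFILES, comm_delay_s=0.01).name)  # → large
print(select_model(PROFILES, comm_delay_s=0.40).name)  # → small
```

The design point this illustrates is the one the abstract argues for: with any penalty that grows nonlinearly in total delay, the best model genuinely changes with real-time communication conditions rather than being fixed offline.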
Pub Date: 2024-11-09
DOI: 10.1016/j.compeleceng.2024.109837
Suriya Sharif, Asadur Rahman
Load-frequency-control (LFC) is employed for frequency stability and balanced power flow among control-areas. Inverters are becoming critical assets in modern power networks with increasing renewable energy. As these inverter-based resources prevail, the system inertia decreases, leading to potential frequency instability problems. Grid-forming (GFM) inverter development and applications are gaining significant attention because of their ability to maintain quality power-grid operations. The GFM inverter, acting as a voltage-source-converter, adjusts its output frequency by supplying a portion of the load change, thereby reducing frequency deviations. The virtual synchronous generator (VSG) control mechanism for GFM is implemented in this work. A two-area interconnected power system model emulating an enhanced IEEE 9-Bus system is developed and simulated in MATLAB-Simulink® for analysis. Secondary controllers are applied in each LFC and GFM-loop of the proposed LFC-GFM system, with a magnetotactic-bacteria-optimization (MBO) algorithm simultaneously tuning their parameters. The proposed control strategy’s effectiveness is verified by comparing simulation results with those of the basic LFC system under varying inverter penetration levels. The simulated dynamic responses verify the efficacy of the controlled penetration of the GFM inverter in the proposed LFC-GFM system, with enhanced damping characteristics improving small-signal stability, reducing settling time by 43.46% and frequency deviation by 55.5%.
{"title":"Penetration and control of grid-forming (GFM) inverter in LFC of an enhanced IEEE 9-Bus interconnected power system","authors":"Suriya Sharif, Asadur Rahman","doi":"10.1016/j.compeleceng.2024.109837","DOIUrl":"10.1016/j.compeleceng.2024.109837","url":null,"abstract":"<div><div>Load-frequency-control (LFC) is employed for frequency stability and balanced power flow among control-areas. Inverters are becoming critical assets in modern power networks with increasing renewable energy. As these inverter-based resources prevail, the system inertia decreases, leading to potential frequency instability problems. Grid-forming (GFM) inverter development and applications are gaining significant attention because of their ability to maintain quality power-grid operations. The GFM inverter, acting as a voltage-source-converter, adjusts its output frequency by supplying a portion of the load change, thereby reducing frequency deviations. The virtual synchronous generator (VSG) control mechanism for GFM is implemented in this work. A two-area interconnected power system model emulating an enhanced IEEE 9-Bus system is developed and simulated in MATLAB-Simulink® for analysis. Secondary controllers are applied in each LFC and GFM-loop of the proposed LFC-GFM system, with a magnetotactic-bacteria-optimization (MBO) algorithm simultaneously tuning their parameters. The proposed control strategy’s effectiveness is verified by comparing simulation results with those of the basic LFC system under varying inverter penetration levels. 
The simulated dynamic responses verify the efficacy of the controlled penetration of the GFM inverter in the proposed LFC-GFM system, with enhanced damping characteristics improving small-signal stability, reducing settling time by 43.46% and frequency deviation by 55.5%.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"120 ","pages":"Article 109837"},"PeriodicalIF":4.0,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
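The VSG control that this abstract applies to the GFM inverter emulates a synchronous machine's swing dynamics, 2H·d(Δω)/dt = P_ref − P_e − D·Δω. A minimal forward-Euler sketch of that behavior follows; the inertia, damping, and load-step values are illustrative assumptions, not parameters from the paper, and the MBO-tuned secondary controllers are omitted.

```python
# Minimal discrete-time sketch of the virtual synchronous generator (VSG)
# swing dynamics that a grid-forming inverter emulates:
#   2H * d(dw)/dt = p_ref - p_e - D * dw   (all quantities per-unit)
# Parameter values are illustrative, not taken from the paper.

def vsg_step(dw: float, p_ref: float, p_e: float,
             H: float = 4.0, D: float = 15.0, dt: float = 0.01) -> float:
    """Advance the frequency deviation dw by one forward-Euler step."""
    ddw = (p_ref - p_e - D * dw) / (2.0 * H)
    return dw + dt * ddw

def simulate(load_step: float, steps: int = 2000) -> float:
    """Apply a sudden load increase and return the settled deviation."""
    dw = 0.0
    for _ in range(steps):
        dw = vsg_step(dw, p_ref=0.0, p_e=load_step)
    return dw

# The virtual inertia H slows the initial excursion, while damping D sets
# the quasi-steady deviation: dw settles to -load_step / D.
print(round(simulate(0.1), 4))  # → -0.0067
```

A secondary (integral) controller of the kind the paper tunes with MBO would then drive this residual deviation back to zero.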