Pub Date: 2024-11-17 | DOI: 10.1016/j.compeleceng.2024.109865
Kai Liu, Like Fan, Guangbo Nie, Kai Wang, Bo Gao, Jianmin Fu, Junbin Mu, Guangning Wu
The identification of partial discharge (PD) in cable terminals is crucial for the safe operation of trains. However, the complexity of the operational environment and the similarity of PD signals make defect identification challenging. Consequently, this paper proposes a Time-domain Local Correlation Entropy Image (T-LCEI) transformation method, which constructs an entropy matrix to convert raw PD signals into images. These images embed feature and bandwidth information from the original PD data, significantly enhancing the ability to differentiate between similar PD signals. Furthermore, the method employs a Dual Attention Convolutional Neural Network (DA_CNN) to classify the correlation entropy images effectively. Experimental results demonstrate that this approach achieves an average classification accuracy of 99.69% across four typical PD defect datasets, with a testing accuracy of 97.75% in practical scenarios. Compared to existing PD detection methods, T-LCEI offers significant improvements in effectiveness and discriminability, and the integration of DA_CNN further enhances recognition accuracy. The study demonstrates that the proposed method excels in PD defect identification, providing reliable technical support for on-site fault detection and maintenance and thereby improving the operational safety of cable terminals.
Title: Time domain correlation entropy image conversion: A new method for fault diagnosis of vehicle-mounted cable terminals (Computers & Electrical Engineering, Vol. 120, Article 109865)
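The abstract does not specify the exact T-LCEI construction, so the following is only a hedged sketch of the general idea: a matrix whose pixel (i, j) is an entropy value derived from the correlation of two local time-domain windows. The function name `local_correlation_entropy_image`, the window sizes, and the binary-entropy transform are all illustrative assumptions, not the authors' algorithm.

```python
import math

def local_correlation_entropy_image(signal, win=32, step=16):
    # Hypothetical sketch: split the signal into overlapping local windows.
    windows = [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]
    n = len(windows)

    def corr(a, b):
        # Pearson correlation of two equal-length windows.
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = math.sqrt(sum((x - ma) ** 2 for x in a))
        db = math.sqrt(sum((y - mb) ** 2 for y in b))
        return num / (da * db) if da and db else 0.0

    def entropy(p):
        # Binary (Shannon) entropy of |correlation|: near-0 or near-1
        # similarity maps to low entropy, ambiguous similarity to high.
        p = min(max(abs(p), 1e-12), 1 - 1e-12)
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    # Pixel (i, j) of the "image" is the correlation entropy of windows i, j.
    return [[entropy(corr(windows[i], windows[j])) for j in range(n)]
            for i in range(n)]
```

The resulting square matrix can then be rendered as a grayscale image and fed to a CNN classifier, which is the role DA_CNN plays in the paper.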
Pub Date: 2024-11-17 | DOI: 10.1016/j.compeleceng.2024.109869
Hamed Danandeh Hesar, Amin Danandeh Hesar
Model-based Bayesian methods for denoising electrocardiogram (ECG) signals have demonstrated promise in preserving ECG morphology and diagnostic properties. However, their performance relies heavily on accurately selecting model parameters, particularly the state and measurement noise covariance matrices, and some of these frameworks involve computationally intensive operations and loops for state estimation. To address these problems, this study proposes a novel approach to improve the performance of several model-based Bayesian frameworks, including the extended Kalman filter/smoother (EKF/EKS), unscented Kalman filter/smoother (UKF/UKS), cubature Kalman filter/smoother (CKF/CKS), and ensemble Kalman filter/smoother (EnKF/EnKS), specifically for ECG denoising tasks. Our methodology dynamically adjusts the state and measurement covariance matrices of the filters using outputs from nonlinear Kalman-based filtering methods, with a unique approach developed for each filter based on its theoretical foundations. Additionally, we introduce two distinct strategies for updating these matrices, depending on whether the noise in the signals is stationary or nonstationary. Furthermore, we propose a computationally efficient method that significantly reduces the calculation time required to implement the CKF/CKS, UKF/UKS, and EnKF/EnKS frameworks while maintaining their denoising performance; it can achieve a 50% reduction in computation time, effectively making them twice as fast as their original implementations. We thoroughly evaluated our approach by comparing denoising performance between the original filters and their adaptive versions, as well as against the state-of-the-art marginalized particle extended Kalman filter (MP-EKF), using various normal ECG segments obtained from different records. The results demonstrate that the adaptive adjustment of covariance matrices significantly improves the denoising performance of nonlinear Kalman-based frameworks in both stationary and nonstationary environments, achieving performance comparable to that of the MP-EKF framework.
Title: Efficient Bayesian ECG denoising using adaptive covariance estimation and nonlinear Kalman Filtering (Computers & Electrical Engineering, Vol. 120, Article 109869)
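The abstract does not give the per-filter covariance-update rules, so here is only a minimal, hedged illustration of the general idea of adapting a noise covariance from filter outputs: a scalar Kalman filter whose measurement-noise variance R is re-estimated from the recent innovation sequence. This is the classic innovation-based adaptation, not the authors' method, and `adaptive_kalman_1d` with its parameters is an assumption for illustration.

```python
def adaptive_kalman_1d(zs, q=1e-4, r0=1.0, window=20):
    # Scalar random-walk Kalman filter whose measurement-noise variance R
    # is re-estimated from the recent innovations (illustrative only).
    x, p, r = zs[0], 1.0, r0
    innovations, out = [], []
    for z in zs:
        p += q                      # predict under a random-walk state model
        nu = z - x                  # innovation (pre-fit residual)
        innovations.append(nu)
        if len(innovations) > window:
            innovations.pop(0)
            # Innovation-based estimate: E[nu^2] ~ P + R, so R ~ mean(nu^2) - P.
            r = max(sum(v * v for v in innovations) / len(innovations) - p, 1e-8)
        k = p / (p + r)             # Kalman gain
        x += k * nu                 # update state estimate
        p *= (1 - k)                # update error variance
        out.append(x)
    return out
```

The same principle, estimating noise statistics from the filter's own residuals rather than fixing them a priori, is what the adaptive EKF/UKF/CKF/EnKF variants generalize to the multivariate nonlinear case.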
Pub Date: 2024-11-16 | DOI: 10.1016/j.compeleceng.2024.109850
Puneet Kumar Pal, Dhirendra Kumar
Chaos has practical significance in various domains, including the stock market, quantum physics, communication networks, disease diagnosis, cosmic events, and digital data security. Chaotic maps are widely utilised for encrypting multimedia data for secure communication owing to their sensitivity to initial conditions and unpredictability. However, some chaotic maps suffer from weak chaotic dynamics that can leave them vulnerable to certain types of attacks, limiting their effectiveness in sensitive applications such as military communications and personal-data protection. This study proposes a novel nonlinear discrete chaotic map termed the coupled Kaplan–Yorke-Logistic map. By coupling two chaotic maps, the Kaplan–Yorke map and the Logistic map, we significantly enhance key features such as the length of chaotic orbits, the output distribution, and the security of the chaotic sequences. An empirical assessment of the proposed map in terms of several measures, such as bifurcation diagrams, phase diagrams, Lyapunov exponent analysis, permutation entropy, and sample entropy, shows promising ergodicity and a diverse range of hyperchaotic behaviours compared with several recent chaotic maps. Consequently, the proposed map is utilised to develop an efficient image encryption algorithm. The encryption algorithm performs confusion and diffusion simultaneously, aiming to significantly reduce the computation time of the encryption and decryption processes for real-time applications without compromising security. A thorough assessment of the proposed image encryption algorithm is performed on a variety of image datasets utilising multiple cryptanalysis methods, including key space analysis, information entropy, correlation coefficient evaluation, differential attack, key sensitivity testing, histogram analysis, computational time analysis, and occlusion and noise attacks.
Comparative analysis with the state-of-the-art methods indicates the superiority of the proposed algorithm.
Title: The coupled Kaplan–Yorke-Logistic map for the image encryption applications (Computers & Electrical Engineering, Vol. 120, Article 109850)
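As a self-contained illustration of the Lyapunov-exponent analysis mentioned above, applied to the plain Logistic map x → r·x·(1−x) rather than to the coupled Kaplan–Yorke-Logistic map (whose equations the abstract does not give): a positive exponent indicates chaos, which is the property the encryption scheme relies on.

```python
import math

def logistic_lyapunov(r, x0=0.3, n=20000, burn=1000):
    # Estimate the largest Lyapunov exponent of x -> r*x*(1 - x) by
    # averaging ln|f'(x)| = ln|r*(1 - 2x)| along the orbit.
    x = x0
    for _ in range(burn):           # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)) + 1e-300)  # guard against log(0)
        x = r * x * (1 - x)
    return acc / n
```

For r = 4 the exponent approaches ln 2 > 0 (fully chaotic), while for r = 2.5 it is negative (a stable fixed point); coupling two maps, as the paper does, aims to enlarge the parameter region where the exponent stays positive.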
Pub Date: 2024-11-16 | DOI: 10.1016/j.compeleceng.2024.109879
Seyed Mohammad Rahimpour, Mohammad Kazemi, Payman Moallem, Mehran Safayani
Video anomaly detection is the identification of outliers that deviate from the norm within a series of videos. The spatio-temporal dependencies and unstructured nature of videos make the task complicated. Many existing methods cannot detect anomalies accurately because they fail to learn effectively from the training data and to capture dependencies between distant frames. To this end, we propose a model that uses a pre-trained vision transformer and an ensemble of deep convolutional auto-encoders to capture dependencies between distant frames. Moreover, AdaBoost training is used to ensure the model learns every sample in the data properly. To evaluate the method, we conducted experiments on four publicly available video anomaly detection datasets, namely the CUHK Avenue dataset, ShanghaiTech, UCSD Ped1, and UCSD Ped2, achieving AUC scores of 93.4%, 78.8%, 93.5%, and 95.7%, respectively. The experimental results demonstrate the flexibility and generalizability of the proposed method, which stem from the robust features extracted by the pre-trained vision transformer and the efficient learning of data representations enabled by the AdaBoost training strategy.
Title: Video anomaly detection using transformers and ensemble of convolutional auto-encoders (Computers & Electrical Engineering, Vol. 120, Article 109879)
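The abstract states that AdaBoost training ensures every sample is learned properly. A hedged sketch of the underlying reweighting idea, treating samples with high reconstruction error as "hard" and upweighting them for the next training round; `boost_sample_weights`, the threshold rule, and the round count are illustrative assumptions, not the paper's procedure.

```python
import math

def boost_sample_weights(errors, threshold, rounds=3):
    # AdaBoost-style reweighting: samples whose reconstruction error exceeds
    # `threshold` count as "hard" and receive more weight next round.
    n = len(errors)
    w = [1.0 / n] * n
    for _ in range(rounds):
        miss = [e > threshold for e in errors]
        eps = sum(wi for wi, m in zip(w, miss) if m)   # weighted "error rate"
        if eps <= 0.0 or eps >= 0.5:
            break                                      # nothing left to boost
        alpha = 0.5 * math.log((1 - eps) / eps)
        w = [wi * math.exp(alpha if m else -alpha) for wi, m in zip(w, miss)]
        s = sum(w)
        w = [wi / s for wi in w]                       # renormalize
    return w
```

In an autoencoder-ensemble setting, such weights would bias the next member's training toward the frames the current ensemble reconstructs poorly.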
Pub Date: 2024-11-15 | DOI: 10.1016/j.compeleceng.2024.109851
Hui Tian, James Park, Yidong Li, Haibo Zhang
Title: Introduction to the special section on High-Performance Computing (VSI-pdcat6) (Computers & Electrical Engineering, Vol. 120, Article 109851)
Pub Date: 2024-11-15 | DOI: 10.1016/j.compeleceng.2024.109812
Amir Masoud Rahmani, Shtwai Alsubai, Abed Alanazi, Abdullah Alqahtani, Monji Mohamed Zaidi, Mehdi Hosseinzadeh
Mobile Edge Computing (MEC) and Federated Learning (FL) have recently attracted considerable interest for their potential applications across diverse domains. MEC is a distributed computing architecture that utilizes computational capabilities near the network edge, enabling quicker data processing and minimizing latency. FL, in turn, is a machine learning (ML) method that allows multiple participants to collectively train models without revealing their raw data, effectively addressing security and privacy concerns. This systematic review explores the core principles, architectures, and applications of FL within MEC and vice versa, providing a comprehensive analysis of these technologies. The study emphasizes the unique characteristics, advantages, and drawbacks of FL and MEC, highlighting their attributes and limitations. It examines the architectures of both technologies and showcases the state-of-the-art methods and tools employed for their implementation. Beyond the foundational principles, the review delves into the internal mechanisms of FL and MEC, offering an in-depth understanding of their architectures and of the fundamental processes that enable their operation. Finally, concluding remarks and future research directions are provided.
Title: The role of mobile edge computing in advancing federated learning algorithms and techniques: A systematic review of applications, challenges, and future directions (Computers & Electrical Engineering, Vol. 120, Article 109812)
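FL's canonical aggregation step, Federated Averaging (FedAvg), illustrates the "train collectively without sharing raw data" principle the review describes: only model parameters leave each client, and the server averages them weighted by local dataset size. A minimal sketch using flat parameter lists for simplicity (not tied to any specific framework in the review):

```python
def fedavg(client_weights, client_sizes):
    # Weighted average of client parameter vectors, weight = local dataset size.
    total = sum(client_sizes)
    dim = len(client_weights[0])
    agg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            agg[i] += (n / total) * w[i]
    return agg
```

In an MEC deployment, this aggregation would run on an edge server close to the clients, which is precisely where the latency benefits discussed in the review come from.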
Pub Date: 2024-11-15 | DOI: 10.1016/j.compeleceng.2024.109858
Hamed Abderrahime Bouzid, Mohammed Belkheir, Allel Mokaddem, Mehdi Rouissat, Djamila Ziani
The terahertz (THz) frequency band (0.1–10 THz) has drawn considerable attention due to the growing demand for higher resolutions, lower latency, faster data rates, and wider bandwidth in 6G technologies. This range offers data speeds exceeding tens of gigabits per second, large bandwidth, high spectral resolution, and non-ionizing characteristics. THz signals are promising; however, they are affected by attenuation, path losses, and atmospheric conditions, necessitating specialised antenna designs. This work presents a 300 GHz rectangular microstrip patch antenna with graphene as the patch material and Liquid Crystal Polymer (LCP) as the substrate. Photonic band gap (PBG) substrates are used to incorporate cuboid and cylindrical air gaps in square and triangular lattices, thereby improving performance. The highest performance is obtained with cylindrical air gaps in a triangular-lattice PBG substrate, which yields a bandwidth of 29.56 GHz, a return loss of −48.12 dB, a gain of 10.4 dBi, a directivity of 10.8 dBi, and a radiation efficiency of 91%. These results establish the proposed antennas as highly effective for broadband, high-speed THz applications, particularly in 6G systems such as advanced sensing, ultra-fast device-to-device (D2D) communications, beam steering, and non-invasive imaging.
Title: Enhancing the performance of graphene and LCP 1x2 rectangular microstrip antenna arrays for terahertz applications using photonic band gap structures (Computers & Electrical Engineering, Vol. 120, Article 109858)
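To interpret figures like the −48.12 dB return loss above, the standard conversions to reflection-coefficient magnitude and VSWR are useful. These are textbook RF relations, not results from the paper:

```python
def return_loss_to_vswr(rl_db):
    # RL(dB) = -20*log10(|Gamma|)  =>  |Gamma| = 10^(-RL/20)
    # VSWR = (1 + |Gamma|) / (1 - |Gamma|)
    gamma = 10 ** (-rl_db / 20)
    return gamma, (1 + gamma) / (1 - gamma)
```

A 48.12 dB return loss corresponds to |Γ| ≈ 0.004 and a VSWR very close to 1, i.e. an almost perfectly matched antenna at resonance.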
Pub Date: 2024-11-14 | DOI: 10.1016/j.compeleceng.2024.109846
Pranali Dandekar, Shailendra S. Aote, Abhijeet Raipurkar
Low-resolution face recognition (LRFR) is an active research area, as it is widely used in forensics and surveillance systems, and much effort has gone into improving its performance since its inception. Recent deep neural network models have demonstrated outstanding face recognition performance on various face datasets with challenges such as variations in pose, illumination, and occlusion, surpassing human performance on these tasks. However, the accuracy of LRFR methods remains a problem. There is no fixed definition of a low-resolution (LR) image; most researchers have considered images below 32 × 32 pixels to be low resolution. This paper discusses the various methods and algorithms used to improve the performance of LRFR. We present a thorough study of all the processes involved in face recognition, including face detection, feature mapping, super-resolution, and recognition itself, covering methodology along with datasets and performance measures. We also summarize the datasets used in LRFR, along with the source codes used for LRFR experimentation, and compare the accuracy of different LRFR methods on different datasets. Finally, challenges and research directions are presented to guide further LRFR research.
Title: Low-resolution face recognition: Review, challenges and research directions (Computers & Electrical Engineering, Vol. 120, Article 109846)
Pub Date: 2024-11-14 | DOI: 10.1016/j.compeleceng.2024.109867
Mohammed Rhiat, Mohammed Karrouchi, Ilias Atmane, Abdellah Touhafi, Badre Bossoufi, Mishari Metab Almalki, Thamer A.H. Alghamdi, Kamal Hirech
This paper investigates the integration of photovoltaic (PV) energy systems with a DC power converter based on a boost converter designed to optimize the power delivered to resistive loads, such as heating elements for heat generation applications. Emphasizing the role of boost converters in raising the output voltage of PV systems to supply resistive loads efficiently, the performance and efficiency of this integration are evaluated. The work also addresses the basic principles, control strategies, and efficiency considerations associated with combining solar PV systems with synchronous boost converters for resistive load applications. The results demonstrate a peak efficiency of 97%, which decreases to 90.5%, 87.5%, and 84% for resistive loads of 10 Ω, 15 Ω, and 20 Ω, respectively, at 80 W and a switching frequency of 60 kHz, indicating that efficiency declines as the resistive load increases. Additionally, the results show a notable efficiency increase of 4.6% obtained simply by raising the switching frequency from 20 kHz to 100 kHz. Through extensive testing, we have substantiated the effectiveness of synchronous boost converters in optimizing power output and enhancing the overall performance of PV systems supplying resistive heating elements.
{"title":"Maximizing solar energy efficiency: Optimized DC power conversion for resistive loads","authors":"Mohammed Rhiat , Mohammed Karrouchi , Ilias Atmane , Abdellah Touhafi , Badre Bossoufi , Mishari Metab Almalki , Thamer A.H. Alghamdi , Kamal Hirech","doi":"10.1016/j.compeleceng.2024.109867","DOIUrl":"10.1016/j.compeleceng.2024.109867","url":null,"abstract":"<div><div>This paper investigates the integration of photovoltaic (PV) energy systems with a DC power converter based on a boost converter designed to optimize the power output for resistive loads such as heating elements in heat-generation applications. Emphasizing the role of boost converters in raising the output voltage of PV systems to supply resistive loads efficiently, the performance and efficiency of this integration are evaluated. The work also addresses the basic principles, control strategies, and efficiency considerations associated with combining solar PV systems with synchronous boost converters for resistive-load applications. The results demonstrate a peak efficiency of 97%, which decreases to 90.5%, 87.5%, and 84% for resistive loads of 10 Ω, 15 Ω, and 20 Ω, respectively, at 80 W and a switching frequency of 60 kHz. This indicates that efficiency declines as the value of the resistive load increases. Additionally, the results show a notable efficiency gain of 4.6% simply by raising the switching frequency from 20 kHz to 100 kHz. Through extensive testing, we have substantiated the effectiveness of employing synchronous boost converters to optimize power output and enhance the overall performance of PV systems supplying resistive heating elements.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"120 ","pages":"Article 109867"},"PeriodicalIF":4.0,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
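The load-dependence reported above follows from the ideal boost relations: for a fixed output power, a larger load resistance forces a higher output voltage and duty cycle, which tends to increase conduction and switching losses. A minimal sketch (not from the paper; the component values below are hypothetical) of the ideal continuous-conduction-mode operating point for a resistive load:

```python
# Ideal boost converter relations for a resistive load (CCM, lossless).
# Illustrative only: the voltages and resistance below are hypothetical,
# not values reported in the paper.

def boost_operating_point(v_in: float, v_out: float, r_load: float):
    """Return (duty_cycle, i_out, p_out) for an ideal boost converter."""
    if not 0 < v_in < v_out:
        raise ValueError("boost requires 0 < Vin < Vout")
    duty = 1.0 - v_in / v_out    # from Vout = Vin / (1 - D)
    i_out = v_out / r_load       # Ohm's law at the output
    p_out = v_out ** 2 / r_load  # power delivered to the resistive load
    return duty, i_out, p_out

# Example: a 20 V PV string stepped up to 40 V into a 20-ohm heating element.
d, i, p = boost_operating_point(20.0, 40.0, 20.0)
print(f"D = {d:.2f}, Iout = {i:.2f} A, Pout = {p:.1f} W")
# -> D = 0.50, Iout = 2.00 A, Pout = 80.0 W
```

Note that for a fixed output power, doubling the load resistance requires the output voltage (and hence the duty cycle) to rise, which is one plausible reason the measured efficiency falls with larger loads.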
Quantum computer attacks could easily jeopardize the total security of currently employed encryption systems. As a result, there is an ongoing collaborative effort to design post-quantum cryptography (PQC) algorithms, and many works on this topic have been published. In this paper, five Key Encapsulation Mechanisms (KEMs) for PQC, namely the one finalist and the four fourth-round candidates selected by the National Institute of Standards and Technology (NIST), are reviewed and compared, along with their software and hardware implementations. Because of the high computational complexity of PQC algorithms, real-time implementation requires substantial hardware resources, particularly multipliers. The traditional performance metrics of each hardware implementation are also compared, such as area, delay, and power, and in particular the memory requirements, resource usage, lookup tables (LUTs), registers, flip-flops, maximum operating frequency, and number of cycles for encapsulation and decapsulation, to quantify and highlight the features of each algorithm. This survey discusses a variety of PQC algorithms that can be used to meet a variety of application needs, including accuracy, hardware resource usage, and throughput. It also informs researchers and engineers about the most recent advances in PQC research in order to identify research problems and improve designs for efficient PQC algorithms.
{"title":"Evaluation of hardware and software implementations for NIST finalist and fourth-round post-quantum cryptography KEMs","authors":"Mamatha Bandaru , Sudha Ellison Mathe , Chirawat Wattanapanich","doi":"10.1016/j.compeleceng.2024.109826","DOIUrl":"10.1016/j.compeleceng.2024.109826","url":null,"abstract":"<div><div>Quantum computer attacks could easily jeopardize the total security of currently employed encryption systems. As a result, there is an ongoing collaborative effort to design post-quantum cryptography (PQC) algorithms, and many works on this topic have been published. In this paper, five Key Encapsulation Mechanisms (KEMs) for PQC, namely the one finalist and the four fourth-round candidates selected by the National Institute of Standards and Technology (NIST), are reviewed and compared, along with their software and hardware implementations. Because of the high computational complexity of PQC algorithms, real-time implementation requires substantial hardware resources, particularly multipliers. The traditional performance metrics of each hardware implementation are also compared, such as area, delay, and power, and in particular the memory requirements, resource usage, lookup tables (LUTs), registers, flip-flops, maximum operating frequency, and number of cycles for encapsulation and decapsulation, to quantify and highlight the features of each algorithm. This survey discusses a variety of PQC algorithms that can be used to meet a variety of application needs, including accuracy, hardware resource usage, and throughput. It also informs researchers and engineers about the most recent advances in PQC research in order to identify research problems and improve designs for efficient PQC algorithms.</div></div>","PeriodicalId":50630,"journal":{"name":"Computers & Electrical Engineering","volume":"120 ","pages":"Article 109826"},"PeriodicalIF":4.0,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142655443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
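The encapsulation and decapsulation cycle counts compared in the survey all measure the same three-operation KEM interface (key generation, encapsulation, decapsulation) that every NIST candidate exposes. A minimal sketch of that interface, using a toy classical Diffie-Hellman instantiation (deliberately small, insecure parameters; NOT one of the post-quantum schemes) purely to illustrate the flow:

```python
# Generic KEM flow: keygen -> encapsulate -> decapsulate.
# Toy Diffie-Hellman instantiation for illustration only: it is classical
# (not post-quantum) and the parameters below are insecure by design.
import hashlib
import secrets

P = 2 ** 127 - 1  # a Mersenne prime; far too small for real use
G = 3

def keygen():
    """Generate a (secret key, public key) pair."""
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return sk, pk

def encapsulate(pk):
    """Return (ciphertext, shared secret) for the holder of pk."""
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)  # ciphertext is an ephemeral public value
    ss = hashlib.sha256(pow(pk, r, P).to_bytes(16, "big")).digest()
    return ct, ss

def decapsulate(sk, ct):
    """Recover the shared secret from the ciphertext using sk."""
    return hashlib.sha256(pow(ct, sk, P).to_bytes(16, "big")).digest()

sk, pk = keygen()
ct, ss_sender = encapsulate(pk)
ss_receiver = decapsulate(sk, ct)
assert ss_sender == ss_receiver  # both sides derive the same 32-byte secret
```

In hardware evaluations of the NIST schemes, it is exactly these three operations whose cycle counts, LUT/register usage, and maximum frequency are profiled; the lattice- and code-based candidates replace the modular exponentiations above with polynomial or matrix arithmetic, which is why multiplier count dominates resource usage.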