Deep Learning-Based Multiswitch Open-Circuit Fault Diagnosis for Active Front-End Rectifiers Using Multisensor Signals
Sourabh Ghosh;Ehtesham Hassan;Asheesh Kumar Singh;Sri Niwas Singh
Pub Date: 2024-12-30 | DOI: 10.1109/LSENS.2024.3524033
IEEE Sensors Letters, vol. 9, no. 2, pp. 1-4
Open-circuit switch faults (OCSFs) in power semiconductor switches are caused by wire-bonding failures, gate-driver malfunction, surge voltage/current, electromagnetic interference, and cosmic radiation. The signal deviations under OCSFs are not excessively large, but prolonged OCSFs risk cascading system failures. This letter presents a comprehensive analysis of deep neural network (DNN) architectures, such as long short-term memory (LSTM) and convolutional neural network (CNN) models, for diagnosing multiclass OCSFs in three-phase active front-end rectifiers (TP-AFRs). A novel multisensor time-series sequence (MTSS) dataset is acquired at 500 Hz, comprising 624 observations from 19 sensor signals covering single-, double-, and triple-switch OCSFs. The class-overlap issue in the MTSS dataset is visualized using t-SNE, and initial experiments with a support vector machine (SVM) yielded the highest test accuracy of 93%, outperforming k-nearest neighbor, artificial neural network, and decision tree classifiers. Further, our investigations revealed that an architecture with a two-layer CNN, a one-layer LSTM, and one fully connected layer achieves a competitive test accuracy of 95.03%, an improvement of 2.03% over the SVM classifier and 7.03% over the one-layer LSTM network. These findings demonstrate the potential of this approach for enhancing the reliability of TP-AFRs through the direct application of downsampled raw electrical signals.
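The letter feeds the same downsampled signals to both a classical SVM and sequence models. A minimal sketch of how the MTSS layout described above (624 observations, 19 sensor signals at 500 Hz) would be shaped for each model family; the 1-second window length and all array names are assumptions for illustration, not the authors' exact preprocessing:

```python
import numpy as np

# Hypothetical layout of the MTSS dataset from the letter:
# 624 observations, 19 sensor channels, downsampled to 500 Hz.
FS = 500          # sampling rate in Hz (from the letter)
N_OBS = 624       # observations in the MTSS dataset
N_CH = 19         # sensor signals
WIN = FS * 1      # assumed 1-second window -> 500 samples per channel

rng = np.random.default_rng(0)
mtss = rng.standard_normal((N_OBS, N_CH, WIN))  # (obs, channels, time)

# A kernel SVM consumes flat feature vectors, so each observation is flattened:
svm_features = mtss.reshape(N_OBS, -1)

# A CNN/LSTM consumes the sequence directly, typically as (obs, time, channels):
seq_input = mtss.transpose(0, 2, 1)

print(svm_features.shape)  # (624, 9500)
print(seq_input.shape)     # (624, 500, 19)
```

The shape contrast shows why the sequence models can exploit temporal structure that the flattened SVM representation discards.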
Pub Date: 2024-12-26 | DOI: 10.1109/LSENS.2024.3523334
Rahul Mishra;Aishwarya Soni;Ayush Jain;Priyanka Lalwani;Raj Shah
Recent years have witnessed significant growth in sensor-based human locomotion activity recognition, driven by the availability of low-cost, low-power, and compact sensors and microcontroller units. While significant research has been conducted on human locomotion activity recognition using inertial sensors, most prior studies rely heavily on data from all axes of the sensors, and the role of the dominant axes in reducing training and inference time has been largely overlooked. This letter presents a novel approach, dominant axes-human activity recognition, which identifies the dominant axes of inertial sensors to recognize human locomotion activities effectively. The proposed approach reduces both training and inference time while still achieving substantial accuracy. The approach begins with data collection through dedicated smartphone applications and sensory probes. The collected sensory data then undergo preprocessing and annotation for model training. During the training phase, cross-validation is performed to determine the dominant axes, leveraging the orientation information within the dataset. Finally, this work conducts experiments on the collected dataset to assess the approach's efficacy in terms of accuracy and training time.
Optimizing Activity Recognition Through Dominant Axis Identification in Inertial Sensors
IEEE Sensors Letters, vol. 9, no. 2, pp. 1-4
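The letter determines dominant axes via cross-validation using the dataset's orientation information. As a simplified, hypothetical stand-in for that procedure, the sketch below ranks the axes of a synthetic tri-axial accelerometer by mean signal power and keeps the strongest one; the signals and the energy criterion are illustrative assumptions, not the authors' method:

```python
import numpy as np

# Synthetic tri-axial accelerometer stream: one axis carries the dominant
# motion component (e.g., walking cadence), the others mostly noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 5000)
acc = np.stack([
    0.2 * rng.standard_normal(t.size),   # x: mostly noise
    2.0 * np.sin(2 * np.pi * 2 * t),     # y: strong 2 Hz motion component
    0.5 * np.sin(2 * np.pi * 0.5 * t),   # z: weak slow component
])

energy = (acc ** 2).mean(axis=1)     # mean power per axis
dominant = int(np.argmax(energy))    # index of the dominant axis

print(dominant)  # 1 -> the y axis carries most of the motion energy
```

Feeding only the dominant axis to the classifier shrinks the input dimensionality, which is the mechanism behind the training- and inference-time savings the letter reports.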
Most EEG-based biometric systems rely on either convolutional neural networks (CNNs) or graph convolutional neural networks (GCNNs) for personal authentication, potentially overlooking the limitations of each approach. To address this, we propose EEG-BBNet, a hybrid network that combines CNNs and GCNNs. EEG-BBNet leverages the CNN's capability for automatic feature extraction and the GCNN's ability to learn connectivity patterns between EEG electrodes through a graph representation. We evaluate its performance against solely CNN-based and graph-based models across three brain–computer interface (BCI) tasks, focusing on daily motor and sensory activities. The results show that while EEG-BBNet with the Rho index functional connectivity metric outperforms graph-based models, it initially lags behind CNN-based models. However, with additional fine-tuning, EEG-BBNet surpasses CNN-based models, achieving a correct recognition rate of approximately 90%. This improvement enables EEG-BBNet to adapt its learning in new sessions and to acquire different domain knowledge across various BCI tasks (e.g., motor imagery to steady-state visually evoked potentials), demonstrating promise for practical authentication.
EEG-BBNet: A Hybrid Framework for Brain Biometric Using Graph Connectivity
Payongkit Lakhan;Nannapas Banluesombatkul;Natchaya Sricom;Phattarapong Sawangjai;Soravitt Sangnark;Tohru Yagi;Theerawit Wilaiprasitporn;Wanumaidah Saengmolee;Tulaya Limpiti
Pub Date: 2024-12-26 | DOI: 10.1109/LSENS.2024.3522981
IEEE Sensors Letters, vol. 9, no. 2, pp. 1-4
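The graph branch of a hybrid like EEG-BBNet needs a symmetric electrode-connectivity adjacency matrix. The letter uses the Rho index for functional connectivity; as an illustrative stand-in only, the sketch below builds the same kind of graph from absolute Pearson correlation between synthetic EEG channels (the channel count, signals, and correlation metric are assumptions):

```python
import numpy as np

# Synthetic multichannel EEG with a shared underlying source, so that
# channels exhibit nontrivial pairwise correlation.
rng = np.random.default_rng(2)
n_electrodes, n_samples = 8, 1000
base = rng.standard_normal(n_samples)
eeg = rng.standard_normal((n_electrodes, n_samples)) + 0.8 * base

# Adjacency from absolute Pearson correlation: symmetric, values in [0, 1].
adj = np.abs(np.corrcoef(eeg))
np.fill_diagonal(adj, 0.0)  # no self-loops in the electrode graph

print(adj.shape)                      # (8, 8)
print(bool(np.allclose(adj, adj.T)))  # True: undirected connectivity graph
```

A GCNN layer would then propagate per-electrode features along these weighted edges, which is how connectivity structure enters the model alongside the CNN's learned features.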
Rate-integrating gyroscopes provide significant advantages in temperature stability and bandwidth. However, their performance is not fully realized due to the X–Y