Pub Date : 2025-10-23DOI: 10.1109/JSEN.2025.3622306
D. S. Parihar;Ripul Ghosh
Wildlife conflict has become a serious concern due to increasing animal mortality from rail-induced accidents on railway tracks passing through forest regions. Monitoring the movement of wild animals near a railway track remains challenging due to the complex terrain, varied landscapes, and diverse biodiversity. This article presents an optimized hybrid 1-D convolutional neural network–bidirectional long short-term memory (CNN–BiLSTM) architecture to classify wildlife and other ground activities from seismic data generated in a forest environment. The proposed method automatically learns high-level sequential patterns from multidomain features extracted from the principal modes of the variational mode decomposition (VMD) of the seismic signals. The classification results are compared with those of standalone CNN and BiLSTM models; the proposed method outperforms both, achieving an average accuracy of 78.11 ± 4.28% and the lowest false detection rate.
{"title":"A Hybrid CNN–BiLSTM Approach for Wildlife Detection Nearby Railway Track in a Forest","authors":"D. S. Parihar;Ripul Ghosh","doi":"10.1109/JSEN.2025.3622306","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3622306","url":null,"abstract":"Wildlife conflict has become a serious concern due to increasing animal mortality from rail-induced accidents on railway tracks passing through the forest region. Monitoring the movement of wild animals near a railway track remains challenging due to the complex terrain, varied landscapes, and diverse biodiversity. This article presents an optimized hybrid 1-D convolutional neural network–bidirectional long short-term memory (CNN–BiLSTM) architecture to classify wildlife and other ground activities from seismic data generated in a forest environment. The proposed method automatically searches the high-level patterns sequentially from the multidomain features that are extracted from the principal modes of variational mode decomposition (VMD) of seismic signals. Furthermore, the classification results are compared with the standalone CNN and BiLSTM, where the proposed method outperforms with an average accuracy of 78.11 ± 4.28% and the lowest false detection rate.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 23","pages":"43507-43515"},"PeriodicalIF":4.3,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145652175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-20DOI: 10.1109/JSEN.2025.3621436
Shuai Zhang;Yongchao Dong;Shihao Huang;Gaoping Xu;Ruizhou Wang;Han Wang;Mengyu Wang
Whispering gallery mode (WGM) microresonators have shown great potential for precise displacement measurement due to their compact size, ultrahigh sensitivity, and rapid response. However, traditional WGM-based displacement sensors are susceptible to environmental noise interference, resulting in reduced accuracy and prolonged signal demodulation times. To address these limitations, this article proposes a multimodal displacement sensing method for surface nanoscale axial photonics (SNAP) resonators based on deep learning (DL) techniques. A 1-D convolutional neural network (1D-CNN) extracts features from the full spectrum, which significantly improves noise immunity and sensing accuracy while avoiding time-consuming spectral preprocessing. Experimental results show that the average prediction error is as low as 0.05 μm and the maximum error does not exceed 1.4 μm when the 1D-CNN is used for displacement measurements. This work provides an effective solution for fast, highly accurate, and robust displacement sensing.
{"title":"Deep Learning-Based SNAP Microresonator Displacement Sensing Technology","authors":"Shuai Zhang;Yongchao Dong;Shihao Huang;Gaoping Xu;Ruizhou Wang;Han Wang;Mengyu Wang","doi":"10.1109/JSEN.2025.3621436","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3621436","url":null,"abstract":"Whispering gallery mode (WGM) microresonators have shown great potential for precise displacement measurement due to their compact size, ultrahigh sensitivity, and rapid response. However, traditional WGM-based displacement sensors are susceptible to environmental noise interference, resulting in reduced accuracy and too long signal demodulation time. To address these limitations, this article proposes a multimodal displacement sensing method for surface nanoscale axial photonics (SNAPs) resonators based on deep learning (DL) techniques. A 1-D convolutional neural network (1D-CNN) is used to extract features from the full spectrum, which significantly improves the noise immunity and sensing accuracy while avoiding the time-consuming spectral preprocessing. Experimental results show that the average prediction error is as low as 0.05 μm and the maximum error does not exceed 1.4 μm when using the 1D-CNN network for displacement measurements. This work provides an effective solution for fast, highly accurate and robust displacement sensing.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 23","pages":"43500-43506"},"PeriodicalIF":4.3,"publicationDate":"2025-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-17DOI: 10.1109/JSEN.2025.3578608
Xingqi Na;Zhijia Zhang;Huaici Zhao;Shujun Jia
In the field of autonomous driving, 3-D object detection is a crucial technology. Visual sensors are essential in this area and are widely used for 3-D object detection tasks. Recent advancements in monocular 3-D object detection have introduced depth estimation branches within the network architecture. This integration leverages predicted depth information to address the depth perception limitations inherent in monocular sensors, thereby improving detection accuracy. However, many existing methods prioritize lightweight designs at the expense of depth estimation accuracy. To enhance this accuracy, we propose the pseudo depth feature extraction (PDFE) module. This module extracts features by fusing adaptive scale information and simulating disparity, leading to more precise depth predictions. Additionally, we present a hybrid model that combines convolutional neural networks (CNNs) and Transformer architectures. The model employs diverse feature fusion strategies, including depth-guided fusion (DGF) and a Transformer decoder. It also utilizes a convolutional mixture transformer (CMT) encoder to enhance the representation of both local and global features. Building on these innovations, we developed the MonoICT network model and evaluated its performance using the KITTI dataset. Our experimental results indicate that our approach is competitive with recent state-of-the-art methods, outperforming them in the pedestrian and cyclist categories.
{"title":"MonoICT: A Monocular 3-D Object Detection Model Integrating CNN and Transformer","authors":"Xingqi Na;Zhijia Zhang;Huaici Zhao;Shujun Jia","doi":"10.1109/JSEN.2025.3578608","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3578608","url":null,"abstract":"In the field of autonomous driving, 3-D object detection is a crucial technology. Visual sensors are essential in this area and are widely used for 3-D object detection tasks. Recent advancements in monocular 3-D object detection have introduced depth estimation branches within the network architecture. This integration leverages predicted depth information to address the depth perception limitations inherent in monocular sensors, thereby improving detection accuracy. However, many existing methods prioritize lightweight designs at the expense of depth estimation accuracy. To enhance this accuracy, we propose the pseudo depth feature extraction (PDFE) module. This module extracts features by fusing adaptive scale information and simulating disparity, leading to more precise depth predictions. Additionally, we present a hybrid model that combines convolutional neural networks (CNNs) and Transformer architectures. The model employs diverse feature fusion strategies, including depth-guided fusion (DGF) and a Transformer decoder. It also utilizes a convolutional mixture transformer (CMT) encoder to enhance the representation of both local and global features. Building on these innovations, we developed the MonoICT network model and evaluated its performance using the KITTI dataset. Our experimental results indicate that our approach is competitive with recent state-of-the-art methods, outperforming them in the pedestrian and cyclist categories.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40763-40774"},"PeriodicalIF":4.3,"publicationDate":"2025-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-16DOI: 10.1109/jsen.2025.3620154
Jacynthe Francoeur, Raman Kashyap, Samuel Kadoury, Jin Seob Kim, Iulian Iordachita
This paper presents a systematic evaluation of fiber optic shape sensing models for prostate needle interventions using a single needle embedded with a three-fiber optical frequency domain reflectometry (OFDR) sensor. Two reconstruction algorithms were evaluated: (1) Linear Interpolation Models (LIM), a geometric method that directly estimates local curvature and orientation from distributed strain measurements, and (2) the Lie-Group Theoretic Model (LGTM), a physics-informed elastic-rod model that globally fits curvature profiles while accounting for tissue-needle interaction. Using software-defined strain-point selection, both sparse and quasi-distributed sensing configurations were emulated from the same OFDR data. Experiments were conducted in homogeneous and two-layer gel phantoms, ex vivo tissue, and a whole-body cadaveric pig model. While the repeated-measures ANOVA did not detect any significant differences, the Friedman test analysis revealed statistically significant differences in RMSEs between LIM and LGTM (p < 0.05), with LIM outperforming LGTM in the ex vivo tissue scenario. LIM also achieved over 50-fold faster computation (< 1 ms vs. > 40 ms per shape), enabling real-time use. These findings highlight the trade-offs between model complexity, sensing density, computational load, and tissue variability, providing guidance for selecting shape-sensing strategies in clinical and robotic needle interventions.
{"title":"Evaluation of Fiber Optic Shape Sensing Models for Minimally Invasive Prostate Needle Procedures Using OFDR Data.","authors":"Jacynthe Francoeur, Raman Kashyap, Samuel Kadoury, Jin Seob Kim, Iulian Iordachita","doi":"10.1109/jsen.2025.3620154","DOIUrl":"10.1109/jsen.2025.3620154","url":null,"abstract":"<p><p>This paper presents a systematic evaluation of fiber optic shape sensing models for prostate needle interventions using a single needle embedded with a three-fiber optical frequency domain reflectometry (OFDR) sensor. Two reconstruction algorithms were evaluated: (1) Linear Interpolation Models (LIM), a geometric method that directly estimates local curvature and orientation from distributed strain measurements, and (2) the Lie-Group Theoretic Model (LGTM), a physics-informed elastic-rod model that globally fits curvature profiles while accounting for tissue-needle interaction. Using software-defined strain-point selection, both sparse and quasi-distributed sensing configurations were emulated from the same OFDR data. Experiments were conducted in homogeneous and two-layer gel phantoms, <i>ex vivo</i> tissue, and a whole-body cadaveric pig model. While the repeated-measures ANOVA did not detect any significant differences, the Friedman test analysis revealed statistically significant differences in RMSEs between LIM and LGTM (p < 0.05), with LIM outperforming LGTM in the <i>ex vivo</i> tissue scenario. LIM also achieved over 50-fold faster computation (< 1 ms vs. > 40 ms per shape), enabling real-time use. These findings highlight the trade-offs between model complexity, sensing density, computational load, and tissue variability, providing guidance for selecting shape-sensing strategies in clinical and robotic needle interventions.</p>","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":" ","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12588074/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145457292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-16DOI: 10.1109/JSEN.2025.3620015
Simanta Das;Ripudaman Singh
Distributed clustering routing protocols are acknowledged as effective methods for minimizing and balancing energy consumption in wireless sensor networks (WSNs). In these protocols, the random distribution of cluster heads (CHs) results in several isolated sensor nodes (ISNs). In general, an ISN consumes more energy than a cluster member (CM) sensor node (SN); therefore, ISNs located far from the sink can significantly reduce the network lifetime. In this article, we propose a relay cluster head based traffic and energy-aware routing (RCHBTEAR) protocol for heterogeneous WSNs. The RCHBTEAR protocol improves the network lifetime by reducing the energy consumption of SNs. To this end, we consider both the energy and traffic heterogeneities of SNs during the election of CHs, and we select relay CHs (RCHs) from the existing CHs to reduce the energy consumption of ISNs located far from the sink. In addition, we propose an optimized super round (SR) technique that eliminates the need for reclustering in every round. Simulation results show that the RCHBTEAR protocol significantly improves the network lifetime.
{"title":"A Relay Cluster Head Based Traffic and Energy-Aware Routing Protocol for Heterogeneous WSNs","authors":"Simanta Das;Ripudaman Singh","doi":"10.1109/JSEN.2025.3620015","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3620015","url":null,"abstract":"Distributed clustering routing protocols are acknowledged as effective methods for minimizing and balancing energy consumption in wireless sensor networks (WSNs). In these protocols, the random distribution of cluster heads (CHs) results in the presence of several isolated sensor nodes (ISNs). In general, an ISN consumes more energy than a cluster member (CM) sensor node (SN). Therefore, ISNs located far from the sink can significantly reduce the network lifetime. In this article, we propose a relay cluster head based traffic and energy-aware routing (RCHBTEAR) protocol for heterogeneous WSNs. The RCHBTEAR protocol improves the network lifetime by reducing the energy consumption of SNs. For this, we consider both the energy and traffic heterogeneities of SNs during the election of CHs. Furthermore, we select relay CHs (RCHs) from the existing CHs to reduce the energy consumption of ISNs located far from the sink. Furthermore, we propose an optimized super round (SR) technique that eliminates the need for reclustering in every round. Simulation results show that the RCHBTEAR protocol significantly improves the network lifetime.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 22","pages":"42350-42363"},"PeriodicalIF":4.3,"publicationDate":"2025-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145500463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-15DOI: 10.1109/JSEN.2025.3619651
Yang Yang;Yue Song;Xiaochun Shang;Qingshuang Mu;Beichen Li;Yue Lang
Multisensor fusion combines the benefits of each sensor, resulting in thorough and reliable motion recognition even in challenging measurement environments. However, even with the environmental robustness attained through sensor integration, the recognition model continues to face challenges in cross-target scenarios: the model is trained on the measurement dataset, and its performance may decline when applied to unfamiliar subjects. This article highlights this issue and presents a cross-target human motion recognition model for the radar–camera measurement system. We develop a modal-specific semantic interaction mechanism that allows the feature extractor to distinguish different individuals, thereby removing identity information during feature extraction. We also put forward a meta-prototype learning scheme that suitably adjusts the probability distribution to enhance the generalization capability of the recognition model. Notably, the proposed model is implemented without altering the primary network architecture, so there is no additional computational burden during testing. In comparison with five multimodal learning algorithms, we validate the effectiveness of our model, showing that it surpasses previous radar–video-based methods by more than 5% in recognition accuracy. Experiments on public datasets under different dataset conditions verify the generalization ability of our model, and ablation and additional parameter studies provide a thorough examination of each design choice.
{"title":"Human Motion Recognition Based on Videos and Radar Spectrograms in Cross-Target Scenarios","authors":"Yang Yang;Yue Song;Xiaochun Shang;Qingshuang Mu;Beichen Li;Yue Lang","doi":"10.1109/JSEN.2025.3619651","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3619651","url":null,"abstract":"Multisensor fusion combines the benefits of each sensor, resulting in a thorough and reliable motion recognition even in challenging measurement environments. Meanwhile, even with the environmental robustness attained through sensor integration, the recognition model continues to face challenges in cross-target scenarios. In summary, the recognition model is consistently trained using the measurement dataset; however, its performance may decline when applied to unfamiliar subjects. This article highlights this issue and presents a cross-target human motion recognition model for the radar–camera measurement system. We have developed a modal-specific semantic interaction mechanism that allows the feature extractor to recognize different individuals, thereby removing identity information during the feature extraction process. Furthermore, we have also put forward a meta-prototype learning scheme that suitably adjusts the probability distribution to enhance the generalization capability of the recognition model. To emphasize, the proposed model is implemented without altering the primary designed network architecture, indicating that there is no additional computational burden during testing. In comparison with five multimodal learning algorithms, we have validated the effectiveness of our model, highlighting that it surpasses previous radar–video-based methods by more than 5% in recognition accuracy. Through experiments using public datasets under different dataset conditions, we verified the generalization ability of our model. Ablation studies and additional parameter studies have been conducted, enabling a thorough examination of each design.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 22","pages":"42400-42412"},"PeriodicalIF":4.3,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145500454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-13DOI: 10.1109/JSEN.2025.3618944
Hongwei Fan;Jiewen Gao;Xiangang Cao;Xuhui Zhang
Fault diagnosis of the motor, a critical component in industrial systems, plays a vital role in ensuring equipment safety and improving production efficiency. To address the challenge of weak signal characteristics under low rotational speed and load-fluctuation conditions, this article proposes a multimodal feature fusion method that integrates time-domain features with frequency-domain graph features, together with an improved graph convolutional network and graph attention network fusion (GCN-GAT) fault diagnosis model based on graph neural networks (GNNs). First, an adaptive K-nearest neighbor (KNN) graph construction method is introduced to build graph data from frequency-domain information. Then, by improving the basic GNN architecture, a novel GCN-GAT model is developed to extract both local and global spatial features of graph nodes, with residual connections incorporated to improve model expressiveness and training stability. Key time-domain features are selected using a random forest (RF) algorithm, and an attention-based weighted fusion module is designed to adaptively integrate these time-domain features with the frequency-domain graph features, thereby enhancing the model's adaptability to complex operating conditions. Experimental data were collected on a self-built test platform under normal conditions, mechanical faults of the bearing and rotor, and electrical faults of the stator and rotor, with load variations at speeds of 450, 900, and 1350 r/min; data at 2250 r/min serve as a high-speed reference. Results demonstrate that the proposed model achieves high accuracy and robustness in motor fault diagnosis under low rotational speed and load-fluctuation conditions, consistently exceeding 95% accuracy, which confirms the effectiveness and robustness of the proposed fault diagnosis method.
{"title":"A Novel Motor Fault Diagnosis Method Based on Adaptive Frequency-Domain Graph and Time-Domain Feature Fusion With GCN-GAT","authors":"Hongwei Fan;Jiewen Gao;Xiangang Cao;Xuhui Zhang","doi":"10.1109/JSEN.2025.3618944","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3618944","url":null,"abstract":"Fault diagnosis of motor as a critical component in industrial systems plays a vital role in ensuring equipment safety and improving production efficiency. To address the challenge of weak signal characteristics under low rotational speed and load-fluctuation conditions, this article proposes a multi-modal feature fusion method that integrates time-domain features with frequencydomain graph features and an improved graph convolutional network and graph attention network fusion (GCN-GAT) fault diagnosis model based on graph neural networks (GNNs). Firstly, an adaptive K-nearest neighbor (KNN) graph construction method is introduced to build graph data based on frequency-domain information. Then, by improving the basic GNN architecture, a novel GCN-GAT model is developed to extract both local and global spatial features of graph nodes, with residual connections incorporated to improve model expressiveness and training stability. Key time-domain features are selected using a random forest (RF) algorithm, and an attention-based weighted fusion module is designed to adaptively integrate these time-domain features and frequency-domain graph features, thereby enhancing the model's adaptability to complex operating conditions. Experimental data were collected on a self-built test platform under normal conditions, mechanical faults of bearing and rotor, and electrical faults of stator and rotor, with load variations at speeds of 450, 900, and 1350 r/min, while data at 2250 r/min serve as a high rotational speed comparison item. Results demonstrate that the proposed model achieves high accuracy and robustness in motor fault diagnosis under low rotational speed loadfluctuation conditions, consistently exceeding an accuracy of 95%, which confirms the effectiveness and robustness of the proposed fault diagnosis method.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 22","pages":"42334-42349"},"PeriodicalIF":4.3,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145500464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-13DOI: 10.1109/JSEN.2025.3618327
Xiaohu Zheng;Zhouzhi Gu
The blade, as a core component in modern industrial systems, exerts significant influence on the performance of both aeroengines and steam turbines, so its inspection accuracy and efficiency are critical. Blade inspection serves dual purposes: evaluating machining precision for error compensation and enabling failure diagnosis for expedited maintenance. This study proposes an electroforming-based planar coil sensor (Φ3.5 × 1.5 mm) for key-point sampling, optimizing measurement efficiency. The sensor's fabrication methodology is systematically detailed, and its efficacy is validated through numerical simulations and experimental trials. Results demonstrate >95% detection accuracy for defects of varying depths and geometries, with consistent response characteristics. Case studies confirm the sensor's capability to reliably identify internal and external defects using minimal measurement points while sustaining real-time performance.
{"title":"Investigation of a Blade Inspection Method by Using Double Planar Coils","authors":"Xiaohu Zheng;Zhouzhi Gu","doi":"10.1109/JSEN.2025.3618327","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3618327","url":null,"abstract":"The blade, as a core component in modern industrial systems, exerts significant influence on the performance of both aeroengines and steam turbines through its inspection accuracy and efficiency. Blade inspection serves dual purposes: evaluating machining precision for error compensation and enabling failure diagnosis for expedited maintenance. This study proposes an electroforming-based planar coil sensor (<inline-formula> <tex-math>$Phi 3.5 times 1.5~ text{mm}$ </tex-math></inline-formula>) for key-point sampling, optimizing measurement efficiency. The sensor’s fabrication methodology is systematically detailed, and its efficacy is validated through numerical simulations and experimental trials. Results demonstrate >95% detection accuracy for defects of varying depths and geometries, with consistent response characteristics. Case studies confirm the sensor’s capability to reliably identify internal/external defects using minimal measurement points while sustaining realtime performance.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 22","pages":"42327-42333"},"PeriodicalIF":4.3,"publicationDate":"2025-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145500456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-10-08DOI: 10.1109/JSEN.2025.3617319
Kai Zhang;Yundan Liu;Yali Wang;Xiaowen Zhang
In the hot strip rolling mill (HSRM) process, accurate prediction and control of the strip crown are critical for quality assurance. To cope with this challenge, this study designs a real-time strip crown prediction and update system based on a cloud-edge-end collaboration framework. First, this work optimizes the traditional variational autoencoder (VAE) network by refining the loss function structure to improve feature extraction and prediction, tailoring the VAE to the unique requirements of crown prediction. Second, according to the multistand distribution characteristics of the HSRM process, a distributed framework is constructed to enable distributed extraction and fusion of crown-related features, generating predictions based on the fused features. In addition, to adapt to different strip specifications, a global and local update method is proposed to dynamically optimize model parameters, marking a notable advancement in adaptability for real-time industrial applications. Application results from two actual HSRM production lines (2150 and 1580 mm) demonstrate that the proposed method reduces the average prediction error to 2.650 μm. Finally, using a cloud-edge-end prototype system with a 50-ms sampling interval, the system enables real-time prediction and supports online local model updates, significantly improving on traditional methods while enhancing both operational efficiency and quality control.
{"title":"Application of Variational Autoencoder Network to Real-Time Prediction of Steel Crown in the Hot Strip Rolling Mill Process","authors":"Kai Zhang;Yundan Liu;Yali Wang;Xiaowen Zhang","doi":"10.1109/JSEN.2025.3617319","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3617319","url":null,"abstract":"In the hot strip rolling mill (HSRM) process, accurate prediction and control of the strip crown are critical for quality assurance. In order to cope with this challenge, this study designed a real-time prediction and update system of strip crown based on the cloud-edgeend collaboration framework. First, this work optimizes the traditional variational autoencoder (VAE) network by refining the loss function structure to improve feature extraction and prediction, tailoring the VAE to the unique requirements of crown prediction. Second, according to the characteristics of multistand distribution in the HSRM process, a distributed framework is constructed to enable distributed extraction and fusion of crown-related features, generating predictions based on the fused features. In addition, to adapt to different strip specifications, a global and local update method is proposed to dynamically optimize model parameters, marking a notable advancement in adaptability for real-time industrial applications. The application results from two actual HSRM production lines (2150 and 1580 mm) demonstrate that the proposed method can decrease the prediction error to 2.650 <inline-formula> <tex-math>$mu$ </tex-math></inline-formula>m on average. Finally, by using a cloud-edge-end prototype system with a 50-ms sampling interval, the system enables real-time prediction and supports online local model updates, significantly improving traditional methods while enhancing both operational efficiency and quality control.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 22","pages":"42389-42399"},"PeriodicalIF":4.3,"publicationDate":"2025-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145500483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}