An Adaptive Transition Process Recognition and Modeling Framework for Soft Sensor in Complex Engineering Systems
Pub Date: 2025-09-29 | DOI: 10.1109/JSEN.2025.3613587
Chao Ren;Zhen Liu;Zhen Ma;Cunsong Wang
In industrial production processes, variations in operating conditions or task requirements often lead to multimodal characteristics in process data. Mode identification is, therefore, essential for effective process monitoring and soft sensing. However, conventional strategies usually overlook transient modes, which hinders the adaptability of soft sensing models during transitional phases. To address these challenges, this article proposes an adaptive transition process recognition and modeling framework (ATPRMF). The framework consists of two key components: first, a Kullback–Leibler divergence (KLD)-based transitional mode identification method that adaptively detects the onset and termination of transitional modes by analyzing distributional differences; and second, a dynamic model fusion mechanism that integrates predictions from multiple models based on mode credibility, adapting to gradual distributional shifts and ensuring reliable predictions. Experimental validation on a real-world ball mill system and the benchmark Tennessee Eastman (TE) process demonstrates that the proposed framework significantly improves prediction accuracy and robustness compared to conventional approaches.
{"title":"An Adaptive Transition Process Recognition and Modeling Framework for Soft Sensor in Complex Engineering Systems","authors":"Chao Ren;Zhen Liu;Zhen Ma;Cunsong Wang","doi":"10.1109/JSEN.2025.3613587","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3613587","url":null,"abstract":"In industrial production processes, variations in operating conditions or task requirements often lead to multimodal characteristics in process data. Mode identification is, therefore, essential for effective process monitoring and soft sensing. However, conventional strategies usually overlook transient modes, which hinder the adaptability of soft sensing models during transitional phases. To address these challenges, this article proposes an adaptive transition process recognition and modeling framework (ATPRMF). The framework consists of two key components: first, a Kullback–Leibler divergence (KLD)-based transitional mode identification method that adaptively detects the onset and termination of transitional modes by analyzing distributional differences; and second, a dynamic model fusion mechanism that integrates predictions from multiple models based on mode credibility, adapting to gradual distributional shifts and ensuring reliable predictions. Experimental validation on a real-world ball mill system and the benchmark Tennessee Eastman (TE) process demonstrates that the proposed framework significantly improves prediction accuracy and robustness compared to conventional approaches.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40713-40726"},"PeriodicalIF":4.3,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gas Classification Using Time-Series-to-Image Conversion and CNN-Based Analysis on Array Sensor
Pub Date: 2025-09-29 | DOI: 10.1109/JSEN.2025.3612971
Chang-Hyun Kim;Daewoong Jung;Seung-Hwan Choi;Sanghun Choi;Suwoong Lee
Gas detection is essential in industrial and domestic environments to ensure safety and prevent hazardous incidents. Traditional single-sensor time-series analysis often suffers from limitations in accuracy and robustness due to environmental variations. To address this issue, we propose an artificial intelligence (AI)-based approach that transforms 1-D time-series data into 2-D image representations, followed by the classification of acetylene (C2H2), ammonia (NH3), and hydrogen (H2) using convolutional neural networks (CNNs). By utilizing image transformation techniques such as recurrence plots (RPs), Gramian angular fields (GAFs), and Markov transition fields (MTFs), our method significantly enhances feature extraction from sensor data. In this study, we utilized sensor array data obtained from ZnO and CuO thin films previously synthesized using a droplet-based hydrothermal method. By exploiting the temperature-dependent response characteristics of these sensors, we aimed to improve classification accuracy. Experimental results indicate that the proposed approach achieves a 6.2% relative improvement in classification accuracy over the conventional LSTM baseline (90.1%) applied directly to raw time-series data. This study demonstrates that converting time-series data into image representations substantially improves gas detection performance, offering a scalable and efficient solution for various sensor-based applications. Future research will focus on real-time implementation and further optimization of deep learning architectures.
{"title":"Gas Classification Using Time-Series-to-Image Conversion and CNN-Based Analysis on Array Sensor","authors":"Chang-Hyun Kim;Daewoong Jung;Seung-Hwan Choi;Sanghun Choi;Suwoong Lee","doi":"10.1109/JSEN.2025.3612971","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612971","url":null,"abstract":"Gas detection is essential in industrial and domestic environments to ensure safety and prevent hazardous incidents. Traditional single-sensor time-series analysis often suffers from limitations in accuracy and robustness due to environmental variations. To address this issue, we propose an artificial intelligence (AI)-based approach that transforms 1-D time-series data into 2-D image representations, followed by the classification of acetylene (C2H2), ammonia (NH3), and hydrogen (H2) using convolutional neural networks (CNNs). By utilizing image transformation techniques such as recurrence plots (RPs), Gramian angular fields (GAFs), and Markov transition fields (MTFs), our method significantly enhances feature extraction from sensor data. In this study, we utilized sensor array data obtained from ZnO and CuO thin films previously synthesized using a droplet-based hydrothermal method. By exploiting the temperature-dependent response characteristics of these sensors, we aimed to improve classification accuracy. Experimental results indicate that our proposed approach achieves a 6.2% relative improvement over the LSTM baseline model (90.1%) in classification accuracy compared to the conventional LSTM model applied directly to raw time-series data. This study demonstrates that converting time-series data into image representations substantially improves gas detection performance, offering a scalable and efficient solution for various sensor-based applications. Future research will focus on real-time implementation and further optimization of deep learning architectures.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40690-40702"},"PeriodicalIF":4.3,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11184418","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Miniaturized and Low-Cost Fingertip Optoacoustic Pretouch Sensor for Near-Distance Ranging and Material/Structure Classification
Pub Date: 2025-09-29 | DOI: 10.1109/JSEN.2025.3613561
Edward Bao;Cheng Fang;Dezhen Song
Precise grasping is required for robotic hands to perform useful functions. Current robotic grasping is limited by the inability to precisely set grasping conditions before physical contact, often resulting in crushing or slipping of the object. Integrated pretouch sensors that detect object parameters at near-distance are highly useful for addressing this issue. This article reports the first miniaturized and low-cost optoacoustic (OA) pretouch sensor integrated into the fingertip of a human-sized bionic robotic hand. The OA pretouch sensor performs distance ranging and material/structure classification based on OA signals excited on the object surface by laser pulses. The sensor-to-object distance is derived from the time delay, and the object material/structure is determined from the frequency spectra using a machine-learning (ML)-based classifier. The high sensitivity of the OA pretouch sensor allows clean OA signals to be captured from single laser pulses and eliminates the need for signal averaging, allowing data acquisition in real time during continuous finger motion. The simplified and compact design is cost-effective and enables seamless integration of the OA pretouch sensors onto the distal portion of a bionic robot finger. Experimental characterization showed a lateral resolution of 0.5 mm and a ranging accuracy within 0.3 mm. The ML classifier achieved 100% accuracy in household material/structure classification and 90.4% accuracy in fruit firmness classification. These results confirm that OA pretouch sensors are viable for integration into robotic hands and for improving their grasping performance.
{"title":"A Miniaturized and Low-Cost Fingertip Optoacoustic Pretouch Sensor for Near-Distance Ranging and Material/Structure Classification","authors":"Edward Bao;Cheng Fang;Dezhen Song","doi":"10.1109/JSEN.2025.3613561","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3613561","url":null,"abstract":"Precise grasping is required for robotic hands to perform useful functions. Current robotic grasping is limited by the inability to precisely set grasping conditions before physical contact, often resulting in crushing or slipping of the object. Integrated pretouch sensors that detect object parameters at near-distance are highly useful for addressing this issue. This article reports the first miniaturized and low-cost optoacoustic (OA) pretouch sensor integrated into the fingertip of a human-sized bionic robotic hand. The OA pretouch sensor performs distance ranging and material/structure classifications based on OA signals excited on the object surface by laser pulses. The sensorto-object distance is derived from the time delay, and the object material/structure is determined from the frequency spectra using a machine-learning (ML)-based classifier. The high sensitivity of the OA pretouch sensor allows clean OA signals to be captured from single laser pulses and eliminates the need of signal averaging, allowing data-acquisition in real time during continuous finger motion. The simplified and compact design is cost-effective and enables seamless integration of the OA pretouch sensors onto distal portion of a bionic robot finger. Experimental characterization showed a lateral resolution of 0.5 mm and ranging accuracy within 0.3 mm. Machine learning performed with a 100% accuracy in household material/structure classification and 90.4% accuracy in fruit firmness classification. These results confirm that OA pretouch sensors are viable for integration and improving the grasping of robot hands.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40703-40712"},"PeriodicalIF":4.3,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MULFE: A Sensor-Based Multilevel Feature Enhancement Method for Group Activity Recognition
Ruohong Huan;Ke Wang;Junhong Dong;Ji Zhang;Peng Chen;Guodao Sun;Ronghua Liang
Pub Date: 2025-09-29 | DOI: 10.1109/JSEN.2025.3613557
The recognition of group activities from sensor data faces challenges in effective feature extraction, especially the difficulty of expressing the dynamic changes in member actions, the positional relationships, and the collaborative relationships among members. To address this, this article proposes a sensor-based multilevel feature enhancement (MULFE) method for group activity recognition (GAR). MULFE utilizes an individual action feature extraction network (IAFEN) to extract individual action features and constructs a group location-level feature enhancement (GLLFE) module to capture the group location interaction features among individuals. By combining group location interaction features with individual action features through attention-weighted fusion, location-level enhanced group activity features are obtained, which strengthen the representation of multiple individual features within the group and of the complex spatial relationships among individuals. Furthermore, a group spatiotemporal-level feature enhancement (GSLFE) module based on a CAMLP-Mixer network is designed, using a multilayer perceptron (MLP) to achieve feature interaction and integration and further obtain group spatiotemporal features. The group spatiotemporal features are combined with the location-level enhanced group activity features to generate the multilevel enhanced group activity features, making the model better suited to understanding complex group activities. Experiments are conducted on two self-built datasets, UT-Data-gar and Garsensors, to validate and analyze the performance of MULFE. The experimental results demonstrate that MULFE can effectively recognize group activities, in particular maintaining high accuracy and strong robustness under random changes in group size.
{"title":"MULFE: A Sensor-Based Multilevel Feature Enhancement Method for Group Activity Recognition","authors":"Ruohong Huan;Ke Wang;Junhong Dong;Ji Zhang;Peng Chen;Guodao Sun;Ronghua Liang","doi":"10.1109/JSEN.2025.3613557","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3613557","url":null,"abstract":"The recognition of group activities from sensor data faces challenges in effective feature extraction, especially the difficulty of expressing the dynamic changes in member actions, the positional relationships, and the collaborative relationships among members. To address this, this article proposes a sensor-based multilevel feature enhancement (MULFE) method for group activity recognition (GAR). MULFE utilizes an individual action feature extraction network (IAFEN) to extract individual action features and constructs a group location-level feature enhancement (GLLFE) module to capture the group location interaction features among individuals. By combining group location interaction features with individual action features using attentionweighted fusion, location-level enhanced group activity features are achieved, with which the representation of multiple individual features within the group and the complex relational features of individual spatial locations are enhanced. Furthermore, a group spatiotemporal-level feature enhancement (GSLFE) module based on CAMLP-Mixer network is designed, using a multilayer perceptron (MLP) to achieve feature interaction and integration to further obtain group spatiotemporal features. The group spatiotemporal features are combined with the location-level enhanced group activity features to generate the multilevel enhanced group activity features, making the model more suitable for understanding complex group activities. Experiments are conducted on two self-built datasets, UT-Data-gar and Garsensors, to validate and analyze the performance of MULFE. The experimental results demonstrate that MULFE can effectively recognize group activities, particularly maintaining high accuracy and strong robustness in situations with random changes in group size.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40929-40945"},"PeriodicalIF":4.3,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MapcQtNet: A Novel Deeply Integrated Hybrid Soft Sensor Modeling Method for Multistage Quality Prediction in Heat Treatment Process
Pub Date: 2025-09-26 | DOI: 10.1109/JSEN.2025.3612691
Dandan Yao;Yinghua Yang;Xiaozhi Liu;Yunfei Mu
In multistage industrial processes, critical quality variables are often difficult to measure in real time. A common solution is therefore to measure them indirectly, in a quality prediction manner, through soft sensor methods. Most existing networks for multistage quality prediction focus on the overall transfer of process information between stages but do not consider exactly how the quality information is transferred. In addition, how to introduce mechanism knowledge into data-driven models, so as to maximize the use of prior knowledge and improve the models’ interpretability, is another problem that needs to be addressed. Considering these two problems, a novel deeply integrated hybrid soft sensor modeling method, the mechanism-aware progressive constraint-based quality transfer network (MapcQtNet), is proposed for multistage quality prediction. MapcQtNet consists of two key blocks: quality transfer units (QTUs) and a mechanism-aware progressive constraint (MAPC). With the addition of two quality transfer gates, the QTUs can simulate the flow of quality information between stages in a more detailed way. The MAPC innovatively integrates prior mechanism formulas into the network in a constrained manner, which helps the network become aware of the process mechanism. With it, MapcQtNet not only gains interpretability but can also produce unlabeled predictions for the intermediate variable and the uncertain mechanism parameter. A real industrial case of the heat treatment process verifies the validity of the proposed MapcQtNet as an advanced soft sensor modeling method for multistage quality prediction.
{"title":"MapcQtNet: A Novel Deeply Integrated Hybrid Soft Sensor Modeling Method for Multistage Quality Prediction in Heat Treatment Process","authors":"Dandan Yao;Yinghua Yang;Xiaozhi Liu;Yunfei Mu","doi":"10.1109/JSEN.2025.3612691","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612691","url":null,"abstract":"In multistage industrial processes, critical quality variables are often difficult to measure in real time. Thus, a common solution is to measure them indirectly in a quality prediction manner through soft sensor methods. Most existing networks for multistage quality prediction focus on the overall transfer of process information between stages, but do not consider exactly how the quality information is transferred. In addition, how to introduce mechanisms in data-driven models to maximize the use of prior knowledge and improve the models’ interpretability is another problem that needs to be addressed. Considering the above two problems, a novel deeply integrated hybrid soft sensor modeling method is proposed for multistage quality prediction, known as the mechanism-aware progressive constraint-based quality transfer network (MapcQtNet). The MapcQtNet consists of two key blocks: quality transfer units (QTUs) and mechanism-aware progressive constraint (MAPC). With the addition of two quality transfer gates, QTUs can simulate the flow of quality information between stages in more detailed ways. In addition, the MAPC innovatively integrates the prior mechanism formulas into the network in a constrained manner, which helps the network to be aware of the process mechanism. With it, the MapcQtNet can not only enhance its interpretability but also gain the ability to achieve unlabeled predictions for the intermediate variable and uncertain mechanism parameter. A real industrial case of the heat treatment process verifies the validity of the proposed MapcQtNet as an advanced soft sensor modeling method for multistage quality prediction.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40913-40928"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Single-Channel Wearable EEG Using Low-Power Qvar Sensor and Machine Learning for Drowsiness Detection
Pub Date: 2025-09-26 | DOI: 10.1109/JSEN.2025.3612476
Michele Antonio Gazzanti Pugliese Di Cotrone;Marco Balsi;Nicola Picozzi;Alessandro Zampogna;Soufyane Bouchelaghem;Antonio Suppa;Leonardo Davì;Denise Fabeni;Alessandro Gumiero;Ludovica Ferri;Luigi Della Torre;Patrizia Pulitano;Fernanda Irrera
This work deals with the fabrication and validation of an innovative wearable single-channel electroencephalogram (EEG) system designed for real-time monitoring of specific brain activity. It is based on a low-power sensor (Qvar) integrated in a miniaturized electronic platform and on machine learning (ML) algorithms developed for this purpose. The study demonstrates the accuracy of Qvar in capturing EEG signals through systematic comparison with a gold standard, and comprehensive analyses in the time and frequency domains confirm its reliability across the various EEG frequency bands. The specific application of drowsiness detection is addressed, leveraging ML algorithms trained and validated on public datasets and, at a more preliminary stage, on real-world data collected specifically for this study under the supervision of trained personnel. The results outline the system’s promise for domestic and outdoor monitoring of specific neurological conditions and for applications such as fatigue management and cognitive state assessment. Qvar represents a significant step toward accessible and practical wearable EEG technologies, combining portability, accuracy, and low power consumption to enhance user experience, enable massive screening, and broaden the scope of EEG applications.
{"title":"Single-Channel Wearable EEG Using Low-Power Qvar Sensor and Machine Learning for Drowsiness Detection","authors":"Michele Antonio Gazzanti Pugliese Di Cotrone;Marco Balsi;Nicola Picozzi;Alessandro Zampogna;Soufyane Bouchelaghem;Antonio Suppa;Leonardo Davì;Denise Fabeni;Alessandro Gumiero;Ludovica Ferri;Luigi Della Torre;Patrizia Pulitano;Fernanda Irrera","doi":"10.1109/JSEN.2025.3612476","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612476","url":null,"abstract":"This work deals with the fabrication and validation of an innovative wearable single-channel electroencephalogram (EEG) system, designed for real-time monitoring of specific brain activity. It is based on the use of a low-power sensor (Qvar) integrated in a miniaturized electronic platform, and on machine learning (ML) algorithms developed on purpose. The study demonstrates the accuracy (ACC) of Qvar in capturing EEG signals by systematic comparison with a gold standard and the comprehensive analyses in time and frequency domains confirm its reliability across the various EEG frequency bands. In this work, the specific application of drowsiness detection is addressed, leveraging ML algorithms trained and validated on public datasets and, at a more preliminary stage, on real-world data collected specifically for this study under the supervision of trained personnel. The results outline the system’s promise for domestic and outdoor monitoring of specific neurological conditions and applications, such as fatigue management and cognitive state assessment. The Qvar represents a significant step toward accessible and practical wearable EEG technologies, combining portability, ACC, and low-power consumption to enhance user experience, enable massive screening, and broaden the scope of EEG applications.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40668-40679"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Monitoring of Membrane Fouling Based on EIT and Deep Learning
Xiuyan Li;Yuqi Hou;Shuai Wang;Qi Wang;Jie Wang;Pingjuan Niu
Pub Date: 2025-09-26 | DOI: 10.1109/JSEN.2025.3612498
Membrane separation has been demonstrated to be the most efficient and effective method for water treatment and desalination. However, membrane filtration modules inevitably become contaminated during the water treatment process, which reduces their filtration efficiency. Electrical impedance tomography (EIT) is an effective method for online monitoring of membrane fouling, but the accuracy of the recovered foulant distribution needs to be improved because of the low resolution of EIT images. In this article, an intelligent monitoring system based on EIT and deep learning is designed to generate the conductivity distribution of the membrane surface and to track the dynamic changes of membrane fouling in real time. A deep-learning architecture for EIT image reconstruction, namely TransUNet + Root-Net, is proposed. TransUNet combines the global perception ability of the Transformer with the local feature extraction advantages of UNet. To address the cross-domain generalization challenge encountered when training with simulation data and validating with experimental data, an unsupervised dual-domain mapping network (Root-Net) is designed to map simulation data into a form resembling experimental data. By using the large amount of labeled simulation data mapped by Root-Net to train TransUNet, the model’s ability to represent the dynamic distribution of real membrane fouling is significantly improved. The results show a 2.72% reconstruction error for the TransUNet + Root-Net method. This approach more accurately characterizes the location and shape of the fouling, enabling real-time monitoring of its spatial distribution and providing insights into the evolution of membrane fouling.
{"title":"Online Monitoring of Membrane Fouling Based on EIT and Deep Learning","authors":"Xiuyan Li;Yuqi Hou;Shuai Wang;Qi Wang;Jie Wang;Pingjuan Niu","doi":"10.1109/JSEN.2025.3612498","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612498","url":null,"abstract":"Membrane separation has been demonstrated to be the most efficient and effective method for water treatment and desalination. However, membrane filtration modules inevitably become contaminated during the water treatment process, which reduces their filtration efficiency. Electrical impedance tomography (EIT) is an effective method for online monitoring of membrane fouling. However, the accuracy of foulant distribution needs to be improved due to the low resolution of EIT images. In this article, an intelligent monitoring system based on EIT and deep learning is designed to generate the conductivity distribution of the membrane surface and track and monitor the dynamic changes of the membrane pollution in real-time. A deep-learning architecture for EIT image reconstruction, namely, TransUNet + Root-Net, is proposed. TransUNet combines the global perception ability of the Transformer with the local feature extraction advantages of UNet. To address the cross-domain generalization challenge encountered when training with simulation data and validating with experimental data, this article designs an unsupervised dual-domain mapping network (Root-Net) to map simulation data into a form resembling experimental data. By using the large amount of simulation data labels mapped by Root-net to train TransUNet, the model’s ability to represent the dynamic distribution of real membrane fouling is significantly improved. The results indicate 2.72% error for the TransUNet + Root-Net method. This approach more accurately characterizes the location and shape of the fouling, enabling real-time monitoring of its spatial distribution and providing insights into the evolution of membrane fouling.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40680-40689"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flux-Directional Orthogonal Differential Probe for Low-Frequency Eddy-Current Nondestructive Testing
Junmei Tian;Jie Zhang;Wujun Kui;Xiaoguang Cao;Ziqi Liang
Pub Date: 2025-09-25 | DOI: 10.1109/JSEN.2025.3611949
Eddy-current probes are widely used to enable noncontact, high-speed detection in the operation and maintenance of metal pipelines and sheets. This study proposes a spatially orthogonal differential eddy-current probe based on magnetic-flux directional extraction to address the issue of weak detection signals during low-frequency eddy-current testing of millimeter-scale surface defects on metal sheets. A simulation model of the probe is developed in COMSOL Multiphysics to analyze the magnetic-flux distribution and induced electromotive force (EMF) characteristics of both the traditional runway-shaped probe and the refined spatially orthogonal differential probe during defect detection. An experimental platform is constructed to compare defect signals of varying sizes at different detection speeds. Simulation results indicate that the induced EMF amplitude in the detection coil of the refined probe is approximately 3.3 times greater than that of the traditional runway-shaped differential eddy-current probe. Experimental findings confirm that the refined probe, operating at 2 m/s, can reliably detect defects with a width and depth of 0.5 mm.
{"title":"Flux-Directional Orthogonal Differential Probe for Low-Frequency Eddy-Current Nondestructive Testing","authors":"Junmei Tian;Jie Zhang;Wujun Kui;Xiaoguang Cao;Ziqi Liang","doi":"10.1109/JSEN.2025.3611949","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3611949","url":null,"abstract":"Eddy-current probes are widely used to enable noncontact, high-speed detection in the operation and maintenance of metal pipelines and sheets. This study proposes a spatially orthogonal differential eddy-current probe based on the magnetic-flux directional extraction to address the issue of weak detection signals during low-frequency eddy-current testing of millimeter-scale surface defects on metal sheets. A simulation model of the probe is developed using COMSOL Multiphysics to analyze the magnetic-flux distribution and induced electromotive force (EMF) characteristics of both traditional runway-shaped and refined spatially orthogonal differential probes during defect detection. An experimental platform is constructed to compare defect signals of varying sizes at different detection speeds. Simulation results indicate that the induced EMF amplitude in the detection coil of the refined probe is approximately 3.3 times greater than that of the traditional runway-shaped differential eddy-current probe. Experimental findings confirm that the refined probe, operating at 2 m/s, can reliably detect defects with a width and depth of 0.5 mm.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40651-40659"},"PeriodicalIF":4.3,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multieye Visual Fusion Encoderless Control With Permanent Magnet Synchronous Machines
Pub Date: 2025-09-25 | DOI: 10.1109/JSEN.2025.3612050
Jingqi Dong;Le Sun;Longmiao Chen;Kuan Wang
Motion capture (MoCap) technology can be implemented using computer vision (CV)-based target position sensing methods. In modern industrial applications, CV-based position measurement techniques are increasingly emerging as a promising alternative to traditional encoders in servo drives, offering the potential to reduce system costs while maintaining performance. Although MoCap technologies have made significant progress in the past decades, CV-based systems still face challenges related to limited measurement accuracy and insufficient real-time responsiveness, especially in cost-sensitive applications where both precise position recognition and real-time response are critical. To address these limitations, this article presents a visual-electromechanical (EM) sensing fusion control framework. A color visual wavelet-Transformer (CVWT) network is designed that utilizes color features as input, effectively preserving critical information while reducing training complexity and computational cost. The CVWT network integrates a wavelet transform module with a Transformer module to perform multiscale and multilevel feature extraction and modeling on visual data acquired from dual cameras. In addition, electrical and mechanical models are incorporated into the state estimation framework, and an extended Kalman filter (EKF) is employed to fuse multisource perceptual data. The experimental results demonstrate that under a maximum rotational speed of 25 r/min, the system achieves a position control accuracy of up to 0.47°, validating the effectiveness and feasibility of the proposed method within a low-cost vision-based framework.
{"title":"Multieye Visual Fusion Encoderless Control With Permanent Magnet Synchronous Machines","authors":"Jingqi Dong;Le Sun;Longmiao Chen;Kuan Wang","doi":"10.1109/JSEN.2025.3612050","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612050","url":null,"abstract":"Motion capture (MoCap) technology can be implemented using computer vision (CV)-based target position sensing methods. In modern industrial applications, CV-based position measurement techniques are increasingly emerging as a promising alternative to traditional encoders in servo drives, offering the potential to reduce system costs while maintaining performance. Although MoCap technologies have made significant progress in the past decades, CV-based systems still face challenges related to limited measurement accuracy and delayed real-time responsiveness, especially in cost-sensitive applications where both precise position recognition and real-time response are critical. To resolve these limitations, this article presents a visual-electromechanical (EM) sensing fusion control framework. A color visual wavelet-Transformer (CVWT) network is designed that utilizes color features as input, effectively preserving critical information while reducing training complexity and computational cost. The CVWT network integrates a wavelet transform module with a Transformer module to perform multiscale and multilevel feature extraction and modeling on visual data acquired from dual cameras. In addition, electrical and mechanical models are incorporated into the state estimation framework, and an extended Kalman filter (EKF) is employed to fuse multisource perceptual data. The experimental results demonstrate that under a maximum rotational speed of 25 r/min, the system achieves a position control accuracy of up to 0.47°, validating the effectiveness and feasibility of the proposed method within a low-cost vision-based framework.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40901-40912"},"PeriodicalIF":4.3,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensing Force Dynamics of Prehensile Grip During Object Slippage Using a Slip Inducing Device
Pub Date: 2025-09-25 | DOI: 10.1109/JSEN.2025.3612094
Ayesha Tooba Khan;Deepak Joshi;Biswarup Mukherjee
Understanding the force dynamics during object slippage is crucial for effectively improving manipulation dexterity. Force dynamics during object slippage vary with the characteristics of the mechanical stimuli. This work is the first to explore force dynamics while considering the simultaneous effects of slip direction, distance, and speed variations. We performed an experiment with healthy individuals to explore how hand kinetics are modulated during the reflex and voluntary phases depending on slip direction, slip distance, and slip speed. Our results reveal that the force dynamics depend significantly on the slip direction; however, the variation pattern differed between the reflex and voluntary phases of the hand kinetics. We also observed that the force dynamics were modulated by significant interactions of slip distance and slip speed in a particular slip direction. The experiment was designed to closely mimic real-life object slippage. Thus, the findings can contribute significantly to advanced sensorimotor rehabilitation strategies, haptic feedback systems, and mechatronic devices.
{"title":"Sensing Force Dynamics of Prehensile Grip During Object Slippage Using a Slip Inducing Device","authors":"Ayesha Tooba Khan;Deepak Joshi;Biswarup Mukherjee","doi":"10.1109/JSEN.2025.3612094","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612094","url":null,"abstract":"Understanding the force dynamics during object slippage is crucial in effectively improving the manipulation dexterity. Force dynamics during object slippage will be varied based on the characteristics of the mechanical stimuli. This work is the first to explore force dynamics while considering the simultaneous effects of slip direction, distance, and speed variations. We performed the experiment with healthy individuals to explore how the hand kinetics will be modulated during the reflex and the voluntary phases based on the choice of slip direction, slip distance, and slip speed. Our results reveal that the force dynamics significantly depend on the slip direction. However, we observed that the variation pattern differed depending on the reflex and voluntary phases of the hand kinetics. We also observe that the force dynamics were modulated depending on the significant interactions of slip distance and slip speed in a particular slip direction. The experiment was designed to closely mimic the real-life scenario of object slippage. Thus, the findings can significantly contribute to advanced sensorimotor rehabilitation strategies, haptic feedback systems, and mechatronic devices.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40660-40667"},"PeriodicalIF":4.3,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}