The rotary axis is the basis of rotational motion. At present, error compensation is the main method of improving the motion accuracy of the rotary axis, and its key lies in the fast and accurate measurement of the geometric errors of the rotary axis. Simultaneous measurement of the multidegree-of-freedom geometric errors, together with the establishment of an error compensation model, is the main means of achieving fast and accurate measurement. Existing methods suffer from problems such as complex error decoupling, the need for a servo rotation system, and incomplete error compensation models. To address these issues, we propose a new method for measuring the four-degree-of-freedom geometric errors of the rotary axis based on a circular grating (CG). Its significant advantage is the ability to perform full-circle, simultaneous, and continuous measurement without requiring a servo rotation system. Afterward, an error compensation model for the measurement system was established based on the theory of homogeneous coordinate transformation, and the effects of drift, installation, and crosstalk errors on the results were analyzed in detail. During this process, we utilized a fourth-order transformation matrix and developed the first homogeneous coordinate transformation matrix applicable to CGs. The model was used to compensate the experimental results: after compensation, the radial and tilt error motions were reduced by up to 87%, and the repeatability values of the tilt error motions were reduced by up to 20%. The experimental results verify the effectiveness of the method and the model.
{"title":"Method and Compensation Model for Measuring Geometric Errors of Rotary Axis Based on Circular Grating","authors":"Jiakun Li;Shuai Han;Bintao Zhao;Qixin He;Kaifeng Hu;Yibin Qian;Qibo Feng","doi":"10.1109/JSEN.2025.3613795","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3613795","url":null,"abstract":"The rotary axis is the basis of rotational motion. At present, error compensation is the main method to improve the motion accuracy of the rotary axis. The key to error compensation lies in the fast and accurate measurement of the geometric errors of rotary axis. The simultaneous measurement of themultidegree-of-freedom geometric errors and the establishment of the error compensation model are the main means to achieve fast and accurate measurement. Existing methods have problems such as complex error decoupling, the need for servo rotation system, and incomplete error compensation models. To address these issues, we proposed a new method for measuring the four-degree-offreedom geometric errors of the rotary axis based on a circular grating (CG). The significant advantage is its ability to perform full-circle, simultaneous, and continuous measurement without requiring a servo rotation system. Afterward, an error compensation model for the measurement system was established based on the theory of homogeneous coordinate transformation, and the effects of drift, installation, and crosstalk errors on the results were analyzed in detail. During this process, we utilized a fourth-order transformation matrix and developed the first homogeneous coordinate transformation matrix applicable to CGs. The model was used to compensate for the experimental results. The results showed that the radial error motions and tilt error motions are reduced by 87% at most after compensation, and repeatability values of the tilt error motions are reduced by 20% at most. The experimental results verified the effectiveness of the method and the model.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40727-40737"},"PeriodicalIF":4.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01. Epub Date: 2025-08-28. DOI: 10.1109/jsen.2025.3602006
Letian Ai, Saikat Sengupta, Yue Chen
In image-guided interventions, fiducial markers are widely used for medical instrument tracking by attaching them to designated positions. However, due to the difficulty of precise marker placement, obtaining an accurate marker-to-object transformation remains technically challenging, particularly with customized markers or those with nonstandard geometries. To accurately identify the transformation, this study introduces a novel calibration method achieved by sequentially touching a fixed tip with landmarks on the object. An inverse sample consensus filter is proposed to remove potential measurement outliers and improve the robustness of the calibration result. Validation through simulations and experiments under two tracking modalities demonstrated superior translational accuracy and improved robustness compared to conventional methods. Specifically, the experiment conducted under an electromagnetic tracking system demonstrated a translational error of 0.61 ± 0.11 mm and a rotational error of 0.97 ± 0.18°. The experiment using a magnetic resonance imaging system demonstrated a translational error of 0.60 mm and a rotational error of 2.81°. A use case with an intracerebral hemorrhage evacuation robot further verified the feasibility of integrating the calibration method into the image-guided workflow. The proposed method achieved sub-millimeter calibration accuracy across different scenarios, demonstrating its effectiveness and strong potential for diverse research and clinical applications.
{"title":"Marker-to-Object Calibration Using Landmark Touch.","authors":"Letian Ai, Saikat Sengupta, Yue Chen","doi":"10.1109/jsen.2025.3602006","DOIUrl":"10.1109/jsen.2025.3602006","url":null,"abstract":"<p><p>In image-guided interventions, fiducial markers are widely used for medical instrument tracking by attaching them to designated positions. However, due to the difficulty of precise marker placement, obtaining an accurate marker-to-object transformation remains technically challenging, particularly with customized markers or those with non-standard geometries. To accurately identify the transformation, this study introduces a novel calibration method achieved by sequentially touching a fixed tip with landmarks on the object. An inverse sample consensus filter was proposed to remove potential measurement outliers and improve the robustness of the calibration result. Validation through simulations and experiments under two tracking modalities demonstrated superior translational accuracy and improved robustness compared to conventional methods. Specifically, the experiment conducted under electromagnetic tracking system demonstrated a translational error of 0.61 ± 0.11 mm and a rotational error of 0.97 ± 0.18°. The experiment using magnetic resonance imaging system demonstrated a translational error of 0.60 mm and a rotational error of 2.81°. A use case with an intracerebral hemorrhage evacuation robot further verified the feasibility of integrating the calibration method into the image-guided workflow. The proposed method achieved sub-millimeter calibration accuracy across different scenarios, demonstrating its effectiveness and strong potential for diverse research and clinical applications.</p>","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 19","pages":"36773-36784"},"PeriodicalIF":4.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12539640/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145342540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-01. DOI: 10.1109/JSEN.2025.3613846
Yanhui Xi; Wenxin Zhu; Zhen Ding; Lanlan Liu
In autonomous driving and robotic navigation, the fusion of multimodal data from LiDAR and cameras relies on accurate extrinsic calibration. However, the calibration accuracy may drop under external disturbances such as sensor vibration, temperature fluctuations, and aging. To address this problem, this article presents a novel LiDAR–camera joint calibration network based on cross-modal attention fusion (CMAF) and cross-domain feature extraction (CDFE). The CMAF module is built on region-level matching and pixel-level interaction to improve cross-modal feature alignment and fusion. To address the semantic inconsistency between encoder and decoder features, the CDFE is designed as a U-shaped architecture with multimodal skip connections: it captures large-scale contextual correlations through a transformation from the spatial domain to the frequency domain, and it maintains semantic consistency by fusing global features with the original features (residual information) in a dual-path architecture. Experiments on the KITTI odometry and KITTI-360 datasets show that our network not only significantly outperforms mainstream methods and demonstrates strong generalization capability but also achieves high computational efficiency.
{"title":"A Novel LiDAR–Camera Joint Calibration Network Based on Cross-Modal Feature Fusion","authors":"Yanhui Xi;Wenxin Zhu;Zhen Ding;Lanlan Liu","doi":"10.1109/JSEN.2025.3613846","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3613846","url":null,"abstract":"In autonomous driving and robotic navigation, the fusion of multimodal data from LiDAR and cameras relies on accurate extrinsic calibration. However, the calibration accuracy may drop when there is an external disturbance, such as sensor vibrations, temperature fluctuations, and aging. To address this problem, this article presents a novel LiDAR–camera joint calibration network based on cross-modal attention fusion (CMAF) and cross-domain feature extraction (CDFE). The CMAF module is constructed based on region-level matching and pixel-level interaction to improve the cross-modal feature alignment and fusion. To address the semantic inconsistency between encoder and decoder features, the CDFE is designed for a U-shaped architecture with multimodal skip connections to capture large-scale contextual correlations through the transformation from the spatial domain to the frequency domain, and it can maintain semantic consistency through the fusion of global features and original features (residual information) based on the dual-path architecture. Experiments on the KITTI odometry dataset and KITTI-360 dataset show that our network not only significantly outperforms mainstream methods and demonstrates strong generalization capabilities but also achieves high computational efficiency.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40849-40860"},"PeriodicalIF":4.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-29. DOI: 10.1109/JSEN.2025.3613587
Chao Ren; Zhen Liu; Zhen Ma; Cunsong Wang
In industrial production processes, variations in operating conditions or task requirements often lead to multimodal characteristics in process data. Mode identification is, therefore, essential for effective process monitoring and soft sensing. However, conventional strategies usually overlook transient modes, which hinders the adaptability of soft sensing models during transitional phases. To address these challenges, this article proposes an adaptive transition process recognition and modeling framework (ATPRMF). The framework consists of two key components: first, a Kullback–Leibler divergence (KLD)-based transitional mode identification method that adaptively detects the onset and termination of transitional modes by analyzing distributional differences; and second, a dynamic model fusion mechanism that integrates predictions from multiple models based on mode credibility, adapting to gradual distributional shifts and ensuring reliable predictions. Experimental validation on a real-world ball mill system and the benchmark Tennessee Eastman (TE) process demonstrates that the proposed framework significantly improves prediction accuracy and robustness compared to conventional approaches.
{"title":"An Adaptive Transition Process Recognition and Modeling Framework for Soft Sensor in Complex Engineering Systems","authors":"Chao Ren;Zhen Liu;Zhen Ma;Cunsong Wang","doi":"10.1109/JSEN.2025.3613587","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3613587","url":null,"abstract":"In industrial production processes, variations in operating conditions or task requirements often lead to multimodal characteristics in process data. Mode identification is, therefore, essential for effective process monitoring and soft sensing. However, conventional strategies usually overlook transient modes, which hinder the adaptability of soft sensing models during transitional phases. To address these challenges, this article proposes an adaptive transition process recognition and modeling framework (ATPRMF). The framework consists of two key components: first, a Kullback–Leibler divergence (KLD)-based transitional mode identification method that adaptively detects the onset and termination of transitional modes by analyzing distributional differences; and second, a dynamic model fusion mechanism that integrates predictions from multiple models based on mode credibility, adapting to gradual distributional shifts and ensuring reliable predictions. Experimental validation on a real-world ball mill system and the benchmark Tennessee Eastman (TE) process demonstrates that the proposed framework significantly improves prediction accuracy and robustness compared to conventional approaches.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40713-40726"},"PeriodicalIF":4.3,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-29. DOI: 10.1109/JSEN.2025.3612971
Chang-Hyun Kim; Daewoong Jung; Seung-Hwan Choi; Sanghun Choi; Suwoong Lee
Gas detection is essential in industrial and domestic environments to ensure safety and prevent hazardous incidents. Traditional single-sensor time-series analysis often suffers from limited accuracy and robustness under environmental variations. To address this issue, we propose an artificial intelligence (AI)-based approach that transforms 1-D time-series data into 2-D image representations, followed by classification of acetylene (C2H2), ammonia (NH3), and hydrogen (H2) using convolutional neural networks (CNNs). By utilizing image transformation techniques such as recurrence plots (RPs), Gramian angular fields (GAFs), and Markov transition fields (MTFs), our method significantly enhances feature extraction from sensor data. In this study, we used sensor array data obtained from ZnO and CuO thin films previously synthesized using a droplet-based hydrothermal method, and we exploited the temperature-dependent response characteristics of these sensors to improve classification accuracy. Experimental results indicate that our approach achieves a 6.2% relative improvement in classification accuracy over the conventional LSTM baseline (90.1%) applied directly to raw time-series data. This study demonstrates that converting time-series data into image representations substantially improves gas detection performance, offering a scalable and efficient solution for various sensor-based applications. Future research will focus on real-time implementation and further optimization of deep learning architectures.
{"title":"Gas Classification Using Time-Series-to-Image Conversion and CNN-Based Analysis on Array Sensor","authors":"Chang-Hyun Kim;Daewoong Jung;Seung-Hwan Choi;Sanghun Choi;Suwoong Lee","doi":"10.1109/JSEN.2025.3612971","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612971","url":null,"abstract":"Gas detection is essential in industrial and domestic environments to ensure safety and prevent hazardous incidents. Traditional single-sensor time-series analysis often suffers from limitations in accuracy and robustness due to environmental variations. To address this issue, we propose an artificial intelligence (AI)-based approach that transforms 1-D time-series data into 2-D image representations, followed by the classification of acetylene (C2H2), ammonia (NH3), and hydrogen (H2) using convolutional neural networks (CNNs). By utilizing image transformation techniques such as recurrence plots (RPs), Gramian angular fields (GAFs), and Markov transition fields (MTFs), our method significantly enhances feature extraction from sensor data. In this study, we utilized sensor array data obtained from ZnO and CuO thin films previously synthesized using a droplet-based hydrothermal method. By exploiting the temperature-dependent response characteristics of these sensors, we aimed to improve classification accuracy. Experimental results indicate that our proposed approach achieves a 6.2% relative improvement over the LSTM baseline model (90.1%) in classification accuracy compared to the conventional LSTM model applied directly to raw time-series data. This study demonstrates that converting time-series data into image representations substantially improves gas detection performance, offering a scalable and efficient solution for various sensor-based applications. Future research will focus on real-time implementation and further optimization of deep learning architectures.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40690-40702"},"PeriodicalIF":4.3,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11184418","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-29. DOI: 10.1109/JSEN.2025.3613561
Edward Bao; Cheng Fang; Dezhen Song
Precise grasping is required for robotic hands to perform useful functions. Current robotic grasping is limited by the inability to precisely set grasping conditions before physical contact, often resulting in crushing or slipping of the object. Integrated pretouch sensors that detect object parameters at near distance are highly useful for addressing this issue. This article reports the first miniaturized and low-cost optoacoustic (OA) pretouch sensor integrated into the fingertip of a human-sized bionic robotic hand. The OA pretouch sensor performs distance ranging and material/structure classification based on OA signals excited on the object surface by laser pulses. The sensor-to-object distance is derived from the time delay, and the object material/structure is determined from the frequency spectra using a machine-learning (ML)-based classifier. The high sensitivity of the OA pretouch sensor allows clean OA signals to be captured from single laser pulses and eliminates the need for signal averaging, allowing real-time data acquisition during continuous finger motion. The simplified and compact design is cost-effective and enables seamless integration of the OA pretouch sensors onto the distal portion of a bionic robot finger. Experimental characterization showed a lateral resolution of 0.5 mm and ranging accuracy within 0.3 mm. The ML classifier achieved 100% accuracy in household material/structure classification and 90.4% accuracy in fruit firmness classification. These results confirm that OA pretouch sensors are viable for integration into robot hands and for improving their grasping.
{"title":"A Miniaturized and Low-Cost Fingertip Optoacoustic Pretouch Sensor for Near-Distance Ranging and Material/Structure Classification","authors":"Edward Bao;Cheng Fang;Dezhen Song","doi":"10.1109/JSEN.2025.3613561","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3613561","url":null,"abstract":"Precise grasping is required for robotic hands to perform useful functions. Current robotic grasping is limited by the inability to precisely set grasping conditions before physical contact, often resulting in crushing or slipping of the object. Integrated pretouch sensors that detect object parameters at near-distance are highly useful for addressing this issue. This article reports the first miniaturized and low-cost optoacoustic (OA) pretouch sensor integrated into the fingertip of a human-sized bionic robotic hand. The OA pretouch sensor performs distance ranging and material/structure classifications based on OA signals excited on the object surface by laser pulses. The sensorto-object distance is derived from the time delay, and the object material/structure is determined from the frequency spectra using a machine-learning (ML)-based classifier. The high sensitivity of the OA pretouch sensor allows clean OA signals to be captured from single laser pulses and eliminates the need of signal averaging, allowing data-acquisition in real time during continuous finger motion. The simplified and compact design is cost-effective and enables seamless integration of the OA pretouch sensors onto distal portion of a bionic robot finger. Experimental characterization showed a lateral resolution of 0.5 mm and ranging accuracy within 0.3 mm. Machine learning performed with a 100% accuracy in household material/structure classification and 90.4% accuracy in fruit firmness classification. These results confirm that OA pretouch sensors are viable for integration and improving the grasping of robot hands.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40703-40712"},"PeriodicalIF":4.3,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The recognition of group activities from sensor data faces challenges in effective feature extraction, especially the difficulty of expressing the dynamic changes in member actions, the positional relationships, and the collaborative relationships among members. To address this, this article proposes a sensor-based multilevel feature enhancement (MULFE) method for group activity recognition (GAR). MULFE utilizes an individual action feature extraction network (IAFEN) to extract individual action features and constructs a group location-level feature enhancement (GLLFE) module to capture the group location interaction features among individuals. Attention-weighted fusion of the group location interaction features with the individual action features yields location-level enhanced group activity features, which strengthen both the representation of the multiple individual features within the group and the complex relational features of individual spatial locations. Furthermore, a group spatiotemporal-level feature enhancement (GSLFE) module based on the CAMLP-Mixer network is designed, using a multilayer perceptron (MLP) to achieve feature interaction and integration and thereby obtain group spatiotemporal features. These group spatiotemporal features are combined with the location-level enhanced group activity features to generate the multilevel enhanced group activity features, making the model better suited to understanding complex group activities. Experiments are conducted on two self-built datasets, UT-Data-gar and Garsensors, to validate and analyze the performance of MULFE. The experimental results demonstrate that MULFE can effectively recognize group activities, maintaining high accuracy and strong robustness even under random changes in group size.
{"title":"MULFE: A Sensor-Based Multilevel Feature Enhancement Method for Group Activity Recognition","authors":"Ruohong Huan;Ke Wang;Junhong Dong;Ji Zhang;Peng Chen;Guodao Sun;Ronghua Liang","doi":"10.1109/JSEN.2025.3613557","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3613557","url":null,"abstract":"The recognition of group activities from sensor data faces challenges in effective feature extraction, especially the difficulty of expressing the dynamic changes in member actions, the positional relationships, and the collaborative relationships among members. To address this, this article proposes a sensor-based multilevel feature enhancement (MULFE) method for group activity recognition (GAR). MULFE utilizes an individual action feature extraction network (IAFEN) to extract individual action features and constructs a group location-level feature enhancement (GLLFE) module to capture the group location interaction features among individuals. By combining group location interaction features with individual action features using attentionweighted fusion, location-level enhanced group activity features are achieved, with which the representation of multiple individual features within the group and the complex relational features of individual spatial locations are enhanced. Furthermore, a group spatiotemporal-level feature enhancement (GSLFE) module based on CAMLP-Mixer network is designed, using a multilayer perceptron (MLP) to achieve feature interaction and integration to further obtain group spatiotemporal features. The group spatiotemporal features are combined with the location-level enhanced group activity features to generate the multilevel enhanced group activity features, making the model more suitable for understanding complex group activities. Experiments are conducted on two self-built datasets, UT-Data-gar and Garsensors, to validate and analyze the performance of MULFE. The experimental results demonstrate that MULFE can effectively recognize group activities, particularly maintaining high accuracy and strong robustness in situations with random changes in group size.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40929-40945"},"PeriodicalIF":4.3,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-26. DOI: 10.1109/JSEN.2025.3612691
Dandan Yao; Yinghua Yang; Xiaozhi Liu; Yunfei Mu
In multistage industrial processes, critical quality variables are often difficult to measure in real time. A common solution is to measure them indirectly, in a quality prediction manner, through soft sensor methods. Most existing networks for multistage quality prediction focus on the overall transfer of process information between stages but do not consider exactly how the quality information is transferred. In addition, introducing mechanism knowledge into data-driven models, so as to maximize the use of prior knowledge and improve the models' interpretability, remains an open problem. Considering these two problems, a novel deeply integrated hybrid soft sensor modeling method is proposed for multistage quality prediction: the mechanism-aware progressive constraint-based quality transfer network (MapcQtNet). MapcQtNet consists of two key blocks: quality transfer units (QTUs) and a mechanism-aware progressive constraint (MAPC). With the addition of two quality transfer gates, the QTUs simulate the flow of quality information between stages in a more detailed way. The MAPC innovatively integrates prior mechanism formulas into the network in a constrained manner, which makes the network aware of the process mechanism. With it, MapcQtNet not only enhances its interpretability but also gains the ability to make unlabeled predictions for the intermediate variable and the uncertain mechanism parameter. A real industrial case of the heat treatment process verifies the validity of MapcQtNet as an advanced soft sensor modeling method for multistage quality prediction.
{"title":"MapcQtNet: A Novel Deeply Integrated Hybrid Soft Sensor Modeling Method for Multistage Quality Prediction in Heat Treatment Process","authors":"Dandan Yao;Yinghua Yang;Xiaozhi Liu;Yunfei Mu","doi":"10.1109/JSEN.2025.3612691","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612691","url":null,"abstract":"In multistage industrial processes, critical quality variables are often difficult to measure in real time. Thus, a common solution is to measure them indirectly in a quality prediction manner through soft sensor methods. Most existing networks for multistage quality prediction focus on the overall transfer of process information between stages, but do not consider exactly how the quality information is transferred. In addition, how to introduce mechanisms in data-driven models to maximize the use of prior knowledge and improve the models’ interpretability is another problem that needs to be addressed. Considering the above two problems, a novel deeply integrated hybrid soft sensor modeling method is proposed for multistage quality prediction, known as the mechanism-aware progressive constraint-based quality transfer network (MapcQtNet). The MapcQtNet consists of two key blocks: quality transfer units (QTUs) and mechanism-aware progressive constraint (MAPC). With the addition of two quality transfer gates, QTUs can simulate the flow of quality information between stages in more detailed ways. In addition, the MAPC innovatively integrates the prior mechanism formulas into the network in a constrained manner, which helps the network to be aware of the process mechanism. With it, the MapcQtNet can not only enhance its interpretability but also gain the ability to achieve unlabeled predictions for the intermediate variable and uncertain mechanism parameter. A real industrial case of the heat treatment process verifies the validity of the proposed MapcQtNet as an advanced soft sensor modeling method for multistage quality prediction.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40913-40928"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-26. DOI: 10.1109/JSEN.2025.3612476
Michele Antonio Gazzanti Pugliese Di Cotrone; Marco Balsi; Nicola Picozzi; Alessandro Zampogna; Soufyane Bouchelaghem; Antonio Suppa; Leonardo Davì; Denise Fabeni; Alessandro Gumiero; Ludovica Ferri; Luigi Della Torre; Patrizia Pulitano; Fernanda Irrera
This work deals with the fabrication and validation of an innovative wearable single-channel electroencephalogram (EEG) system designed for real-time monitoring of specific brain activity. It is based on a low-power sensor (Qvar) integrated into a miniaturized electronic platform and on purpose-built machine learning (ML) algorithms. The study demonstrates the accuracy of Qvar in capturing EEG signals through systematic comparison with a gold standard, and comprehensive analyses in the time and frequency domains confirm its reliability across the various EEG frequency bands. In this work, the specific application of drowsiness detection is addressed, leveraging ML algorithms trained and validated on public datasets and, at a more preliminary stage, on real-world data collected specifically for this study under the supervision of trained personnel. The results outline the system's promise for domestic and outdoor monitoring of specific neurological conditions and applications, such as fatigue management and cognitive state assessment. Qvar represents a significant step toward accessible and practical wearable EEG technologies, combining portability, accuracy, and low power consumption to enhance the user experience, enable massive screening, and broaden the scope of EEG applications.
{"title":"Single-Channel Wearable EEG Using Low-Power Qvar Sensor and Machine Learning for Drowsiness Detection","authors":"Michele Antonio Gazzanti Pugliese Di Cotrone;Marco Balsi;Nicola Picozzi;Alessandro Zampogna;Soufyane Bouchelaghem;Antonio Suppa;Leonardo Davì;Denise Fabeni;Alessandro Gumiero;Ludovica Ferri;Luigi Della Torre;Patrizia Pulitano;Fernanda Irrera","doi":"10.1109/JSEN.2025.3612476","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612476","url":null,"abstract":"This work deals with the fabrication and validation of an innovative wearable single-channel electroencephalogram (EEG) system, designed for real-time monitoring of specific brain activity. It is based on the use of a low-power sensor (Qvar) integrated in a miniaturized electronic platform, and on machine learning (ML) algorithms developed on purpose. The study demonstrates the accuracy (ACC) of Qvar in capturing EEG signals by systematic comparison with a gold standard and the comprehensive analyses in time and frequency domains confirm its reliability across the various EEG frequency bands. In this work, the specific application of drowsiness detection is addressed, leveraging ML algorithms trained and validated on public datasets and, at a more preliminary stage, on real-world data collected specifically for this study under the supervision of trained personnel. The results outline the system’s promise for domestic and outdoor monitoring of specific neurological conditions and applications, such as fatigue management and cognitive state assessment. The Qvar represents a significant step toward accessible and practical wearable EEG technologies, combining portability, ACC, and low-power consumption to enhance user experience, enable massive screening, and broaden the scope of EEG applications.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40668-40679"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Membrane separation has been demonstrated to be the most efficient and effective method for water treatment and desalination. However, membrane filtration modules inevitably become fouled during the water treatment process, which reduces their filtration efficiency. Electrical impedance tomography (EIT) is an effective method for online monitoring of membrane fouling, but the accuracy of the recovered foulant distribution needs to be improved because of the low resolution of EIT images. In this article, an intelligent monitoring system based on EIT and deep learning is designed to generate the conductivity distribution of the membrane surface and to track the dynamic changes of membrane fouling in real time. A deep-learning architecture for EIT image reconstruction, TransUNet + Root-Net, is proposed. TransUNet combines the global perception ability of the Transformer with the local feature extraction advantages of UNet. To address the cross-domain generalization challenge encountered when training with simulation data and validating with experimental data, an unsupervised dual-domain mapping network (Root-Net) is designed to map simulation data into a form resembling experimental data. Training TransUNet on the large amount of Root-Net-mapped simulation data significantly improves the model's ability to represent the dynamic distribution of real membrane fouling. The results show a 2.72% error for the TransUNet + Root-Net method. This approach more accurately characterizes the location and shape of the fouling, enabling real-time monitoring of its spatial distribution and providing insights into the evolution of membrane fouling.
{"title":"Online Monitoring of Membrane Fouling Based on EIT and Deep Learning","authors":"Xiuyan Li;Yuqi Hou;Shuai Wang;Qi Wang;Jie Wang;Pingjuan Niu","doi":"10.1109/JSEN.2025.3612498","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612498","url":null,"abstract":"Membrane separation has been demonstrated to be the most efficient and effective method for water treatment and desalination. However, membrane filtration modules inevitably become contaminated during the water treatment process, which reduces their filtration efficiency. Electrical impedance tomography (EIT) is an effective method for online monitoring of membrane fouling. However, the accuracy of foulant distribution needs to be improved due to the low resolution of EIT images. In this article, an intelligent monitoring system based on EIT and deep learning is designed to generate the conductivity distribution of the membrane surface and track and monitor the dynamic changes of the membrane pollution in real-time. A deep-learning architecture for EIT image reconstruction, namely, TransUNet + Root-Net, is proposed. TransUNet combines the global perception ability of the Transformer with the local feature extraction advantages of UNet. To address the cross-domain generalization challenge encountered when training with simulation data and validating with experimental data, this article designs an unsupervised dual-domain mapping network (Root-Net) to map simulation data into a form resembling experimental data. By using the large amount of simulation data labels mapped by Root-net to train TransUNet, the model’s ability to represent the dynamic distribution of real membrane fouling is significantly improved. The results indicate 2.72% error for the TransUNet + Root-Net method. This approach more accurately characterizes the location and shape of the fouling, enabling real-time monitoring of its spatial distribution and providing insights into the evolution of membrane fouling.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40680-40689"},"PeriodicalIF":4.3,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}