Pub Date: 2025-11-12
DOI: 10.1109/JSEN.2025.3629234
Baosheng Wang;Xiaoxue Ping;Yang Liu
Soybean is an important food and economic crop, yet it is often subject to adulteration through the mixing of old and new beans, which threatens food safety and market fairness. This study proposes a soybean adulteration detection method based on an adaptive feature complementary classification network (AFCC-Net) and an electronic nose (e-nose) system. First, the e-nose system collects volatile compound data from soybeans with varying adulteration ratios, and t-distributed stochastic neighbor embedding (t-SNE) is employed to visualize differences. Then, an adaptive feature complementary computing module (AFCCM) is introduced, which integrates local convolutional operations with a global self-attention mechanism to complementarily fuse gas features. Residual connections are incorporated to enhance feature representation, enabling deep feature extraction from gas data. Finally, a lightweight AFCC-Net is designed to identify soybeans with different adulteration ratios. Ablation experiments validate the rationality of the AFCCM design. Compared with lightweight deep learning methods and state-of-the-art gas information classification approaches, AFCC-Net demonstrates the best classification performance under cross-validation. On the soybean adulteration dataset from Yushu City, Jilin Province, China, it achieves an accuracy of 98.67%, a precision of 98.80%, and a recall of 98.33%. On the soybean adulteration dataset from Panjin City, Liaoning Province, China, it achieves an accuracy of 98.33%, a precision of 98.49%, and a recall of 98.05%. Moreover, the model demonstrates strong generalization capability on the test set. The AFCC-Net combined with the e-nose detection method provides a nondestructive solution for soybean adulteration detection, indicating considerable practical application value.
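A minimal sketch of the t-SNE visualization step described above, run on synthetic stand-in data (the sensor count, sample counts, and cluster structure are assumptions for illustration, not values from the paper):

```python
# Sketch: visualizing e-nose responses for different adulteration
# ratios with t-SNE before classification. Synthetic clusters stand
# in for real gas-sensor readings.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_ratios, samples_per_ratio, n_sensors = 4, 25, 10

# Fake gas-sensor feature vectors: each adulteration ratio gets a
# slightly shifted cluster in sensor space.
X = np.vstack([
    rng.normal(loc=r, scale=0.3, size=(samples_per_ratio, n_sensors))
    for r in range(n_ratios)
])
labels = np.repeat(np.arange(n_ratios), samples_per_ratio)

# Project to 2-D; perplexity must be smaller than the sample count.
emb = TSNE(n_components=2, perplexity=15, random_state=0).fit_transform(X)
print(emb.shape)  # one 2-D point per sample
```

Plotting `emb` colored by `labels` then shows whether the adulteration ratios separate in gas-feature space, which is what the paper's t-SNE figures are used to establish.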
{"title":"A Soybean Adulteration Detection Method Based on Adaptive Feature Compensation Classification Network and Electronic Nose","authors":"Baosheng Wang;Xiaoxue Ping;Yang Liu","doi":"10.1109/JSEN.2025.3629234","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3629234","url":null,"abstract":"Soybean is an important food and economic crop, yet it is often subject to adulteration through the mixing of old and new beans, which threatens food safety and market fairness. This study proposes a soybean adulteration detection method based on an adaptive feature complementary classification network (AFCC-Net) and an electronic nose (e-nose) system. First, the e-nose system collects volatile compound data from soybeans with varying adulteration ratios, and t-distributed stochastic neighbor embedding (t-SNE) is employed to visualize differences. Then, an adaptive feature complementary computing module (AFCCM) is introduced, which integrates local convolutional operations with a global self-attention mechanism to complementarily fuse gas features. Residual connections are incorporated to enhance feature representation, enabling deep feature extraction from gas data. Finally, a lightweight AFCC-Net is designed to identify soybeans with different adulteration ratios. Ablation experiments validate the rationality of the AFCCM design. Compared with lightweight deep learning methods and state-of-the-art gas information classification approaches, AFCC-Net demonstrates the best classification performance under cross-validation. On the soybean adulteration dataset from Yushu City, Jilin Province, China, it achieves an accuracy of 98.67%, a precision of 98.80%, and a recall of 98.33%. On the soybean adulteration dataset from Panjin City, Liaoning Province, China, it achieves an accuracy of 98.33%, a precision of 98.49%, and a recall of 98.05%. Moreover, the model demonstrates strong generalization capability on the test set. 
The AFCC-Net combined with the e-nose detection method provides a nondestructive solution for soybean adulteration detection, indicating considerable practical application value.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"45084-45092"},"PeriodicalIF":4.3,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The temperature distribution of the blast furnace (BF) burden surface is crucial for regulating the gas flow distribution and monitoring abnormal furnace conditions. However, obtaining the burden surface thermal distribution has always been a challenging problem. Therefore, this study proposes a novel endoscopic infrared thermal imaging system for measuring the temperature field of the burden surface. First, to address the imaging problems caused by the asymmetric viewing angle and large spatial structure of the BF, optical system design indicators suited to the BF structure are calculated from geometric optics principles. Second, according to these indicators, an endoscopic infrared optical system combining an asymmetric reversed telephoto objective lens and a rod lens relay system is designed, ensuring acquisition of raw infrared radiation inside the BF. Subsequently, a distortion calibration method based on corner relocalization and improved covariance matrix estimation is proposed, which accurately acquires imaging parameters from checkerboard images captured in a defocused state. Finally, temperature measurement was verified on a blackbody furnace and a simulated burden surface. Within the range of 600–1000 K, the relative error was within 1%, and the average temperature difference compared with a commercial infrared camera was 0.6991 K.
{"title":"A Novel Endoscopic Infrared Thermal Imaging System for Burden Surface Temperature Field Measurement in Blast Furnace","authors":"Yitian Li;Dong Pan;Zhaohui Jiang;Haoyang Yu;Gui Gui;Weihua Gui","doi":"10.1109/JSEN.2025.3629138","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3629138","url":null,"abstract":"The temperature distribution of the blast furnace (BF) burden surface is crucial to regulate the gas flow distribution and monitor the abnormal furnace conditions. However, it has always been a challenging issue to obtain the burden surface thermal distribution. Therefore, this study proposes a novel endoscopic infrared thermal imaging system for measuring the temperature field of the burden surface. First, aiming at the imaging problem brought by asymmetric viewing angle and large spatial structure in BF, the optical system design indicators suitable for the BF structure are calculated based on geometric optics principle. Second, according to the design indicator, an endoscopic infrared optical system combining an asymmetric reversed telephoto objective lens and a rod lens relay system is designed, which ensures the acquisition of raw infrared radiation in the BF. Subsequently, a distortion calibration method based on corner relocalization and improved covariance matrix estimation is proposed, which accurately acquires imaging parameters by utilizing checkerboard images captured in a defocused state. Finally, temperature measurement verification was conducted on the blackbody furnace and simulated burden surface. 
Within the range of 600–1000 K, the relative error was within 1%, and the average temperature difference compared with a commercial infrared camera was 0.6991 K.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44973-44983"},"PeriodicalIF":4.3,"publicationDate":"2025-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-11
DOI: 10.1109/JSEN.2025.3627930
Daljeet Singh;Mariella Särestöniemi;Teemu Myllylä
A noninvasive and quantitative microwave method and setup for brain temperature monitoring are proposed in this study. The proposed microwave setup is suitable for wearable devices and prolonged usage without compromising the subject’s comfort. The proposed method is carefully devised for accurate measurements based on two-level feature extraction and is independent of the microwave sensor. A unique dataset creation module and the ordered selection scheme (OSS) based on correlation analysis are proposed to ensure real-time operation with a lightweight algorithm. Finally, the quantitative method is devised using weighted regression analysis on signal attributes selected using OSS. Six thin, small, lightweight microwave sensors are evaluated with different placement strategies for brain temperature monitoring. A realistic phantom model is developed exclusively to test the proposed microwave method and sensors. The dynamic phantom model mimics the dielectric properties of a human head. The correlation and regression analysis performed on data collected from numerous trials showcase that the proposed microwave system can detect minute changes in brain temperature, and its response is analogous to temperature values measured by invasive sensors.
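The ordered selection scheme (OSS) plus weighted regression can be sketched as below. All names (`ordered_selection`, the uniform weights, the synthetic signal attributes) are illustrative assumptions, not the paper's implementation:

```python
# Sketch: rank signal attributes by correlation with the reference
# temperature (OSS idea), then fit a weighted least-squares model on
# the selected attributes. Synthetic data stands in for microwave
# signal features and invasive temperature readings.
import numpy as np

rng = np.random.default_rng(1)
n, f = 200, 8
X = rng.normal(size=(n, f))                 # candidate signal attributes
temp = 37.0 + 0.8 * X[:, 2] - 0.5 * X[:, 5] + 0.05 * rng.normal(size=n)

def ordered_selection(X, y, k):
    """Rank attributes by |Pearson r| with the target, keep top k."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(-np.abs(r))[:k]

idx = ordered_selection(X, temp, k=2)

# Weighted least squares on the selected attributes (uniform weights
# here; a real system would weight by measurement reliability).
w = np.ones(n)
A = np.column_stack([np.ones(n), X[:, idx]])
beta = np.linalg.lstsq(A * w[:, None], temp * w, rcond=None)[0]
pred = A @ beta
print(sorted(idx.tolist()))  # the informative attribute columns
```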
{"title":"Noninvasive and Quantitative Brain Temperature Monitoring Using Wearable Microwave Technique","authors":"Daljeet Singh;Mariella Särestöniemi;Teemu Myllylä","doi":"10.1109/JSEN.2025.3627930","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3627930","url":null,"abstract":"A noninvasive and quantitative microwave method and setup for brain temperature monitoring are proposed in this study. The proposed microwave setup is suitable for wearable devices and prolonged usage without compromising the subject’s comfort. The proposed method is carefully devised for accurate measurements based on two-level feature extraction and is independent of the microwave sensor. A unique dataset creation module and the ordered selection scheme (OSS) based on correlation analysis are proposed to ensure real-time operation with a lightweight algorithm. Finally, the quantitative method is devised using weighted regression analysis on signal attributes selected using OSS. Six thin, small, lightweight microwave sensors are evaluated with different placement strategies for brain temperature monitoring. A realistic phantom model is developed exclusively to test the proposed microwave method and sensors. The dynamic phantom model mimics the dielectric properties of a human head. 
The correlation and regression analysis performed on data collected from numerous trials showcase that the proposed microwave system can detect minute changes in brain temperature, and its response is analogous to temperature values measured by invasive sensors.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44898-44909"},"PeriodicalIF":4.3,"publicationDate":"2025-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11241138","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision-based grasping detection is extensively utilized in the field of production and manufacturing, leveraging multisource visual data to generate feature maps and achieve robust autonomous grasps. However, significant challenges remain in effectively integrating multisource visual inputs and overcoming catastrophic forgetting in scenarios that vary with time. To address these issues, this article proposes: 1) a three-branch RGB-D fusion module for cross-modal feature synthesis, integrated into the GR-ConvNet framework to optimize antipodal grasping detection; 2) a composite distillation strategy combining perceptual loss with smooth L1 loss to stabilize knowledge retention across sequential tasks; and 3) a robotic grasping detection system driven by RGB-D sensor integration to facilitate autonomous grasping of objects with diverse shapes. Comprehensive evaluations demonstrate state-of-the-art performance of our methods: 98.9% grasping detection accuracy on the Cornell dataset, 89.12% mean grasp accuracy on the final continual learning task, and 82% grasp success rate in real-world robotic trials. Moreover, ablation experiments conducted on our proposed model and the corresponding continual learning approach demonstrate the effectiveness of the three-branch deep fusion (3-BDF) module and the combined distillation loss. To our knowledge, this is the first application of a perceptual loss approach in RGB-D sensor-driven grasping detection tasks designed for continuously changing scenarios. Code and Video are available at: https://github.com/lyxhnu/Cornell-CL
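The composite distillation objective, smooth L1 on the output grasp maps plus a perceptual-style term on intermediate features, can be sketched as follows. This is a pure-NumPy stand-in with invented shapes and weighting, not the authors' training code:

```python
# Sketch: composite distillation loss = smooth L1 between student and
# teacher outputs + a perceptual-style MSE between matched feature
# maps. Shapes and the mixing weight lam are illustrative.
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    d = np.abs(pred - target)
    return np.mean(np.where(d < beta, 0.5 * d**2 / beta, d - 0.5 * beta))

def perceptual(feats_s, feats_t):
    """Mean squared distance between matched intermediate features."""
    return np.mean([np.mean((s - t) ** 2) for s, t in zip(feats_s, feats_t)])

def distill_loss(out_s, out_t, feats_s, feats_t, lam=0.5):
    return smooth_l1(out_s, out_t) + lam * perceptual(feats_s, feats_t)

rng = np.random.default_rng(0)
out_t = rng.normal(size=(1, 4, 32, 32))        # teacher grasp maps
out_s = out_t + 0.1 * rng.normal(size=out_t.shape)
feats_t = [rng.normal(size=(1, 16, 16, 16))]   # one matched feature level
feats_s = [f + 0.1 for f in feats_t]
loss = distill_loss(out_s, out_t, feats_s, feats_t)
print(loss > 0.0)
```

The smooth L1 branch is robust to outliers in the regression targets, while the perceptual branch penalizes drift in the student's internal representation across sequential tasks.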
{"title":"Robotic Grasping Detection Based on Continual Learning Using Perceptual Loss and Multibranch Deep Fusion","authors":"Qiaokang Liang;Yaoxin Lai;Songyun Deng;Xinhao Chen;Xiaoyu Yuan;Li Zhou","doi":"10.1109/JSEN.2025.3628829","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3628829","url":null,"abstract":"Vision-based grasping detection is extensively utilized in the field of production and manufacturing, leveraging multisource visual data to generate feature maps and achieve robust autonomous grasps. However, significant challenges remain in effectively integrating multisource visual inputs and overcoming catastrophic forgetting in scenarios that vary with time. To address these issues, this article proposes: 1) a three-branch RGB-D fusion module for cross-modal feature synthesis, integrated into the GR-ConvNet framework to optimize antipodal grasping detection; 2) a composite distillation strategy combining perceptual loss with smooth L1 loss to stabilize knowledge retention across sequential tasks; and 3) a robotic grasping detection system driven by RGB-D sensor integration to facilitate autonomous grasping of objects with diverse shapes. Comprehensive evaluations demonstrate state-of-the-art performance of our methods: 98.9% grasping detection accuracy on the Cornell dataset, 89.12% mean grasp accuracy on the final continual learning task, and 82% grasp success rate in real-world robotic trials. Moreover, ablation experiments conducted on our proposed model and the corresponding continual learning approach demonstrate the effectiveness of the three-branch deep fusion (3-BDF) module and the combined distillation loss. To our knowledge, this is the first application of a perceptual loss approach in RGB-D sensor-driven grasping detection tasks designed for continuously changing scenarios. 
Code and Video are available at: <uri>https://github.com/lyxhnu/Cornell-CL</uri>","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44962-44972"},"PeriodicalIF":4.3,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-10
DOI: 10.1109/JSEN.2025.3628663
Yuanzhe Li;Steffen Müller
Pedestrian crossing intention prediction is crucial for autonomous vehicles (AVs), enabling timely reactions to prevent potential accidents, especially in urban areas. The prediction task is challenging because the pedestrian’s behavior is highly diverse and influenced by various environmental and social factors. Although various networks have shown the potential to exploit complementary cues through multimodal fusion in this task, certain issues remain unresolved. First, critical contextual information, such as geometric depth and its associated modalities, has not been adequately explored. Second, the effective multimodal fusion strategies—particularly in terms of fusion scales and fusion order—remain underexplored. To address these limitations, a multimodal Transformer with cross-modality guided attention (MTC) is proposed. MTC fuses seven visual and motion modality features extracted from multiple Transformer-based encoding modules, incorporating depth maps (DMs) as a new modality to supplement the model’s understanding of scene geometry and pedestrian-centric distance information. MTC follows a multimodal fusion strategy in the spatial–modality–temporal order. Specifically, a novel cross-modality guided attention (CMGA) mechanism is designed to capture complementary feature maps through comprehensive interactions between coregistered visual modalities. Additionally, intermodal attention (IMA) and Transformer-based temporal feature fusion (TFF) are designed to effectively facilitate cross-modal interaction and capture temporal dependencies. Extensive evaluations on the JAAD dataset validate the proposed network’s effectiveness, outperforming the state-of-the-art (SOTA) methods.
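The core of cross-modality guided attention is that one modality supplies the queries while another supplies the keys and values. A minimal single-head sketch (token counts, dimensions, and random projections are illustrative, not the MTC architecture):

```python
# Sketch: cross-modal scaled dot-product attention. Queries come from
# one modality (e.g., RGB tokens), keys/values from another (e.g.,
# depth-map tokens), so RGB features are "guided" by depth context.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats, d_k=16):
    rng = np.random.default_rng(0)
    Wq = rng.normal(size=(q_feats.shape[-1], d_k))
    Wk = rng.normal(size=(kv_feats.shape[-1], d_k))
    Wv = rng.normal(size=(kv_feats.shape[-1], d_k))
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (n_q, n_kv) weights
    return attn @ V, attn

rgb = np.random.default_rng(1).normal(size=(10, 32))    # RGB tokens
depth = np.random.default_rng(2).normal(size=(10, 32))  # depth tokens
out, attn = cross_attention(rgb, depth)
print(out.shape, attn.shape)
```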
{"title":"MTC: Multimodal Transformer With Cross-Modality Guided Attention for Pedestrian Crossing Intention Prediction","authors":"Yuanzhe Li;Steffen Müller","doi":"10.1109/JSEN.2025.3628663","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3628663","url":null,"abstract":"Pedestrian crossing intention prediction is crucial for autonomous vehicles (AVs), enabling timely reactions to prevent potential accidents, especially in urban areas. The prediction task is challenging because the pedestrian’s behavior is highly diverse and influenced by various environmental and social factors. Although various networks have shown the potential to exploit complementary cues through multimodal fusion in this task, certain issues remain unresolved. First, critical contextual information, such as geometric depth and its associated modalities, has not been adequately explored. Second, the effective multimodal fusion strategies—particularly in terms of fusion scales and fusion order—remain underexplored. To address these limitations, a multimodal Transformer with cross-modality guided attention (MTC) is proposed. MTC fuses seven visual and motion modality features extracted from multiple Transformer-based encoding modules, incorporating depth maps (DMs) as a new modality to supplement the model’s understanding of scene geometry and pedestrian-centric distance information. MTC follows a multimodal fusion strategy in the spatial–modality–temporal order. Specifically, a novel cross-modality guided attention (CMGA) mechanism is designed to capture complementary feature maps through comprehensive interactions between coregistered visual modalities. Additionally, intermodal attention (IMA) and Transformer-based temporal feature fusion (TFF) are designed to effectively facilitate cross-modal interaction and capture temporal dependencies. 
Extensive evaluations on the JAAD dataset validate the proposed network’s effectiveness, outperforming the state-of-the-art (SOTA) methods.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44929-44939"},"PeriodicalIF":4.3,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-10
DOI: 10.1109/JSEN.2025.3628713
Bin Chen;Jinlong Zhang;Junhai Yang;Bohao Pan
The volumetric proportions of ice crystals, water, and air within snowpack are highly susceptible to environmental disturbances, leading to multistate phase transitions such as dry snow, wet snow, and slush. This study introduces a new method for runway snow identification using planar electrode impedance detection. Based on dielectric polarization theory, the effects of water content (0%–30% by volume) and density (100–600 kg/m³) on the complex permittivity of snow are analyzed. A multidimensional identification space is established using the sensitive excitation bands identified at 20 and 100 kHz to accurately classify snow types. Electrode design is optimized for runway conditions, and a calibration method is applied to mitigate impedance drift caused by interference. Field tests show the developed contact sensor achieves 85% identification accuracy. This work provides a new technique for real-time, automated runway snow condition monitoring, aligning with global reporting format (GRF) standards.
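The idea of classifying snow state in a two-frequency impedance space can be sketched with a toy decision rule. The thresholds and impedance values below are invented for illustration; the paper derives its identification space from measured complex-permittivity data:

```python
# Toy sketch: classify snow state from impedance magnitudes measured
# at the two sensitive bands (20 kHz and 100 kHz). Thresholds are
# hypothetical; liquid water lowers impedance, so slush sits lowest.
def classify_snow(z20_kohm, z100_kohm):
    """Decision rule over (20 kHz, 100 kHz) impedance magnitudes."""
    if z20_kohm > 80 and z100_kohm > 60:
        return "dry snow"    # little liquid water -> high impedance
    if z20_kohm > 30:
        return "wet snow"
    return "slush"           # high water content -> low impedance

print(classify_snow(100, 70))
```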
{"title":"Runway Snow State Identification Method Based on Impedance Characteristic Differences","authors":"Bin Chen;Jinlong Zhang;Junhai Yang;Bohao Pan","doi":"10.1109/JSEN.2025.3628713","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3628713","url":null,"abstract":"The volumetric proportions of ice crystals, water, and air within snowpack are highly susceptible to environmental disturbances, leading to multistate phase transitions, such as dry snow, wet snow, and slush. This study introduces a new method for runway snow identification using planar electrode impedance detection. Based on dielectric polarization theory, the effects of water content (0%–30% by volume) and density (100–600 kg/m3) on the complex permittivity of snow are analyzed. A multidimensional identification space is established using the sensitive excitation bands identified at 20 and 100 kHz to accurately classify snow types. A multidimensional identification space is defined to accurately classify snow types. Electrode design is optimized for runway conditions, and a calibration method is applied to mitigate impedance drift caused by interference. Field tests show the developed contact sensor achieves 85% identification accuracy. This work provides a new technique for real-time, automated runway snow condition monitoring, aligning with global reporting format (GRF) standards.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44940-44950"},"PeriodicalIF":4.3,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article proposes a real-time error-compensated multisensor acquisition system for a self-weight multiphysics cone penetration apparatus that performs marine geotechnical investigation. Conventional methods such as the standard penetration test (SPT) and cone penetration test (CPT) provide reliable, high-resolution data but require dedicated offshore vessels, which are expensive to operate. To address these limitations, the apparatus with the proposed acquisition system has been developed as a lightweight and cost-effective alternative. The proposed acquisition system drives hydro-compensated dual pressure transducers, strain gauges with Wheatstone bridges, and an inertial measurement unit (IMU) to obtain accurate geotechnical parameters and to determine soil strength and stiffness properties during dynamic penetration. Additionally, the acquisition system uses the RS-485 communication protocol to transmit data over distances up to 1.2 km at data rates up to 100 kb/s. A 10.7 V lithium-ion (Li-ion) battery powers the proposed system, generating supply voltages of 9, 5, and 2 V through onboard voltage regulators to drive the analog and digital subsystems.
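One of the front-ends mentioned, strain gauges in a Wheatstone bridge, maps bridge output voltage to strain. A minimal quarter-bridge sketch under the standard small-signal approximation (the gauge factor and voltages below are example values, not the system's):

```python
# Sketch: strain from a quarter Wheatstone bridge in the small-signal
# regime, where Vout/Vex ~= (GF * strain) / 4. Example values only.
def bridge_strain(v_out, v_ex, gauge_factor=2.0):
    """Strain from quarter-bridge output voltage (small-signal)."""
    return 4.0 * v_out / (v_ex * gauge_factor)

# 1 mV of bridge output on a 5 V excitation with GF = 2.0:
eps = bridge_strain(1e-3, 5.0)
print(round(eps * 1e6, 1))  # strain expressed in microstrain
```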
{"title":"A Real-Time Error-Compensated Multisensor Acquisition System for Marine Geotechnical Investigation","authors":"Seung-Beom Ku;Hyungjin Jung;Hyungjin Cho;Jiseok Oh;Jang-Un Kim;JunA Lee;Sungjun Cho;Jongmuk Won;Junghee Park;Hyunwook Choo;Hyung-Min Lee","doi":"10.1109/JSEN.2025.3628740","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3628740","url":null,"abstract":"This article proposes a real-time errorcompensated multisensor acquisition system for a self-weight multiphysics cone penetration apparatus that performs marine geotechnical investigation. Conventional methods such as standard penetration test (SPT) and cone penetration test (CPT) provide reliable, high-resolution data but require dedicated offshore vessels, which are expensive to operate. To address these limitations, the apparatus with the proposed acquisition system has been developed for a lightweight and cost-effective solution. The proposed acquisition system drives hydro-compensated dual pressure transducers, strain gauges with Wheatstone bridges, and an inertial measurement unit (IMU) to obtain accurate geotechnical parameters as well as determine soil strength and stiffness properties during dynamic penetration. Additionally, the acquisition system uses an RS-485 communication protocol to transmit data over long distances up to 1.2 km at a data rate up to 100 kb/s. A 10.7 V lithium-ion (Li-ion) battery powers the proposed system, generating supply voltages of 9, 5, and 2 V through onboard voltage regulators to drive analog and digital subsystems. 
The proposed apparatus was verified to acquire reliable geotechnical parameters through field tests, providing a viable solution for offshore wind power development and submarine cable installations.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44951-44961"},"PeriodicalIF":4.3,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Locating and tracking targets in indoor environments is a challenging field of research. The complexity and variability of the environment limit the suitability of many technologies for this application. In this context, mmWave frequency-modulated continuous-wave (FMCW) radars can prove to be valuable sensors when combined with deep learning (DL) techniques to improve target localization and tracking performance. This article presents an original approach to locating and tracking moving targets in indoor environments, based on a YOLOv3 DL network applied to radar data. To quantify the performance of the proposed method, named mmTracking, tests were designed in accordance with the ISO/IEC 18305:2016 reference standard. The results show a mean localization error of 0.39 m with a variance of 0.01 m², and a root mean square error (RMSE) in tracking of 0.40 m.
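The reported figures (mean localization error, its variance, tracking RMSE) are standard metrics over estimated versus ground-truth positions. A minimal sketch with invented example coordinates:

```python
# Sketch: the error metrics behind the reported results, computed
# from estimated vs. ground-truth 2-D target positions.
import numpy as np

def localization_metrics(est, gt):
    err = np.linalg.norm(est - gt, axis=1)  # per-point Euclidean error
    return err.mean(), err.var(), np.sqrt(np.mean(err**2))

gt = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])   # ground truth [m]
est = gt + np.array([[0.3, 0.0], [0.0, 0.4], [-0.3, 0.0]])
mean_e, var_e, rmse = localization_metrics(est, gt)
print(round(mean_e, 4), round(rmse, 4))
```

Note that RMSE is always at least the mean error, which is why the paper's 0.40 m tracking RMSE sits just above its 0.39 m mean localization error.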
{"title":"mmTracking: A DL-Based mmWave RADAR Data Processing Algorithm for Indoor People Tracking","authors":"Michela Raimondi;Gianluca Ciattaglia;Antonio Nocera;Maria Gardano;Linda Senigagliesi;Susanna Spinsante;Ennio Gambi","doi":"10.1109/JSEN.2025.3628185","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3628185","url":null,"abstract":"Locating and tracking targets in indoor environments is a challenging field of research. The complexity and variability of the environment limit the suitability of many technologies for this application. In this context, mmWave frequency modulated continuous wave (FMCW) radars can prove to be valuable sensors when combined with deep learning (DL) techniques, in order to extend performance in target locating and tracking. This article presents an original approach to locate and track moving targets in indoor environments, based on a YOLOv3 DL network that can be applied to radar data. To quantify the performance of the proposed method, here named mmTracking, tests were designed in accordance with the ISO/IEC 18305:2016 reference standard. The results show a mean error in localization of 0.39 m with a variance of 0.01 m2, and a root mean square error (RMSE) in the tracking of 0.40 m.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"45071-45083"},"PeriodicalIF":4.3,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-07
DOI: 10.1109/JSEN.2025.3628211
Vincenzo Saroli;Emiliano Schena;Carlo Massaroni
In recent years, additive manufacturing techniques, particularly 3-D printing methods such as fused deposition modeling (FDM), have been increasingly explored for the development of physiological monitoring systems, e.g., for respiratory activity and joint kinematics, while retaining advantages such as rapid prototyping, low cost, and high customizability. This study presents the design, fabrication, and metrological characterization of a single-layer bare strain sensor (BS) produced via FDM, with a thickness of only 0.15 mm, composed of a thermoplastic polyurethane (TPU) matrix filled with carbon black (CB) particles. In addition, the work investigates the impact of integrating the BS into flexible substrates, specifically a kinesiology tape-integrated sensor (TS) and a silicone-integrated sensor (SS), to enhance mechanical robustness, a factor often neglected in the existing literature. Electromechanical characterization was performed through quasi-static and cyclic tensile tests up to 5% strain. The resistance response exhibited nonlinear behavior, with maximum relative resistance changes of 40%, 38%, and 30% for the BS, TS, and SS configurations, respectively. The highest gauge factor (GF), -14.7, was observed for the TS at 1% strain. During cyclic loading/unloading tests, all configurations demonstrated low hysteresis errors (~4%), even at high frequencies (90 cycles/min), despite the intrinsic piezoresistive nature of the sensors.
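The gauge factor figure follows from the standard definition GF = (ΔR/R0)/ε. A minimal sketch (the resistance values are invented to reproduce the reported magnitude, not measured data):

```python
# Sketch: gauge-factor computation, GF = (dR/R0) / strain. A negative
# GF means resistance drops as the sensor is stretched.
def gauge_factor(r, r0, strain):
    """Relative resistance change per unit strain."""
    return ((r - r0) / r0) / strain

# A 14.7% resistance drop at 1% strain gives GF = -14.7, matching the
# magnitude reported for the tape-integrated sensor (values invented).
gf = gauge_factor(r=853.0, r0=1000.0, strain=0.01)
print(round(gf, 1))
```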
{"title":"Fabrication and Metrological Characterization of Bare and Integrated 3-D-Printed Single-Layer CB-TPU Strain Sensors","authors":"Vincenzo Saroli;Emiliano Schena;Carlo Massaroni","doi":"10.1109/JSEN.2025.3628211","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3628211","url":null,"abstract":"In recent years, additive manufacturing techniques, particularly 3-D printing methods like fused deposition modeling (FDM), have been increasingly explored for the development of systems for physiological monitoring, such as respiratory activity and joint kinematics, while retaining advantages such as rapid prototyping, low costs, and high customizability. This study presents the design, fabrication, and metrological characterization of single-layer strain bare sensor (BS) produced via FDM, with a thickness of only 0.15 mm, composed of a thermoplastic polyurethane (TPU) matrix filled with carbon black (CB) particles. In addition, the work investigates the impact of integrating the BS into flexible substrates—specifically kinesiology tape-integrated sensor (TS) and silicone-integrated sensor (SS)—to enhance mechanical robustness, a factor often neglected in existing literature. Electromechanical characterization was performed through quasi-static and cyclic tensile tests up to 5% strain. The resistance response exhibited nonlinear behavior, with maximum relative resistance changes of 40%, 38%, and 30% for the BS, TS, and SS configurations, respectively. The highest gauge factor (GF) of -14.7 was observed for the TS at 1% strain. During cyclic loading/unloading tests, all configurations demonstrated low hysteresis errors (~4%), even at high frequencies (90 cycles/min), despite the intrinsic piezoresistive nature of the sensors. 
In hygrothermal characterization, while substrate integration did not significantly mitigate the effect of temperature, silicone encapsulation proved effective in reducing humidity sensitivity, with the SS configuration showing only a 4% variation compared to ~13% for BS and TS. Finally, pilot tests conducted on a healthy volunteer demonstrated the feasibility of using the developed sensors for respiratory monitoring and joint kinematics assessment.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44919-44928"},"PeriodicalIF":4.3,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
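The quantities reported in the abstract above follow directly from the standard piezoresistive definitions: the relative resistance change ΔR/R0 and the gauge factor GF = (ΔR/R0)/ε. A minimal sketch of those two formulas (function names and the resistance values are illustrative, not from the paper):

```python
def relative_resistance_change(r: float, r0: float) -> float:
    """Relative resistance change dR/R0 for a measured resistance r
    given the unstrained baseline resistance r0."""
    return (r - r0) / r0


def gauge_factor(r: float, r0: float, strain: float) -> float:
    """Gauge factor GF = (dR/R0) / strain, with strain dimensionless
    (e.g., 0.01 for 1% strain)."""
    if strain == 0:
        raise ValueError("gauge factor is undefined at zero strain")
    return relative_resistance_change(r, r0) / strain


# Illustrative numbers only: a piezoresistive sensor whose resistance
# drops from 1000 ohm to 853 ohm at 1% strain has dR/R0 = -0.147,
# hence GF = -0.147 / 0.01 = -14.7 (the same magnitude the abstract
# reports for the TS configuration at 1% strain).
gf = gauge_factor(853.0, 1000.0, 0.01)
```

A negative GF, as here, simply means resistance decreases as the sensor is stretched.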
Pub Date : 2025-11-07DOI: 10.1109/JSEN.2025.3627958
Biswajit Haldar;Boby George;M. Arul Muthiah;M. A. Atmanand
The high cost and power requirements of the acoustic Doppler velocimeter (ADV) restrict its use. This type of current meter is also susceptible to biofouling. A recently reported approach, in which ocean current speed over a wide range is estimated from buoy measurement data (load cell, GPS, anemometer, and wave sensor) using advanced machine learning (ML) techniques, is a viable option for ocean current speed measurement, offering lower power requirements, lower cost, and resistance to biofouling. However, the reported method is limited to the measurement of current speed alone. Although the speed of ocean currents has been widely studied, their direction is equally significant for various scientific, economic, and environmental applications. In this article, an attempt is made to estimate both the speed and direction of the surface ocean current from buoy sensor data using ML. The performance of the ML models is evaluated and validated using buoy data collected from the northern Bay of Bengal from December 2019 to February 2021. This study compares four different ML models, ultimately identifying the random forest (RF) as the best-performing model for estimating current speed and direction. The study shows a correlation of 0.94 and a root mean square error (RMSE) of 0.065 m/s between the observed and estimated current speed over the entire measurement range (0–1.56 m/s). The correlation between the estimated and observed current direction is 0.98, with an RMSE of 13.32° for the measurement range of 0.4–1.56 m/s. The results show that the model reliably estimates current speed and direction with significant accuracy. However, the speed estimation is accurate over the full range of currents, whereas the direction estimation is accurate only for currents above a threshold of 0.4 m/s.
{"title":"Performance Evaluation of ML Models for Ocean Current Speed and Direction Estimation From Buoy Sensor Data","authors":"Biswajit Haldar;Boby George;M. Arul Muthiah;M. A. Atmanand","doi":"10.1109/JSEN.2025.3627958","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3627958","url":null,"abstract":"The high cost and power requirements of the acoustic Doppler velocimeter (ADV) restrict its use. This type of current meter is also susceptible to biofouling. A recently reported innovative approach where the wide range of ocean current speed is estimated from the buoy measurement data, such as load cell, GPS, anemometer, and wave sensor, using the advanced machine learning (ML) technique, is a viable option for ocean current speed measurement with advantages such as lower power requirements, lower cost, and resistance to biofouling. However, the reported method is limited to the measurement of current speed alone. Although the speed of ocean currents has been widely studied, the direction of ocean currents is equally significant for various scientific, economic, and environmental applications. In this article, an attempt is made to estimate both the speed and direction of the surface ocean current from buoy sensor data using ML. The performance of the ML models is evaluated and validated using buoy data collected from the northern Bay of Bengal for the duration of December 2019 to February 2021. This study compares four different ML models, ultimately identifying the random forest (RF) as the best-performing model for the estimation of current speed and direction. The study shows a correlation value of 0.94 and a root mean square error (RMSE) of 0.065 m/s between the observed and estimated current speed for the entire range of measurements (0–1.56 m/s). On the other hand, the correlation between the estimated and observed current direction is found to be 0.98 with an RMSE value of 13.320 for the measurement range of 0.4–1.56 m/s. 
The result shows that the model is capable of reliably estimating the current speed and direction with significant accuracy. However, the accuracy of the speed estimation is good for the full range of current, whereas the estimation of the current direction is good for the current above a threshold value of 0.4 m/s.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44910-44918"},"PeriodicalIF":4.3,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
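The evaluation metrics quoted in the abstract above (Pearson correlation and RMSE between observed and estimated values) are standard, and comparing directions additionally requires handling the 360°/0° wrap-around, which the abstract does not detail. A minimal, self-contained sketch of those pieces (pure Python; the function names and the wrap-around handling are my assumptions, not the authors' code):

```python
import math


def rmse(y_true: list[float], y_pred: list[float]) -> float:
    """Root mean square error between observed and estimated values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)


def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


def angular_error_deg(obs: float, est: float) -> float:
    """Smallest signed difference between two bearings in degrees,
    in (-180, 180], so that e.g. 350 deg vs. 10 deg gives -20, not 340."""
    return (obs - est + 180.0) % 360.0 - 180.0
```

For direction, the RMSE would then be computed over `angular_error_deg` residuals rather than raw bearing differences; an RF regressor (e.g., scikit-learn's `RandomForestRegressor`) could serve as the estimator itself, though the paper's exact implementation is not specified here.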