Corrections to “CSFO: A Category-Specific Flattening Optimization Method for Sensor-Based Long-Tailed Activity Recognition”
Pub Date: 2025-11-13 | DOI: 10.1109/JSEN.2025.3610164
Xueer Wang;Qi Teng
Presents corrections to the paper “CSFO: A Category-Specific Flattening Optimization Method for Sensor-Based Long-Tailed Activity Recognition”.
{"title":"Corrections to “CSFO: A Category-Specific Flattening Optimization Method for Sensor-Based Long-Tailed Activity Recognition”","authors":"Xueer Wang;Qi Teng","doi":"10.1109/JSEN.2025.3610164","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3610164","url":null,"abstract":"Presents corrections to the paper, (Corrections to “CSFO: A Category-Specific Flattening Optimization Method for Sensor-Based Long-Tailed Activity Recognition”).","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 22","pages":"42413-42415"},"PeriodicalIF":4.3,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11245647","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145500470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Lightweight Perception Enhancement Network for Real-Time and Accurate Internal Surface Defect Detection of Cold-Drawn Steel Pipes
Pub Date: 2025-11-13 | DOI: 10.1109/JSEN.2025.3629733
You Tan;Kechen Song;Hongshu Chen;Yu Zhang;Yunhui Yan
The detection of internal surface defects in cold-drawn pipes is challenging. In recent years, as production demands for cold-drawn steel pipes have steadily grown, there has been an urgent need for an efficient detection approach that balances accuracy and real-time performance in industrial environments. Although several existing deep learning-based methods achieve high accuracy in surface defect detection, they often incur substantial computational costs to extract rich feature representations, which inevitably slows inference and lowers detection efficiency. Moreover, internal defects of cold-drawn pipes typically exhibit challenging characteristics that can further degrade the performance of existing models. To address these challenges, we propose a lightweight perception enhancement network (LPENet) to effectively balance efficiency and accuracy. Specifically, we introduce a progressive feature extraction (PFE) backbone that enhances contextual perception from local to global scales. Furthermore, we design a multiscale context enhancement (MCE) module to enrich the feature representation and a boundary-enhanced aggregation (BEA) module to strengthen fine-grained feature awareness. In addition, we propose a perception-guided fusion (PGF) strategy to facilitate interaction between shallow and deep features. We deploy LPENet in combination with a pipe internal surface detection (PISD) robot, achieving wireless and efficient defect detection in real-world steel pipe factories. In extensive experiments on the SSP2000 dataset, LPENet achieves the best balance between detection accuracy and efficiency. The source code is publicly available at https://github.com/VDT-2048/LPENet.
{"title":"A Lightweight Perception Enhancement Network for Real-Time and Accurate Internal Surface Defect Detection of Cold-Drawn Steel Pipes","authors":"You Tan;Kechen Song;Hongshu Chen;Yu Zhang;Yunhui Yan","doi":"10.1109/JSEN.2025.3629733","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3629733","url":null,"abstract":"The detection of internal surface defects in cold-drawn pipes is challenging. In recent years, as the production demands for cold-drawn steel pipes have steadily grown, there has been an urgent need for an efficient detection approach that balances accuracy and real-time performance in industrial environments. Although several existing deep learning-based methods have achieved high accuracy in surface defect detection, they often need substantial computational costs to extract rich feature representations, which inevitably slows down the inference process and leads to low detection efficiency. Moreover, internal defects of cold-drawn pipes typically exhibit challenges, which may further degrade the performance of existing models. To address these challenges, we propose a lightweight perception enhancement network (LPENet) to effectively balance efficiency and accuracy. Specifically, we introduce a progressive feature extraction (PFE) backbone that enhances contextual perception from local to global scales. Furthermore, we design amultiscale context enhancement (MCE) module to enrich the feature representation and a boundary-enhanced aggregation (BEA) module to strengthen fine-grained feature awareness. In addition, we propose a perception-guided fusion (PGF) strategy to facilitate interaction between shallow and deep features. We deploy LPENet in combination with a pipe internal surface detection (PISD) robot, achieving wireless and efficient defect detection in real-world steel pipe factories. In extensive experiments on the SSP2000 dataset, LPENet achieves the best balance between detection accuracy and efficiency. The source code is publicly available at <uri>https://github.com/VDT-2048/LPENet</uri>.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"26 1","pages":"1383-1394"},"PeriodicalIF":4.3,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145852514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Soybean Adulteration Detection Method Based on Adaptive Feature Compensation Classification Network and Electronic Nose
Pub Date: 2025-11-12 | DOI: 10.1109/JSEN.2025.3629234
Baosheng Wang;Xiaoxue Ping;Yang Liu
Soybean is an important food and economic crop, yet it is often subject to adulteration through the mixing of old and new beans, which threatens food safety and market fairness. This study proposes a soybean adulteration detection method based on an adaptive feature complementary classification network (AFCC-Net) and an electronic nose (e-nose) system. First, the e-nose system collects volatile compound data from soybeans with varying adulteration ratios, and t-distributed stochastic neighbor embedding (t-SNE) is employed to visualize differences among adulteration ratios. Then, an adaptive feature complementary computing module (AFCCM) is introduced, which integrates local convolutional operations with a global self-attention mechanism to complementarily fuse gas features. Residual connections are incorporated to enhance feature representation, enabling deep feature extraction from gas data. Finally, a lightweight AFCC-Net is designed to identify soybeans with different adulteration ratios. Ablation experiments validate the rationality of the AFCCM design. Compared with lightweight deep learning methods and state-of-the-art gas information classification approaches, AFCC-Net demonstrates the best classification performance under cross-validation. On the soybean adulteration dataset from Yushu City, Jilin Province, China, it achieves an accuracy of 98.67%, a precision of 98.80%, and a recall of 98.33%. On the dataset from Panjin City, Liaoning Province, China, it achieves an accuracy of 98.33%, a precision of 98.49%, and a recall of 98.05%. Moreover, the model shows strong generalization capability on the test set. Combined with the e-nose, AFCC-Net provides a nondestructive solution for soybean adulteration detection with considerable practical application value.
{"title":"A Soybean Adulteration Detection Method Based on Adaptive Feature Compensation Classification Network and Electronic Nose","authors":"Baosheng Wang;Xiaoxue Ping;Yang Liu","doi":"10.1109/JSEN.2025.3629234","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3629234","url":null,"abstract":"Soybean is an important food and economic crop, yet it is often subject to adulteration through the mixing of old and new beans, which threatens food safety and market fairness. This study proposes a soybean adulteration detection method based on an adaptive feature complementary classification network (AFCC-Net) and an electronic nose (e-nose) system. First, the e-nose system collects volatile compound data from soybeans with varying adulteration ratios, and t-distributed stochastic neighbor embedding (t-SNE) is employed to visualize differences. Then, an adaptive feature complementary computing module (AFCCM) is introduced, which integrates local convolutional operations with a global self-attention mechanism to complementarily fuse gas features. Residual connections are incorporated to enhance feature representation, enabling deep feature extraction from gas data. Finally, a lightweight AFCC-Net is designed to identify soybeans with different adulteration ratios. Ablation experiments validate the rationality of the AFCCM design. Compared with lightweight deep learning methods and state-of-the-art gas information classification approaches, AFCC-Net demonstrates the best classification performance under cross-validation. On the soybean adulteration dataset from Yushu City, Jilin Province, China, it achieves an accuracy of 98.67%, a precision of 98.80%, and a recall of 98.33%. On the soybean adulteration dataset from Panjin City, Liaoning Province, China, it achieves an accuracy of 98.33%, a precision of 98.49%, and a recall of 98.05%. Moreover, the model demonstrates strong generalization capability on the test set. The AFCC-Net combined with the e-nose detection method provides a nondestructive solution for soybean adulteration detection, indicating considerable practical application value.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"45084-45092"},"PeriodicalIF":4.3,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Endoscopic Infrared Thermal Imaging System for Burden Surface Temperature Field Measurement in Blast Furnace
Pub Date: 2025-11-11 | DOI: 10.1109/JSEN.2025.3629138
Yitian Li;Dong Pan;Zhaohui Jiang;Haoyang Yu;Gui Gui;Weihua Gui
The temperature distribution of the blast furnace (BF) burden surface is crucial for regulating the gas flow distribution and monitoring abnormal furnace conditions, yet obtaining the burden surface thermal distribution has long been a challenging issue. This study therefore proposes a novel endoscopic infrared thermal imaging system for measuring the temperature field of the burden surface. First, to address the imaging problems caused by the asymmetric viewing angle and large spatial structure of the BF, optical system design indicators suited to the BF structure are calculated based on geometric optics principles. Second, according to these indicators, an endoscopic infrared optical system combining an asymmetric reversed telephoto objective lens and a rod lens relay system is designed, ensuring the acquisition of raw infrared radiation in the BF. Subsequently, a distortion calibration method based on corner relocalization and improved covariance matrix estimation is proposed, which accurately acquires imaging parameters from checkerboard images captured in a defocused state. Finally, temperature measurement verification was conducted on a blackbody furnace and a simulated burden surface. Within the range of 600–1000 K, the relative error was within 1%, and the average temperature difference compared with a commercial infrared camera was 0.6991 K.
{"title":"A Novel Endoscopic Infrared Thermal Imaging System for Burden Surface Temperature Field Measurement in Blast Furnace","authors":"Yitian Li;Dong Pan;Zhaohui Jiang;Haoyang Yu;Gui Gui;Weihua Gui","doi":"10.1109/JSEN.2025.3629138","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3629138","url":null,"abstract":"The temperature distribution of the blast furnace (BF) burden surface is crucial to regulate the gas flow distribution and monitor the abnormal furnace conditions. However, it has always been a challenging issue to obtain the burden surface thermal distribution. Therefore, this study proposes a novel endoscopic infrared thermal imaging system for measuring the temperature field of the burden surface. First, aiming at the imaging problem brought by asymmetric viewing angle and large spatial structure in BF, the optical system design indicators suitable for the BF structure are calculated based on geometric optics principle. Second, according to the design indicator, an endoscopic infrared optical system combining an asymmetric reversed telephoto objective lens and a rod lens relay system is designed, which ensures the acquisition of raw infrared radiation in the BF. Subsequently, a distortion calibration method based on corner relocalization and improved covariance matrix estimation is proposed, which accurately acquires imaging parameters by utilizing checkerboard images captured in a defocused state. Finally, temperature measurement verification was conducted on the blackbody furnace and simulated burden surface. Within the range of 600–1000 K, the relative error was within 1%, and the average temperature difference compared with a commercial infrared camera was 0.6991 K.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44973-44983"},"PeriodicalIF":4.3,"publicationDate":"2025-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Noninvasive and Quantitative Brain Temperature Monitoring Using Wearable Microwave Technique
Pub Date: 2025-11-11 | DOI: 10.1109/JSEN.2025.3627930
Daljeet Singh;Mariella Särestöniemi;Teemu Myllylä
A noninvasive and quantitative microwave method and setup for brain temperature monitoring are proposed in this study. The proposed microwave setup is suitable for wearable devices and prolonged usage without compromising the subject’s comfort. The method is carefully devised for accurate measurements based on two-level feature extraction and is independent of the microwave sensor used. A unique dataset creation module and an ordered selection scheme (OSS) based on correlation analysis are proposed to ensure real-time operation with a lightweight algorithm. Finally, the quantitative method is devised using weighted regression analysis on signal attributes selected using the OSS. Six thin, small, lightweight microwave sensors are evaluated with different placement strategies for brain temperature monitoring. A realistic, dynamic phantom model, which mimics the dielectric properties of a human head, is developed exclusively to test the proposed microwave method and sensors. The correlation and regression analyses performed on data collected from numerous trials show that the proposed microwave system can detect minute changes in brain temperature and that its response is analogous to temperature values measured by invasive sensors.
{"title":"Noninvasive and Quantitative Brain Temperature Monitoring Using Wearable Microwave Technique","authors":"Daljeet Singh;Mariella Särestöniemi;Teemu Myllylä","doi":"10.1109/JSEN.2025.3627930","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3627930","url":null,"abstract":"A noninvasive and quantitative microwave method and setup for brain temperature monitoring are proposed in this study. The proposed microwave setup is suitable for wearable devices and prolonged usage without compromising the subject’s comfort. The proposed method is carefully devised for accurate measurements based on two-level feature extraction and is independent of the microwave sensor. A unique dataset creation module and the ordered selection scheme (OSS) based on correlation analysis are proposed to ensure real-time operation with a lightweight algorithm. Finally, the quantitative method is devised using weighted regression analysis on signal attributes selected using OSS. Six thin, small, lightweight microwave sensors are evaluated with different placement strategies for brain temperature monitoring. A realistic phantom model is developed exclusively to test the proposed microwave method and sensors. The dynamic phantom model mimics the dielectric properties of a human head. The correlation and regression analysis performed on data collected from numerous trials showcase that the proposed microwave system can detect minute changes in brain temperature, and its response is analogous to temperature values measured by invasive sensors.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44898-44909"},"PeriodicalIF":4.3,"publicationDate":"2025-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11241138","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robotic Grasping Detection Based on Continual Learning Using Perceptual Loss and Multibranch Deep Fusion
Pub Date: 2025-11-10 | DOI: 10.1109/JSEN.2025.3628829
Qiaokang Liang;Yaoxin Lai;Songyun Deng;Xinhao Chen;Xiaoyu Yuan;Li Zhou
Vision-based grasping detection is widely used in production and manufacturing, leveraging multisource visual data to generate feature maps and achieve robust autonomous grasps. However, significant challenges remain in effectively integrating multisource visual inputs and in overcoming catastrophic forgetting in scenarios that vary over time. To address these issues, this article proposes: 1) a three-branch RGB-D fusion module for cross-modal feature synthesis, integrated into the GR-ConvNet framework to optimize antipodal grasping detection; 2) a composite distillation strategy combining perceptual loss with smooth L1 loss to stabilize knowledge retention across sequential tasks; and 3) a robotic grasping detection system driven by RGB-D sensor integration to facilitate autonomous grasping of objects with diverse shapes. Comprehensive evaluations demonstrate state-of-the-art performance: 98.9% grasping detection accuracy on the Cornell dataset, 89.12% mean grasp accuracy on the final continual learning task, and an 82% grasp success rate in real-world robotic trials. Moreover, ablation experiments on the proposed model and the corresponding continual learning approach demonstrate the effectiveness of the three-branch deep fusion (3-BDF) module and the combined distillation loss. To our knowledge, this is the first application of a perceptual loss in RGB-D sensor-driven grasping detection designed for continuously changing scenarios. Code and video are available at https://github.com/lyxhnu/Cornell-CL.
{"title":"Robotic Grasping Detection Based on Continual Learning Using Perceptual Loss and Multibranch Deep Fusion","authors":"Qiaokang Liang;Yaoxin Lai;Songyun Deng;Xinhao Chen;Xiaoyu Yuan;Li Zhou","doi":"10.1109/JSEN.2025.3628829","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3628829","url":null,"abstract":"Vision-based grasping detection is extensively utilized in the field of production and manufacturing, leveraging multisource visual data to generate feature maps and achieve robust autonomous grasps. However, significant challenges remain in effectively integrating multisource visual inputs and overcoming catastrophic forgetting in scenarios that vary with time. To address these issues, this article proposes: 1) a three-branch RGB-D fusion module for cross-modal feature synthesis, integrated into the GR-ConvNet framework to optimize antipodal grasping detection; 2) a composite distillation strategy combining perceptual loss with smooth L1 loss to stabilize knowledge retention across sequential tasks; and 3) a robotic grasping detection system driven by RGB-D sensor integration to facilitate autonomous grasping of objects with diverse shapes. Comprehensive evaluations demonstrate state-of-the-art performance of our methods: 98.9% grasping detection accuracy on the Cornell dataset, 89.12% mean grasp accuracy on the final continual learning task, and 82% grasp success rate in real-world robotic trials. Moreover, ablation experiments conducted on our proposed model and the corresponding continual learning approach demonstrate the effectiveness of the three-branch deep fusion (3-BDF) module and the combined distillation loss. To our knowledge, this is the first application of a perceptual loss approach in RGB-D sensor-driven grasping detection tasks designed for continuously changing scenarios. Code and Video are available at: <uri>https://github.com/lyxhnu/Cornell-CL</uri>","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44962-44972"},"PeriodicalIF":4.3,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MTC: Multimodal Transformer With Cross-Modality Guided Attention for Pedestrian Crossing Intention Prediction
Pub Date: 2025-11-10 | DOI: 10.1109/JSEN.2025.3628663
Yuanzhe Li;Steffen Müller
Pedestrian crossing intention prediction is crucial for autonomous vehicles (AVs), enabling timely reactions to prevent potential accidents, especially in urban areas. The prediction task is challenging because pedestrian behavior is highly diverse and influenced by various environmental and social factors. Although various networks have shown the potential to exploit complementary cues through multimodal fusion in this task, certain issues remain unresolved. First, critical contextual information, such as geometric depth and its associated modalities, has not been adequately explored. Second, effective multimodal fusion strategies, particularly in terms of fusion scales and fusion order, remain underexplored. To address these limitations, a multimodal Transformer with cross-modality guided attention (MTC) is proposed. MTC fuses seven visual and motion modality features extracted by multiple Transformer-based encoding modules, incorporating depth maps (DMs) as a new modality to supplement the model’s understanding of scene geometry and pedestrian-centric distance information. MTC follows a multimodal fusion strategy in spatial–modality–temporal order. Specifically, a novel cross-modality guided attention (CMGA) mechanism is designed to capture complementary feature maps through comprehensive interactions between coregistered visual modalities. Additionally, intermodal attention (IMA) and Transformer-based temporal feature fusion (TFF) are designed to effectively facilitate cross-modal interaction and capture temporal dependencies. Extensive evaluations on the JAAD dataset validate the proposed network’s effectiveness, outperforming state-of-the-art (SOTA) methods.
{"title":"MTC: Multimodal Transformer With Cross-Modality Guided Attention for Pedestrian Crossing Intention Prediction","authors":"Yuanzhe Li;Steffen Müller","doi":"10.1109/JSEN.2025.3628663","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3628663","url":null,"abstract":"Pedestrian crossing intention prediction is crucial for autonomous vehicles (AVs), enabling timely reactions to prevent potential accidents, especially in urban areas. The prediction task is challenging because the pedestrian’s behavior is highly diverse and influenced by various environmental and social factors. Although various networks have shown the potential to exploit complementary cues through multimodal fusion in this task, certain issues remain unresolved. First, critical contextual information, such as geometric depth and its associated modalities, has not been adequately explored. Second, the effective multimodal fusion strategies—particularly in terms of fusion scales and fusion order—remain underexplored. To address these limitations, a multimodal Transformer with cross-modality guided attention (MTC) is proposed. MTC fuses seven visual and motion modality features extracted from multiple Transformer-based encoding modules, incorporating depth maps (DMs) as a new modality to supplement the model’s understanding of scene geometry and pedestrian-centric distance information. MTC follows a multimodal fusion strategy in the spatial–modality–temporal order. Specifically, a novel cross-modality guided attention (CMGA) mechanism is designed to capture complementary feature maps through comprehensive interactions between coregistered visual modalities. Additionally, intermodal attention (IMA) and Transformer-based temporal feature fusion (TFF) are designed to effectively facilitate cross-modal interaction and capture temporal dependencies. Extensive evaluations on the JAAD dataset validate the proposed network’s effectiveness, outperforming the state-of-the-art (SOTA) methods.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44929-44939"},"PeriodicalIF":4.3,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Runway Snow State Identification Method Based on Impedance Characteristic Differences
Pub Date: 2025-11-10 | DOI: 10.1109/JSEN.2025.3628713
Bin Chen;Jinlong Zhang;Junhai Yang;Bohao Pan
The volumetric proportions of ice crystals, water, and air within snowpack are highly susceptible to environmental disturbances, leading to multistate phase transitions such as dry snow, wet snow, and slush. This study introduces a new method for runway snow identification using planar electrode impedance detection. Based on dielectric polarization theory, the effects of water content (0%–30% by volume) and density (100–600 kg/m³) on the complex permittivity of snow are analyzed. A multidimensional identification space is established using the sensitive excitation bands identified at 20 and 100 kHz to accurately classify snow types. The electrode design is optimized for runway conditions, and a calibration method is applied to mitigate impedance drift caused by interference. Field tests show the developed contact sensor achieves 85% identification accuracy. This work provides a new technique for real-time, automated runway snow condition monitoring, aligned with global reporting format (GRF) standards.
{"title":"Runway Snow State Identification Method Based on Impedance Characteristic Differences","authors":"Bin Chen;Jinlong Zhang;Junhai Yang;Bohao Pan","doi":"10.1109/JSEN.2025.3628713","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3628713","url":null,"abstract":"The volumetric proportions of ice crystals, water, and air within snowpack are highly susceptible to environmental disturbances, leading to multistate phase transitions, such as dry snow, wet snow, and slush. This study introduces a new method for runway snow identification using planar electrode impedance detection. Based on dielectric polarization theory, the effects of water content (0%–30% by volume) and density (100–600 kg/m3) on the complex permittivity of snow are analyzed. A multidimensional identification space is established using the sensitive excitation bands identified at 20 and 100 kHz to accurately classify snow types. A multidimensional identification space is defined to accurately classify snow types. Electrode design is optimized for runway conditions, and a calibration method is applied to mitigate impedance drift caused by interference. Field tests show the developed contact sensor achieves 85% identification accuracy. This work provides a new technique for real-time, automated runway snow condition monitoring, aligning with global reporting format (GRF) standards.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44940-44950"},"PeriodicalIF":4.3,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Real-Time Error-Compensated Multisensor Acquisition System for Marine Geotechnical Investigation
Pub Date: 2025-11-10 | DOI: 10.1109/JSEN.2025.3628740
Seung-Beom Ku;Hyungjin Jung;Hyungjin Cho;Jiseok Oh;Jang-Un Kim;JunA Lee;Sungjun Cho;Jongmuk Won;Junghee Park;Hyunwook Choo;Hyung-Min Lee
This article proposes a real-time error-compensated multisensor acquisition system for a self-weight multiphysics cone penetration apparatus that performs marine geotechnical investigation. Conventional methods such as the standard penetration test (SPT) and the cone penetration test (CPT) provide reliable, high-resolution data but require dedicated offshore vessels, which are expensive to operate. To address these limitations, the apparatus with the proposed acquisition system has been developed as a lightweight and cost-effective solution. The acquisition system drives hydro-compensated dual pressure transducers, strain gauges with Wheatstone bridges, and an inertial measurement unit (IMU) to obtain accurate geotechnical parameters and to determine soil strength and stiffness properties during dynamic penetration. Additionally, the system uses the RS-485 communication protocol to transmit data over distances up to 1.2 km at data rates up to 100 kb/s. A 10.7 V lithium-ion (Li-ion) battery powers the system, generating supply voltages of 9, 5, and 2 V through onboard voltage regulators to drive the analog and digital subsystems. The apparatus was verified to acquire reliable geotechnical parameters through field tests, providing a viable solution for offshore wind power development and submarine cable installations.
{"title":"A Real-Time Error-Compensated Multisensor Acquisition System for Marine Geotechnical Investigation","authors":"Seung-Beom Ku;Hyungjin Jung;Hyungjin Cho;Jiseok Oh;Jang-Un Kim;JunA Lee;Sungjun Cho;Jongmuk Won;Junghee Park;Hyunwook Choo;Hyung-Min Lee","doi":"10.1109/JSEN.2025.3628740","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3628740","url":null,"abstract":"This article proposes a real-time errorcompensated multisensor acquisition system for a self-weight multiphysics cone penetration apparatus that performs marine geotechnical investigation. Conventional methods such as standard penetration test (SPT) and cone penetration test (CPT) provide reliable, high-resolution data but require dedicated offshore vessels, which are expensive to operate. To address these limitations, the apparatus with the proposed acquisition system has been developed for a lightweight and cost-effective solution. The proposed acquisition system drives hydro-compensated dual pressure transducers, strain gauges with Wheatstone bridges, and an inertial measurement unit (IMU) to obtain accurate geotechnical parameters as well as determine soil strength and stiffness properties during dynamic penetration. Additionally, the acquisition system uses an RS-485 communication protocol to transmit data over long distances up to 1.2 km at a data rate up to 100 kb/s. A 10.7 V lithium-ion (Li-ion) battery powers the proposed system, generating supply voltages of 9, 5, and 2 V through onboard voltage regulators to drive analog and digital subsystems. The proposed apparatus was verified to acquire reliable geotechnical parameters through field tests, providing a viable solution for offshore wind power development and submarine cable installations.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"44951-44961"},"PeriodicalIF":4.3,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
mmTracking: A DL-Based mmWave RADAR Data Processing Algorithm for Indoor People Tracking
Pub Date: 2025-11-07 | DOI: 10.1109/JSEN.2025.3628185
Michela Raimondi;Gianluca Ciattaglia;Antonio Nocera;Maria Gardano;Linda Senigagliesi;Susanna Spinsante;Ennio Gambi
Locating and tracking targets in indoor environments is a challenging field of research, as the complexity and variability of the environment limit the suitability of many technologies for this application. In this context, mmWave frequency-modulated continuous-wave (FMCW) radars can prove to be valuable sensors when combined with deep learning (DL) techniques to extend performance in target locating and tracking. This article presents an original approach to locating and tracking moving targets in indoor environments, based on a YOLOv3 DL network that can be applied to radar data. To quantify the performance of the proposed method, named mmTracking, tests were designed in accordance with the ISO/IEC 18305:2016 reference standard. The results show a mean localization error of 0.39 m with a variance of 0.01 m², and a root mean square error (RMSE) in tracking of 0.40 m.
{"title":"mmTracking: A DL-Based mmWave RADAR Data Processing Algorithm for Indoor People Tracking","authors":"Michela Raimondi;Gianluca Ciattaglia;Antonio Nocera;Maria Gardano;Linda Senigagliesi;Susanna Spinsante;Ennio Gambi","doi":"10.1109/JSEN.2025.3628185","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3628185","url":null,"abstract":"Locating and tracking targets in indoor environments is a challenging field of research. The complexity and variability of the environment limit the suitability of many technologies for this application. In this context, mmWave frequency modulated continuous wave (FMCW) radars can prove to be valuable sensors when combined with deep learning (DL) techniques, in order to extend performance in target locating and tracking. This article presents an original approach to locate and track moving targets in indoor environments, based on a YOLOv3 DL network that can be applied to radar data. To quantify the performance of the proposed method, here named mmTracking, tests were designed in accordance with the ISO/IEC 18305:2016 reference standard. The results show a mean error in localization of 0.39 m with a variance of 0.01 m2, and a root mean square error (RMSE) in the tracking of 0.40 m.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 24","pages":"45071-45083"},"PeriodicalIF":4.3,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}