Marieke Geerars, Natasja C Wouda, Richard A W Felius, Johanna M A Visser-Meily, Martijn F Pisters, Michiel Punt
Balance impairments in stroke rehabilitation are commonly assessed using the Trunk Control Test (TCT), Berg Balance Scale (BBS), and Mini Balance Evaluation Systems Test (Mini-BESTest). However, these conventional tests are subjective, susceptible to floor and ceiling effects, and time-intensive. Inertial measurement units (IMUs) may address these limitations by providing objective, impairment-level metrics not captured by conventional tests. This observational study explored the measurement properties of an IMU-based balance assessment of postural sway and compared them with conventional tests in routine stroke rehabilitation. Stroke survivors from five Dutch rehabilitation centers were assessed at admission and discharge using conventional and IMU-based balance tests during sitting and standing tasks. Floor and ceiling effects were evaluated, and relationships between measures were examined using correlation analysis. In total, 105 participants were measured at admission and 90 at discharge. IMU measures showed no floor or ceiling effects despite skewed distributions. IMU stance tasks correlated moderately with the BBS and Mini-BESTest (18-29% variance explained), whereas IMU sitting tasks showed weak to no relationship with the TCT. IMU-based balance assessment of postural sway captures balance-related information that is partially different from that of conventional tests. Although IMUs offer practical advantages, further research is needed to establish the clinical relevance of postural sway measurements alongside conventional tests.
Advancing Balance Assessment in Stroke Rehabilitation: A Comparative Exploration of Sensor-Based and Conventional Balance Tests. Sensors 26(4), 18 February 2026. doi:10.3390/s26041308
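The 18-29% variance explained reported for the stance tasks follows directly from squaring the Pearson correlation between the IMU sway metric and the clinical scale score. A minimal sketch of that computation, using made-up scores rather than the study's data:

```python
import numpy as np

def variance_explained(x, y):
    """Pearson correlation between two paired score vectors, squared to give
    the fraction of variance in one measure explained by the other."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Hypothetical example: a correlation of about 0.45-0.55 corresponds to the
# 18-29% variance-explained range the study reports for the stance tasks.
```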
Linjie Dong, Renfei Zhang, Zikang Shao, Ziqiu Bian, Xingsong Wang
Rebar binding is a labor-intensive and low-efficiency process in the production of reinforced concrete prefabricated components, in which consistent binding quality is difficult to guarantee. To address the engineering challenges faced by rebar binding robots in complex construction environments, particularly binding-point recognition accuracy, real-time performance, and manipulator path-planning efficiency, this paper presents an integrated method for binding-point recognition, localization, and binding path planning tailored to rebar binding tasks. First, based on the YOLOv8n-pose architecture, a lightweight rebar binding-point recognition and localization model, termed YOLOv8n-pose-Binding, is developed by introducing multi-scale Ghost convolution structures and an adaptive threshold focal loss. The proposed model improves keypoint detection accuracy and real-time performance while effectively reducing computational complexity, making it suitable for deployment on resource-constrained mobile robotic platforms. Second, a dedicated target coordinate system for rebar binding points is constructed to enable accurate pose estimation in the manipulator base frame. Furthermore, considering the non-uniform obstacle distribution in rebar mesh environments and the high-dimensional motion characteristics of robotic manipulators, systematic improvements are introduced to the RRT-Connect framework in sampling strategy, tree expansion, node reconnection, and path pruning, resulting in an improved RRT-Connect path planning algorithm. Simulation and experimental results demonstrate that, while maintaining favorable real-time performance, the proposed method achieves stable improvements in recognition accuracy and inference efficiency compared with the baseline YOLOv8n-pose model.
In addition, the improved RRT-Connect algorithm exhibits superior engineering performance in terms of path planning efficiency and path quality, providing a deployable technical solution for automated rebar binding operations.
Binding Point Recognition and Localization and Manipulator Binding Path Planning for a Rebar Binding Robot. Sensors 26(4), 18 February 2026. doi:10.3390/s26041315
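For readers unfamiliar with the baseline planner the paper improves upon, the following is a minimal 2D sketch of standard RRT-Connect (not the authors' improved variant): two trees grow from start and goal, steered toward random samples and greedily toward each other. Obstacles are circles and, for brevity, collisions are checked only at endpoints.

```python
import math
import random

def rrt_connect(start, goal, obstacles=(), step=0.5, max_iter=5000,
                bounds=(0.0, 10.0), seed=0):
    """Baseline 2D RRT-Connect. Obstacles are (cx, cy, radius) circles;
    collision is checked at new endpoints only (a simplification)."""
    rng = random.Random(seed)

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def blocked(p):
        return any(math.hypot(p[0] - cx, p[1] - cy) < r for cx, cy, r in obstacles)

    def extend(tree, target):
        # Steer the nearest tree node one step toward the target.
        near = min(tree, key=lambda n: dist(n, target))
        d = dist(near, target)
        new = target if d <= step else (
            near[0] + step * (target[0] - near[0]) / d,
            near[1] + step * (target[1] - near[1]) / d)
        if blocked(new):
            return None
        tree[new] = near                 # parent pointer for path recovery
        return new

    def trace(tree, node):
        out = []
        while node is not None:
            out.append(node)
            node = tree[node]
        return out

    ta, tb = {start: None}, {goal: None}
    for _ in range(max_iter):
        qa = extend(ta, (rng.uniform(*bounds), rng.uniform(*bounds)))
        if qa is None:
            continue
        qb = extend(tb, qa)              # CONNECT: greedily pull tb toward qa
        while qb is not None and qb != qa:
            qb = extend(tb, qa)
        if qb == qa:                     # trees met: stitch the two half-paths
            path = trace(ta, qa)[::-1] + trace(tb, qb)[1:]
            if path[0] != start:
                path.reverse()           # orient from start to goal
            return path
        ta, tb = tb, ta                  # alternate which tree expands first
    return None
```

The paper's improvements target exactly the weak spots visible here: uniform sampling wastes effort in sparse regions, and the raw stitched path is jagged until reconnection and pruning smooth it.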
The underground sewage pipeline is one of a city's lifeline systems, and pipeline temperature is an important factor in its safe operation. Based on the sewage pipeline project on Jianning Road in Nanjing, a pipeline temperature monitoring experiment was first conducted using optical frequency domain reflectometry (OFDR). Numerical simulation was also incorporated to study variations in pipeline temperature. Optical fiber monitoring data were collected, and the spatiotemporal distribution of the underground sewage pipeline temperature was explored. The results show that the pipeline temperature rises continuously but slowly, with a maximum temperature change of 0.55 °C. The numerical simulation results are consistent with the trend of the measured results. The findings provide a valuable reference for further research on sewage pipeline temperature.
Lei Gao, Xinyu Wu, Zhuodi Zheng, Mengran Guo. Study on Underground Sewage Pipeline Temperature Based on OFDR Technology and Numerical Simulation Methods. Sensors 26(4), 18 February 2026. doi:10.3390/s26041316
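The reported slow, continuous rise can be summarized as a least-squares trend over the monitoring series. A sketch with synthetic hourly readings on the order of the reported 0.55 °C total change (not the actual OFDR data):

```python
import numpy as np

# Hypothetical hourly pipeline-wall temperatures over ten days: a slow linear
# rise totalling about 0.55 degC, plus a small daily-cycle wiggle.
hours = np.arange(240)
temps = 16.0 + 0.55 * hours / hours[-1] + 0.01 * np.sin(hours / 12.0)

# Least-squares linear trend: np.polyfit returns [slope, intercept].
rate_per_hour, intercept = np.polyfit(hours, temps, 1)
max_change = temps.max() - temps.min()
```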
Underwater images are frequently degraded by wavelength-dependent absorption and scattering, which introduce strong color casts, reduce contrast, and obscure fine structures. Although learning-based enhancement methods have recently improved perceptual quality, many remain computationally intensive, limiting deployment on resource-constrained underwater platforms. To address this challenge, we propose LCS-Net, a lightweight framework for single underwater image enhancement that targets a favorable quality-efficiency trade-off. LCS-Net first applies a dynamic Learnable Color Correction Module (LCCM) that predicts image-specific correction parameters from global color statistics, enabling low-overhead cast compensation and stabilizing the input distribution. Feature extraction is conducted using efficient inverted residual blocks equipped with squeeze-and-excitation (SE) to recalibrate channel responses and facilitate detail recovery under scattering-induced degradation. At the bottleneck, a Selective Multi-Scale Dilated Block (SMSDB) aggregates complementary context via parallel dilated convolutions and global cues and adaptively reweights the fused features to handle diverse water conditions. Extensive experiments on public benchmarks demonstrate that LCS-Net achieves competitive performance, yielding a PSNR of 26.46 dB and an SSIM of 0.92 on UIEB, along with 28.71 dB and 0.86 on EUVP, while maintaining a compact model size and low computational cost, highlighting its potential for practical deployment.
Gang Li, Xiangfei Zhao. LCS-Net: Learnable Color Correction and Selective Multi-Scale Fusion for Underwater Image Enhancement. Sensors 26(4), 18 February 2026. doi:10.3390/s26041323
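The PSNR figures quoted (26.46 dB on UIEB, 28.71 dB on EUVP) use the standard peak signal-to-noise ratio definition; a minimal sketch:

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and an
    enhanced estimate; higher is better, and the high-20s dB range is
    typical for underwater enhancement benchmarks."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```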
Metin Bicer, James Pope, Lynn Rochester, Silvia Del Din, Lisa Alcock
Human activity recognition (HAR) lies at the core of digital healthcare applications that monitor different types of physical activity. Traditional HAR methods often struggle to adapt to variable-length, real-world activity data and to generalise across cohorts (e.g., from young to old cohorts). Thus, the aim of this study was to investigate HAR using wearable sensor data, with a particular focus on cross-cohort evaluation. Each dataset included two accelerometers (right thigh and lower back) sampling at 50 Hz, capturing a range of daily-life activities that were annotated using video recordings from chest-mounted cameras synchronised with the accelerometers. Neural networks were trained on young cohorts' data and tested on old cohorts' data. The effects of network architecture, sampling frequency and sensor location on classification performance were investigated. Network performance was evaluated using accuracy, recall, precision, F1-score and confusion matrices. The gated recurrent unit architecture achieved the best performance when trained solely on young cohorts' data, with weighted F1-scores of 0.95 ± 0.05 and 0.93 ± 0.05 for young and old cohorts, respectively, resulting in a highly generalizable method. Classification performance across multiple sampling frequencies was comparable. The thigh-mounted sensor consistently achieved higher performance than the lower back sensor across activities except lying. Furthermore, combining datasets significantly improved performance on the old cohort (weighted F1-score: 0.97 ± 0.02) due to increased variability in the training data.
This study highlights the importance of network architecture and dataset composition in HAR and demonstrates the potential of neural networks for robust, real-world activity recognition across age-defined cohorts, specifically between young and old cohorts.
Neural Network-Based Granular Activity Recognition from Accelerometers: Assessing Generalizability Across Diverse Mobility Profiles. Sensors 26(4), 18 February 2026. doi:10.3390/s26041320
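The weighted F1-score reported throughout averages per-class F1 with weights proportional to each class's support; a sketch of the computation:

```python
import numpy as np

def weighted_f1(y_true, y_pred):
    """Support-weighted F1: per-class F1 averaged with weights equal to each
    class's frequency in y_true."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    score = 0.0
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (np.sum(y_true == c) / len(y_true)) * f1
    return score
```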
Ang Bian, Wei Wang, Andreas Nienkötter, Baofeng Di, Tian Deng, Yi Luo, Peng Chen, Xi Li
The study of early Chinese ceramics is crucial for understanding cultural, economic, and technological developments in Chinese history. With the rapid evolution of deep learning, a pressing question is whether early Chinese ceramics can be identified from a simple 2D image without further domain knowledge. This work collected a highly diverse dataset of ancient Chinese ceramics spanning 15 dynasties, with 4 representative glaze colors and 15 shape types. We studied the performance of five state-of-the-art neural networks on two identification tasks: ceramic visual feature recognition and early Chinese ceramic dating. A class-imbalance learning strategy is designed to improve the models' performance on multi-label tasks. To the best of our knowledge, this work is the first to introduce deep learning into early Chinese ceramic recognition on a large scale. Experiments show that deep learning can recognize visual features such as glaze and most shape types with high accuracy, while ceramic dating is feasible for the main dynasties but remains challenging across the full historical span. Further quantitative assessment shows that cultural inheritance and artistic continuity can lead to plausible dating errors, with ceramics classified into adjacent dynasties or periods. Moreover, although domain knowledge is required for interpretation, deep learning shows great potential in recognizing even unlabeled time-relevant features, which can help in studying the inheritance and evolution of early Chinese ceramic development.
Can Deep Learning Identify Early Chinese Ceramics Using Only 2D Images? Sensors 26(4), 18 February 2026. doi:10.3390/s26041312
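The abstract does not specify the exact class-imbalance strategy used; one common family of approaches it may resemble is inverse-frequency class weighting, sketched here purely as an illustration:

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Illustrative class-imbalance weighting (not necessarily the paper's
    scheme): weight each class inversely to its frequency, normalized so the
    weights average to 1. Rare classes get larger loss weights."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    counts[counts == 0] = 1.0        # avoid division by zero for empty classes
    w = 1.0 / counts
    return w * num_classes / w.sum()
```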
In communication-centric integrated sensing and communication (ISAC) systems, passive radars exploit existing communication signals of opportunity for sensing. To compute delay-Doppler or range-velocity maps (DDMs and RVMs, respectively), modern orthogonal frequency division multiplexing (OFDM)-based sensing systems use the channel frequency response (CFR) originally estimated in communication receivers for equalization. In OFDM-based passive radars utilizing 4G LTE or 5G NR waveforms, CFR estimation typically relies only on reference signals. However, simulation-based studies that assume a priori knowledge of user data symbols indicate potential performance gains when incorporating user data and other downlink channels. In this work, we present an experimental evaluation of an OFDM-based passive radar that jointly utilizes all commonly present components of the 5G NR downlink waveform: synchronization signals (PSS and SSS), broadcast and control channels (PBCHs and PDCCHs, respectively), data channels (PDSCHs), and reference signals (PBCH DM-RSs, PDCCH DM-RSs, PDSCH DM-RSs, and CSI-RSs). Our results show that utilizing user data from fully occupied 5G downlink signals, under the assumption of full knowledge of PDSCH locations, significantly improves both the probability of detection (POD) and the peak height, measured by the peak-to-noise-floor ratio (PNFR), compared with pilot-only sensing. Since perfect knowledge of the user data payload is not assumed, we estimate the transmission bit error rate (BER) and analyze its impact on sensing performance. Finally, we investigate more realistic scenarios in which only a subset of PDSCH resource element locations is known, as in practical 5G deployments, and evaluate how partial data location knowledge affects the POD and PNFR under different BER conditions.
Finally, we investigate more realistic scenarios in which only a subset of PDSCH resource element locations is known, as in practical 5G deployments, and evaluate how partial data location knowledge affects the POD and PNFR under different BER conditions.
Marek Wypich, Tomasz P Zielinski. Experimental Evaluation of 5G NR OFDM-Based Passive Radar Exploiting Reference, Control, and User Data. Sensors 26(4), 18 February 2026. doi:10.3390/s26041317
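The delay-Doppler map in an OFDM passive radar is classically obtained by dividing the received symbol matrix by the known transmitted symbols (pilots plus, as in this paper, decoded data) to get the CFR, then applying an IFFT across subcarriers (delay) and an FFT across symbols (Doppler). A toy single-target sketch, not the authors' processing chain:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_sym = 64, 128                   # subcarriers x OFDM symbols
delay_bin, doppler_bin = 5, 10           # simulated target position

# Known transmitted QPSK symbols (standing in for pilots plus decoded data).
tx = np.exp(1j * np.pi / 2 * rng.integers(0, 4, (n_sub, n_sym)))

# Received symbols: a point target imposes a linear phase ramp across
# subcarriers (delay) and across symbols (Doppler), plus noise.
k = np.arange(n_sub)[:, None]
m = np.arange(n_sym)[None, :]
rx = tx * np.exp(-2j * np.pi * k * delay_bin / n_sub) \
        * np.exp(2j * np.pi * m * doppler_bin / n_sym)
rx += 0.1 * (rng.standard_normal(rx.shape) + 1j * rng.standard_normal(rx.shape))

# Divide out the known symbols, then transform to the delay-Doppler domain.
cfr = rx / tx
ddm = np.abs(np.fft.fft(np.fft.ifft(cfr, axis=0), axis=1))
peak = np.unravel_index(np.argmax(ddm), ddm.shape)   # expect (5, 10)
```

Only the resource elements whose symbols are known can contribute to `tx`, which is why detection probability and PNFR degrade when PDSCH locations are only partially known.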
Geolocalization of images captured by unmanned aerial vehicles (UAVs) remains a significant challenge in Global Navigation Satellite System-denied environments. Although geolocalization is typically achieved by matching UAV images with satellite images, the viewpoint discrepancy between oblique UAV and nadir satellite images complicates this task. In this study, we employ 3D Gaussian Splatting (3DGS) to generate images from viewpoints close to the satellite viewpoint based on multiview UAV images. Assuming that the approximate flight area of the UAV is known, we propose a geolocalization method that directly establishes correspondences between 3DGS-rendered and satellite images using pixel-level image matching. These satellite images, which we refer to as wide-area satellite images, cover a larger area than the UAV observation range. Experimental results demonstrate that the proposed method achieves higher geolocalization accuracy than existing approaches that divide wide-area satellite images and perform image retrieval. Moreover, we demonstrate the potential for geographically consistent integration of independently captured and trained 3DGS models by leveraging the correspondences between 3DGS-rendered and wide-area satellite images.
Moreover, we demonstrate the potential for geographically consistent integration of independently captured and trained 3DGS models by leveraging the correspondences between 3DGS-rendered and wide-area satellite images.
Satoshi Arakawa, Kaiyu Suzuki, Tomofumi Matsuzawa. Geolocalization of Unmanned Aerial Vehicle Images and Mapping onto Satellite Images Utilizing 3D Gaussian Splatting. Sensors 26(4), 18 February 2026. doi:10.3390/s26041322
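Pixel-level correspondences between a rendered image and a satellite image are typically condensed into a geometric transform that places the render on the map. As a simplified stand-in for that step (the paper's actual matching pipeline is not detailed here), a least-squares 2D similarity fit from matched points:

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform (scale, rotation, translation)
    from matched pixel coordinates: solves dst ~ s*R*src + t as a linear
    system in 4 unknowns and returns the 2x3 transform matrix."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    # Parameterize s*R as [[a, -b], [b, a]] plus translation (tx, ty).
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    params, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    a, b, tx, ty = params
    return np.array([[a, -b, tx], [b, a, ty]])
```

In practice a robust estimator (e.g., RANSAC over such fits) would be used, since pixel matches between rendered and satellite imagery contain outliers.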
Jayroop Ramesh, Tom Loney, Stefan Du Plessis, Homero Rivas, Assim Sagahyroon, Fadi Aloul, Thomas Boillat
Low-cost, smartphone-based thermal cameras offer unprecedented accessibility for physiological monitoring, yet their validity and reliability for absolute skin temperature measurement in clinical settings remain contentious. This study aims to quantify the agreement and repeatability of a widely used smartphone thermal camera, the FLIR One Pro, against a consumer-grade, non-contact infrared thermometer, the iHealth PT3. A method comparison study was conducted with 40 healthy adult participants, yielding a total of 2400 temperature measurements. Skin temperature of the hand dorsum was measured concurrently with the FLIR One Pro and the iHealth PT3. The protocol involved two rounds: Round 1 (R1) in a stable, static environment to assess baseline repeatability, and Round 2 (R2) in a dynamic environment mimicking clinical repositioning. The performance of the instruments was compared using paired t-tests for mean differences and Bland-Altman analysis for assessing agreement. The iHealth PT3 demonstrated superior precision, with an average intra-participant standard deviation (SD) of 0.030 °C in R1 and 0.092 °C in R2. In stark contrast, the FLIR One Pro exhibited significantly higher variability, with an average SD of 0.34 °C in R1 and 0.30 °C in R2. Bland-Altman analysis revealed a substantial mean bias of -1.42 °C in R1 and -1.15 °C in R2, with critically wide 95% limits of agreement spanning ≈6 °C. The substantial systematic bias and poor agreement of the FLIR One Pro far exceed both its manufacturer-stated accuracy and clinically acceptable error margins for absolute temperature measurement. To further examine whether calibration could mitigate these deficiencies, we applied a suite of ten machine learning regressors to map FLIR readings onto iHealth PT3 values. Calibration reduced systematic bias across all models, with Quantile Gradient-Boosted Regression Trees achieving the lowest MAE (1.162 °C).
The Extra Trees model yielded the lowest RMSE (1.792 °C) and the highest explained variance (R² = 0.152), yet this relatively low value confirms that the device's high intrinsic variability limits the effectiveness of algorithmic correction. As such, the device has limited utility for longitudinal patient monitoring or for diagnostic decisions that rely on precise, absolute temperature thresholds. These findings inform medical practitioners in low-resource settings of the profound limitations of using this device as a standalone clinical thermometer and emphasize that algorithmic correction cannot compensate for fundamental hardware and measurement noise constraints.
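The Bland-Altman analysis used above reduces to two quantities per round: the mean paired difference (bias) and bias ± 1.96 SD of the differences (the 95% limits of agreement). A minimal numpy sketch follows; the synthetic readings are ours, chosen only to mimic a roughly -1.4 °C systematic offset, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two
    paired measurement series (Bland-Altman analysis)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# synthetic paired readings: device A reads ~1.4 °C below device B
rng = np.random.default_rng(0)
b = 33.0 + rng.normal(0, 0.3, 500)          # reference thermometer
a = b - 1.4 + rng.normal(0, 0.5, 500)        # noisier, biased camera
bias, (lo, hi) = bland_altman(a, b)
```

Note that a large random error widens the limits of agreement even when the bias itself is corrected, which is exactly why calibration alone cannot rescue a noisy sensor.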
{"title":"Machine Learning Calibration of Smartphone-Based Infrared Thermal Cameras: Improved Bias and Persistent Random Error.","authors":"Jayroop Ramesh, Tom Loney, Stefan Du Plessis, Homero Rivas, Assim Sagahyroon, Fadi Aloul, Thomas Boillat","doi":"10.3390/s26041295","DOIUrl":"https://doi.org/10.3390/s26041295","url":null,"abstract":"<p><p>Low-cost, smartphone-based thermal cameras offer unprecedented accessibility for physiological monitoring, yet their validity and reliability for absolute skin temperature measurement in clinical settings remain contentious. This study aims to quantify the agreement and repeatability of a widely used smartphone thermal camera, the FLIR One Pro, against a consumer-grade, non-contact infrared thermometer, the iHealth PT3. A method comparison study was conducted with 40 healthy adult participants, yielding a total of 2400 temperature measurements. Skin temperature of the hand dorsum was measured concurrently with the FLIR One Pro and the iHealth PT3. The protocol involved two rounds: Round 1 (R1) in a stable, static environment to assess baseline repeatability, and Round 2 (R2) in a dynamic environment mimicking clinical repositioning. The performance of the instruments was compared using paired <i>t</i>-tests for mean differences and Bland-Altman analysis for assessing agreement. The iHealth PT3 demonstrated superior precision, with an average intra-participant standard deviation (SD) of 0.030 °C in R1 and 0.092 °C in R2. In stark contrast, the FLIR One Pro exhibited significantly higher variability, with an average SD of 0.34 °C in R1 and 0.30 °C in R2. Bland-Altman analysis revealed a substantial mean bias of -1.42 °C in R1 and -1.15 °C in R2, with critically wide 95% limits of agreement spanning ≈6 °C. The substantial systematic bias and poor agreement of the FLIR One Pro far exceed both its manufacturer-stated accuracy and clinically acceptable error margins for absolute temperature measurement. 
To further examine whether calibration could mitigate these deficiencies, we applied a suite of ten machine learning regressors to map FLIR readings onto iHealth PT3 values. Calibration reduced systematic bias across all models, with Quantile Gradient-Boosted Regression Trees achieving the lowest MAE (1.162 °C). The Extra Trees model yielded the lowest RMSE (1.792 °C) and the highest explained variance (R2 = 0.152), yet this relatively low value confirms that the device's high intrinsic variability limits the effectiveness of algorithmic correction. As such, the device has limited utility for longitudinal patient monitoring or for diagnostic decisions that rely on precise, absolute temperature thresholds. These findings inform medical practitioners in low-resource settings of the profound limitations of using this device as a standalone clinical thermometer and emphasize that algorithmic correction cannot compensate for fundamental hardware and measurement noise constraints.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 4","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147309762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jiří Pihrt, Petr Šimánek, Miroslav Čepek, Karel Charvát, Alexander Kovalenko, Šárka Horáková, Michal Kepka
Accurate field-scale meteorological information is required for precision agriculture, but operational numerical weather prediction products remain spatially coarse and cannot resolve local microclimate variability. This study proposes a data fusion superresolution workflow that combines global GFS predictors (0.25°), regional station observations from Southern Moravia (Czech Republic), and static physiographic descriptors (elevation and terrain gradients) to predict the 2 m air temperature 24 h ahead and to generate spatially continuous high-resolution temperature fields. Several model families (LightGBM, TabPFN, Transformer, and Bayesian neural fields) are evaluated under spatiotemporal splits designed to test generalization to unseen time periods and unseen stations; spatial mapping is implemented via a KNN interpolation layer in the physiographic feature space. All learned configurations reduce the mean absolute error relative to raw GFS across splits. In the most operationally relevant regime (unseen stations and unseen future period), TabPFN-KNN achieves the lowest MAE (1.26 °C), corresponding to an ≈24% reduction versus GFS (1.66 °C). The results support the feasibility of an operational, sensor-infrastructure-compatible pipeline for high-resolution temperature superresolution in agricultural landscapes.
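The spatial-mapping idea of a KNN interpolation layer in feature space can be sketched generically: each grid cell takes the average target of its k nearest training stations in physiographic feature space rather than geographic space. The sketch below is our own minimal illustration under assumed toy features (elevation, slope) and targets; the paper's actual layer, features, and distance weighting may differ:

```python
import numpy as np

def knn_predict(train_X, train_y, query_X, k=3):
    """Predict each query point as the mean target of its k
    nearest training points in (physiographic) feature space."""
    train_X = np.asarray(train_X, float)
    query_X = np.asarray(query_X, float)
    # pairwise Euclidean distances: (n_query, n_train)
    d = np.linalg.norm(query_X[:, None, :] - train_X[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]  # indices of k nearest neighbors
    return np.asarray(train_y, float)[idx].mean(axis=1)

# toy stations: features = (elevation m, slope); target = temperature °C
X = np.array([[100, 0.1], [110, 0.1], [500, 0.4], [520, 0.5]])
y = np.array([15.0, 14.8, 10.2, 10.0])
grid = np.array([[105, 0.1], [510, 0.45]])  # two unsampled grid cells
pred = knn_predict(X, y, grid, k=2)
# low-elevation cell comes out warm, high-elevation cell cold
```

In an operational pipeline the features would be standardized first, since raw elevation (hundreds of meters) would otherwise dominate dimensionless gradients in the distance metric.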
{"title":"AI-Driven Weather Data Superresolution via Data Fusion for Precision Agriculture.","authors":"Jiří Pihrt, Petr Šimánek, Miroslav Čepek, Karel Charvát, Alexander Kovalenko, Šárka Horáková, Michal Kepka","doi":"10.3390/s26041297","DOIUrl":"https://doi.org/10.3390/s26041297","url":null,"abstract":"<p><p>Accurate field-scale meteorological information is required for precision agriculture, but operational numerical weather prediction products remain spatially coarse and cannot resolve local microclimate variability. This study proposes a data fusion superresolution workflow that combines global GFS predictors (0.25°), regional station observations from Southern Moravia (Czech Republic), and static physiographic descriptors (elevation and terrain gradients) to predict the 2 m air temperature 24 h ahead and to generate spatially continuous high-resolution temperature fields. Several model families (LightGBM, TabPFN, Transformer, and Bayesian neural fields) are evaluated under spatiotemporal splits designed to test generalization to unseen time periods and unseen stations; spatial mapping is implemented via a KNN interpolation layer in the physiographic feature space. All learned configurations reduce the mean absolute error relative to raw GFS across splits. In the most operationally relevant regime (unseen stations and unseen future period), TabPFN-KNN achieves the lowest MAE (1.26 °C), corresponding to an ≈24% reduction versus GFS (1.66 °C). 
The results support the feasibility of an operational, sensor-infrastructure-compatible pipeline for high-resolution temperature superresolution in agricultural landscapes.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 4","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147309588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}