Hasan Tolga Ünal, Mertcan Koçak, Sebahat Yaprak Çetin, Özgün Kaya Kara, Mert Doğan
This study evaluated the test-retest reliability of a depth sensor-based Fukuda Stepping Test and examined associations between sensor-derived kinematic parameters and established clinical outcomes in older adults. Eighty-six community-dwelling older adults (mean age 70.3 ± 4.7 years) performed an eyes-closed stepping task monitored by a Microsoft Kinect v2 sensor. Clinical assessments included the Berg Balance Scale, Timed Up and Go test, Five Times Sit-to-Stand, Montreal Cognitive Assessment, International Physical Activity Questionnaire, and WHOQOL-OLD. Test-retest reliability was assessed using intraclass correlation coefficients in a randomly selected subgroup. Reliability estimates varied across parameters, with temporal and displacement-based measures demonstrating more consistent agreement across sessions, whereas selected angular variables showed greater variability. Correlation analyses identified statistically significant associations between trunk kinematic changes and clinical measures, with effect sizes generally ranging from weak to moderate in magnitude. Upper trunk rotation was associated with functional mobility measures, while traditional displacement-based metrics demonstrated limited clinical relationships. These findings support the feasibility of markerless depth-sensing technology for objective quantification of movement during the Fukuda Stepping Test and highlight the potential contribution of segmental kinematic parameters to multidimensional functional assessment in older adults.
{"title":"Depth Sensor-Based Instrumentation of the Fukuda Stepping Test: Reliability and Clinical Associations in Older Adults.","authors":"Hasan Tolga Ünal, Mertcan Koçak, Sebahat Yaprak Çetin, Özgün Kaya Kara, Mert Doğan","doi":"10.3390/s26051623","DOIUrl":"10.3390/s26051623","url":null,"abstract":"<p><p>This study evaluated the test-retest reliability of a depth sensor-based Fukuda Stepping Test and examined associations between sensor-derived kinematic parameters and established clinical outcomes in older adults. Eighty-six community-dwelling older adults (mean age 70.3 ± 4.7 years) performed an eyes-closed stepping task monitored by a Microsoft Kinect v2 sensor. Clinical assessments included the Berg Balance Scale, Timed Up and Go test, Five Times Sit-to-Stand, Montreal Cognitive Assessment, International Physical Activity Questionnaire, and WHOQOL-OLD. Test-retest reliability was assessed using intraclass correlation coefficients in a randomly selected subgroup. Reliability estimates varied across parameters, with temporal and displacement-based measures demonstrating more consistent agreement across sessions, whereas selected angular variables showed greater variability. Correlation analyses identified statistically significant associations between trunk kinematic changes and clinical measures, with effect sizes generally ranging from weak to moderate magnitude. Upper trunk rotation was associated with functional mobility measures, while traditional displacement-based metrics demonstrated limited clinical relationships. These findings support the feasibility of markerless depth-sensing technology for objective quantification of movement during the Fukuda Stepping Test and highlight the potential contribution of segmental kinematic parameters to multidimensional functional assessment in older adults.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 5","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12986737/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147459591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xinyu Yang, Qianxi Zhang, Junjie Bao, Xue Wang, Nengchao Wu, Qing Tao, Haijia Wu, Li Liu
Car engine quality control is fundamentally hindered by extremely high-dimensional, noisy, and imbalanced multi-sensor data. To overcome these challenges, this paper proposes an edge-deployable diagnostic and predictive framework. First, a Sparse Autoencoder (SAE) maps over 12,000 distributed manufacturing parameters into a robust latent space to filter instrumentation noise. Second, for defect classification, a Class-Specific Weighted Ensemble (CSWE) tackles extreme class imbalance by aggressively penalizing majority-class bias, improving defect interception recall by 7.72%. Third, for transient performance tracking, an Adaptive Regime-Switching Regression (ARSR) replaces manual phase selection with unsupervised regime routing to dynamically weight local experts, reducing relative prediction error by 12%. Rigorously validated across three diverse public datasets (NASA C-MAPSS, AI4I, SECOM) and a physical H4 engine assembly line, the framework achieves an ultra-low inference latency of 80 ± 3 ms and reduces the engine rework rate in practice by 7.2%.
{"title":"Predicting Car-Engine Manufacturing Quality with Multi-Sensor Data of Manufacturing Assembly Process.","authors":"Xinyu Yang, Qianxi Zhang, Junjie Bao, Xue Wang, Nengchao Wu, Qing Tao, Haijia Wu, Li Liu","doi":"10.3390/s26051651","DOIUrl":"10.3390/s26051651","url":null,"abstract":"<p><p>Car engine quality control is fundamentally hindered by extremely high-dimensional, noisy, and imbalanced multi-sensor data. To overcome these challenges, this paper proposes an edge-deployable diagnostic and predictive framework. First, a Sparse Autoencoder (SAE) maps over 12,000 distributed manufacturing parameters into a robust latent space to filter instrumentation noise. Second, for defect classification, a Class-Specific Weighted Ensemble (CSWE) tackles extreme class imbalance by aggressively penalizing majority-class bias, improving defect interception recall by 7.72%. Third, for transient performance tracking, an Adaptive Regime-Switching Regression (ARSR) replaces manual phase selection with unsupervised regime routing to dynamically weight local experts, reducing relative prediction error by 12%. Rigorously validated across three diverse public datasets (NASA C-MAPSS, AI4I, SECOM) and a physical H4 engine assembly line, the framework achieves an ultra-low inference latency of 80±3 ms, practically reducing the engine rework rate by 7.2%.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 5","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987041/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147460030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gyu-Yeon Kim, Somi Park, Sunkyung Lee, Bobin Seo, Seon-Han Choi, Sung-Min Park
Real-world traffic is highly dynamic, with pedestrians exhibiting unpredictable movements. Pedestrians' poses are essential cues for predicting their actions, enabling vehicles to respond proactively and reduce accident risks. In autonomous driving, the distance between vehicles and pedestrians is critical, making 3D human pose estimation crucial. In this context, pedestrian pose estimation has been actively studied, and recently, light detection and ranging (LiDAR) sensors have attracted attention due to their accurate 3D depth information and privacy benefits. However, existing LiDAR-based 3D pose estimation methods mainly process 3D data directly, requiring high computational cost and memory. In this paper, we propose a lightweight LiDAR-based 3D human pose estimation method specifically designed for deployment in autonomous driving systems. Unlike conventional 3D direct processing methods, our approach strategically reduces computational complexity by projecting point clouds into 2D depth images and leveraging a lightweight MoveNet, followed by efficient 3D lifting. Furthermore, we introduce a self-occlusion correction algorithm to improve robustness under side-view and bending poses, where depth-based projections often suffer from distortion. Experimental results on benchmark datasets demonstrate that the proposed method achieves competitive pose estimation accuracy while substantially improving efficiency, highlighting its practicality and scalability for real-time autonomous vehicle applications.
{"title":"Lightweight LiDAR-Based 3D Human Pose Estimation via 2D Depth Images for Autonomous Driving.","authors":"Gyu-Yeon Kim, Somi Park, Sunkyung Lee, Bobin Seo, Seon-Han Choi, Sung-Min Park","doi":"10.3390/s26051631","DOIUrl":"10.3390/s26051631","url":null,"abstract":"<p><p>Real-world traffic is highly dynamic, with pedestrians exhibiting unpredictable movements. Pedestrians' poses are essential cues for predicting their actions, enabling vehicles to respond proactively and reduce accident risks. In autonomous driving, the distance between vehicles and pedestrians is critical, making 3D human pose estimation crucial. In this context, pedestrian pose estimation has been actively studied, and recently, light detection and ranging (LiDAR) sensors have attracted attention due to their accurate 3D depth information and privacy benefits. However, existing LiDAR-based 3D pose estimation methods mainly process 3D data directly, requiring high computational cost and memory. In this paper, we propose a lightweight LiDAR-based 3D human pose estimation method specifically designed for deployment in autonomous driving systems. Unlike conventional 3D direct processing methods, our approach strategically reduces computational complexity by projecting point clouds into 2D depth images and leveraging a lightweight MoveNet, followed by efficient 3D lifting. Furthermore, we introduce a self-occlusion correction algorithm to improve robustness under side-view and bending poses, where depth-based projections often suffer from distortion. Experimental results on benchmark datasets demonstrate that the proposed method achieves competitive pose estimation accuracy while substantially improving efficiency, highlighting its practicality and scalability for real-time autonomous vehicle applications.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 5","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12986838/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147459199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the proliferation of Internet of Things (IoT) deployments and mobile sensing systems, reversible data hiding in encrypted images (RDHEI) has emerged as a cornerstone technology for secure cloud-based sensor data management. RDHEI ensures data confidentiality while enabling bit-to-bit restoration of original visual assets. However, conventional RDHEI methods often struggle to optimize the trade-off between high embedding capacity (EC) and the fidelity requirements of sensor-acquired content. This paper proposes an advanced RDHEI framework based on Adaptive Predicted Value Computation and Pixel Classification (APVCPC). The core contribution is a context-aware prediction engine that adaptively selects optimal estimation functions based on local texture complexity, significantly enhancing prediction accuracy in heterogeneous image regions. Subsequently, a content-driven pixel classification paradigm categorizes pixels into loadable (Lpxls) and non-loadable (NLpxls) sets using a dynamic threshold, maximizing the utilization of spatial redundancy. The proposed scheme further supports separable data extraction and image decryption, providing flexible access control for diverse user privileges in secure sensing scenarios. Experimental results on standard benchmarks and the BOW-2 database demonstrate that APVCPC achieves a superior average embedding rate exceeding 2.0 bpp and ensures perfect reversibility, significantly outperforming state-of-the-art techniques in terms of both capacity and security.
{"title":"APVCPC: An Adaptive Predicted Value Computation and Pixel Classification Framework for Reversible Data Hiding in Encrypted Images.","authors":"Yaomin Wang, Wenguang He, Gangqiang Xiong, Yuyun Chen","doi":"10.3390/s26051636","DOIUrl":"10.3390/s26051636","url":null,"abstract":"<p><p>With the proliferation of Internet of Things (IoT) deployments and mobile sensing systems, reversible data hiding in encrypted images (RDHEI) has emerged as a cornerstone technology for secure cloud-based sensor data management. RDHEI ensures data confidentiality while enabling bit-to-bit restoration of original visual assets. However, conventional RDHEI methods often struggle to optimize the trade-off between high embedding capacity (EC) and the fidelity requirements of sensor-acquired content. This paper proposes an advanced RDHEI framework based on Adaptive Predicted Value Computation and Pixel Classification (APVCPC). The core contribution is a context-aware prediction engine that adaptively selects optimal estimation functions based on local texture complexity, significantly enhancing prediction accuracy in heterogeneous image regions. Subsequently, a content-driven pixel classification paradigm categorizes pixels into loadable (Lpxls) and non-loadable (NLpxls) sets using a dynamic threshold, maximizing the utilization of spatial redundancy. The proposed scheme further supports separable data extraction and image decryption, providing flexible access control for diverse user privileges in secure sensing scenarios. Experimental results on standard benchmarks and the BOW-2 database demonstrate that APVCPC achieves a superior average embedding rate exceeding 2.0 bpp and ensures perfect reversibility, significantly outperforming state-of-the-art techniques in terms of both capacity and security.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 5","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12986865/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147459608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adel BenAbdennour, Mohammed Mukhtar, Osama Almolike, Bilal A Khawaja, Abdulmajeed M Alenezi
A persistent challenge for Deaf and Hard-of-Hearing individuals is the communication gap between sign language users and the hearing community, particularly in regions with limited automated translation resources. In Saudi Arabia, this gap is amplified by the reliance on Saudi Sign Language (SSL) and the scarcity of real-time, sentence-level translation systems. This paper presents a real-time system for sentence-level recognition of continuous SSL and direct mapping to natural spoken Arabic. The proposed system operates end-to-end on live video streams or pre-recorded content, extracting spatio-temporal landmark features using the MediaPipe Holistic framework. For classification, the input feature vector consists of 225 features derived from hand and body pose landmarks. These features are processed by a Bidirectional Long Short-Term Memory (BiLSTM) network trained on the ArabSign (ArSL) dataset to perform direct sentence-level classification over a vocabulary of 50 continuous Arabic sign language sentences, supported by an idle-based segmentation mechanism that enables natural, uninterrupted signing. Experimental evaluation demonstrates robust generalization: under a Leave-One-Signer-Out (LOSO) cross-validation protocol, the model attains a mean sentence-level accuracy of 94.2%, outperforming the fixed signer-independent split baseline of 92.07%, while maintaining real-time performance suitable for interactive use. To enhance linguistic fluency, an optional post-recognition refinement stage is incorporated using a large language model (LLM), followed by text-to-speech synthesis to produce audible Arabic output; this refinement operates strictly as post-processing and is not included in the reported recognition accuracy metrics. The results demonstrate that direct sentence-level modeling, combined with landmark-based feature extraction and real-time segmentation, provides an effective and practical solution for continuous SSL sentence recognition in real-time.
{"title":"An Intelligent Real-Time System for Sentence-Level Recognition of Continuous Saudi Sign Language Using Landmark-Based Temporal Modeling.","authors":"Adel BenAbdennour, Mohammed Mukhtar, Osama Almolike, Bilal A Khawaja, Abdulmajeed M Alenezi","doi":"10.3390/s26051652","DOIUrl":"10.3390/s26051652","url":null,"abstract":"<p><p>A persistent challenge for Deaf and Hard-of-Hearing individuals is the communication gap between sign language users and the hearing community, particularly in regions with limited automated translation resources. In Saudi Arabia, this gap is amplified by the reliance on Saudi Sign Language (SSL) and the scarcity of real-time, sentence-level translation systems. This paper presents a real-time system for sentence-level recognition of continuous SSL and direct mapping to natural spoken Arabic. The proposed system operates end-to-end on live video streams or pre-recorded content, extracting spatio-temporal landmark features using the MediaPipe Holistic framework. For classification, the input feature vector consists of 225 features derived from hand and body pose landmarks. These features are processed by a Bidirectional Long Short-Term Memory (BiLSTM) network trained on the ArabSign (ArSL) dataset to perform direct sentence-level classification over a vocabulary of 50 continuous Arabic sign language sentences, supported by an idle-based segmentation mechanism that enables natural, uninterrupted signing. Experimental evaluation demonstrates robust generalization: under a Leave-One-Signer-Out (LOSO) cross-validation protocol, the model attains a mean sentence-level accuracy of 94.2%, outperforming the fixed signer-independent split baseline of 92.07%, while maintaining real-time performance suitable for interactive use. To enhance linguistic fluency, an optional post-recognition refinement stage is incorporated using a large language model (LLM), followed by text-to-speech synthesis to produce audible Arabic output; this refinement operates strictly as post-processing and is not included in the reported recognition accuracy metrics. The results demonstrate that direct sentence-level modeling, combined with landmark-based feature extraction and real-time segmentation, provides an effective and practical solution for continuous SSL sentence recognition in real-time.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 5","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987092/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147459528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xuhui Zhang, Yuxi Liu, Yixin Yan, Jiabin Li, Lei Xu
Finger vein recognition has emerged as a highly robust and intrinsically stable biometric technology, demonstrating great potential in identity authentication and intelligent security applications. However, conventional methods still suffer from constraints in feature representation and computational efficiency, particularly under challenging conditions such as illumination variation, pose deviation, and noise interference. To address these challenges, this study presents an efficient finger vein recognition approach based on a lightweight convolutional neural network (LCNN) architecture. The proposed framework integrates a multi-stage image preprocessing pipeline for automatic vein region detection, advanced denoising, and refined texture enhancement, which is subsequently followed by compact feature modeling within a lightweight deep network. Extensive experiments on the public Shandong University Machine Learning and Applications-Homologous Multi-Modal Traits (SDUMLA-HMT) dataset and a self-acquired Laboratory Finger-Vein (Lab-Vein) dataset validate the superiority of the proposed method, achieving recognition accuracies of 97.1% and 98.3%, respectively, surpassing existing benchmark models. Moreover, the model demonstrates notable reductions in parameter complexity and computational cost, achieving an average inference time of only 12.6 ms, which confirms its strong real-time capability and suitability for embedded deployment. Overall, the proposed approach attains a desirable trade-off between accuracy and efficiency, offering meaningful implications for the advancement of lightweight biometric recognition systems.
{"title":"An Efficient Finger Vein Recognition Method Based on Improved Lightweight MobileNet.","authors":"Xuhui Zhang, Yuxi Liu, Yixin Yan, Jiabin Li, Lei Xu","doi":"10.3390/s26051634","DOIUrl":"10.3390/s26051634","url":null,"abstract":"<p><p>Finger vein recognition has emerged as a highly robust and intrinsically stable biometric technology, demonstrating great potential in identity authentication and intelligent security applications. However, conventional methods still suffer from constraints in feature representation and computational efficiency, particularly under challenging conditions such as illumination variation, pose deviation, and noise interference. To address these challenges, this study presents an efficient finger vein recognition approach based on a lightweight convolutional neural network (LCNN) architecture. The proposed framework integrates a multi-stage image preprocessing pipeline for automatic vein region detection, advanced denoising, and refined texture enhancement, which is subsequently followed by compact feature modeling within a lightweight deep network. Extensive experiments on the public Shandong University Machine Learning and Applications-Homologous Multi-Modal Traits (SDUMLA-HMT) dataset and a self-acquired Laboratory Finger-Vein (Lab-Vein) dataset validate the superiority of the proposed method, achieving recognition accuracies of 97.1% and 98.3%, respectively, surpassing existing benchmark models. Moreover, the model demonstrates notable reductions in parameter complexity and computational cost, achieving an average inference time of only 12.6 ms, which confirms its strong real-time capability and suitability for embedded deployment. Overall, the proposed approach attains a desirable trade-off between accuracy and efficiency, offering meaningful implications for the advancement of lightweight biometric recognition systems.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 5","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12986718/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147459540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vehicular communication networks demand highly efficient and accurate channel estimation to ensure reliable data exchange in high-mobility scenarios. The IEEE 802.11p standard is widely regarded as the foundation of Vehicle-to-Vehicle (V2V) communication; however, it is constrained by limited pilot resources and a fixed pilot structure, which degrade the performance and effectiveness of traditional estimation techniques, particularly in dynamic environments. Recent advances in deep learning offer significant potential for addressing these issues by improving estimation accuracy and modelling complex channel dynamics. Deep learning-based methods, however, introduce trade-offs between computational complexity and accuracy, both of which are crucial constraints in latency-sensitive V2V scenarios. This article presents a comprehensive review of deep learning-based channel estimation techniques for the IEEE 802.11p standard, critically examining the limitations of both classical and deep learning-based approaches. Additionally, the article highlights improvements introduced by IEEE 802.11bd, which features an enhanced pilot structure and advanced modulation schemes, providing a more robust framework for adaptive, efficient channel estimation. By identifying future research pathways that balance delay, complexity, and accuracy, an intelligent and effective transportation system can be established.
{"title":"Deep Learning-Based Channel Estimation Techniques Using IEEE 802.11p Protocol, Limitations of IEEE 802.11p and Future Directions of IEEE 802.11bd: A Review.","authors":"Saveeta Bai, Jeff Kilby, Krishnamachar Prasad","doi":"10.3390/s26051658","DOIUrl":"10.3390/s26051658","url":null,"abstract":"<p><p>Vehicular communication networks demand highly efficient and accurate channel estimation to ensure reliable data exchange in high mobility scenarios. The IEEE 802.11p standard is widely regarded as the foundation of the Vehicle-to-Vehicle (V2V) communication channel; however, it is constrained by limited pilot resources and a fixed pilot structure, which degrade the performance and effectiveness of traditional estimation techniques, particularly in dynamic environments. Recent advances in deep learning offer significant potential for addressing these issues by improving estimation accuracy and modelling complex channel dynamics. Though deep learning-based methods introduce trade-offs in computational complexity and accuracy, these are crucial constraints in latency-sensitive V2V scenarios. This article presents a comprehensive review of deep learning-based channel estimation techniques, analysing methods for the IEEE 802.11p standard and critically examining their limitations in both classical and deep learning-based approaches. Additionally, the article highlights improvements introduced by IEEE 802.11bd, which features an enhanced pilot structure and advanced modulation schemes, providing a more robust framework for adaptive, efficient channel estimation. By identifying future research pathways that balance delay, complexity, and accuracy, an intelligent and effective transportation system can be established.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 5","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987099/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147459582","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) employs a 532 nm laser with strong water-penetration capability, making it well suited for satellite-derived bathymetry in shallow waters; however, the effective denoising of photon-counting data remains essential due to strong solar background and intrinsic instrument noise. To address this challenge, this study proposes a novel photon denoising method, termed the Directional Nearest Neighbor Distance-based Algorithm (DNNDA), for robust extraction of signal photons from shallow-water ICESat-2 data. Unlike existing methods that rely heavily on density or terrain features and often degrade under high-noise conditions, DNNDA systematically exploits both scale-corrected spatial relationships and directional distribution characteristics of photons. By quantitatively characterizing the directional features of photon distributions and embedding this information into a density representation, DNNDA amplifies the density contrast between signal and noise photons, rendering the seafloor signal photons more distinct and easier to extract. An evaluation index was further designed to automate optimal parameter determination. Validation using multiple global ICESat-2 datasets demonstrates that DNNDA achieves superior seafloor photon extraction performance, with F1-scores exceeding 95%. Further regression analysis against high-precision CUDEM data in the Puerto Rico region yields root-mean-square errors below 0.57 m. By jointly correcting scale anisotropy and incorporating directional information, DNNDA enables reliable and adaptive signal photon extraction across local and global scales, providing a robust solution for shallow-water bathymetry in complex, high-noise environments.
{"title":"A Directional Nearest Neighbor Distance-Based Algorithm for Signal Photon Extraction from Spaceborne Photon-Counting LiDAR in Shallow Waters.","authors":"Shibin Zhao, Zhenwei Shi, Tingting Jin, Boxue Huang, Xiaokai Li, Hui Long","doi":"10.3390/s26051645","DOIUrl":"10.3390/s26051645","url":null,"abstract":"<p><p>The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) employs a 532 nm laser with strong water-penetration capability, making it well suited for satellite-derived bathymetry in shallow waters; however, the effective denoising of photon-counting data remains essential due to strong solar background and intrinsic instrument noise. To address this challenge, this study proposes a novel photon denoising method, termed the Directional Nearest Neighbor Distance-based Algorithm (DNNDA), for robust extraction of signal photons from shallow-water ICESat-2 data. Unlike existing methods that rely heavily on density or terrain features and often degrade under high-noise conditions, DNNDA systematically exploits both scale-corrected spatial relationships and directional distribution characteristics of photons. By quantitatively characterizing the directional features of photon distributions and embedding this information into a density representation, DNNDA amplifies the density contrast between signal and noise photons, rendering the seafloor signal photons more distinct and easier to extract. An evaluation index was further designed to automate optimal parameter determination. Validation using multiple global ICESat-2 datasets demonstrates that DNNDA achieves superior seafloor photon extraction performance, with F1-scores exceeding 95%. Further regression analysis against high-precision CUDEM data in the Puerto Rico region yields root-mean-square errors below 0.57 m. By jointly correcting scale anisotropy and incorporating directional information, DNNDA enables reliable and adaptive signal photon extraction across local and global scales, providing a robust solution for shallow-water bathymetry in complex, high-noise environments.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 5","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12986567/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147459719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kristina Skroce, Lauren V Turner, Andrea Zignoli, David J Lipman, Howard C Zisser, Michael C Riddell
Glucose data on extreme elite performances in athletes without diabetes remain limited. The purpose of this work is to characterize continuous glucose monitoring (CGM) responses in elite athletes across distinct high-performance contexts. This descriptive case series includes three separate elite athletes who used a CGM during their respective sporting events. The first is an ultra-endurance relay cycling world-record performance (Race Across the West, RAW), the second is a continuous high-intensity Everesting Challenge cycling record attempt, and the third is a maximal constant-weight no-fins breath-hold depth dive performed in international competition. Glycemic outcomes, as measured by CGM, included mean, maximum, and minimum glucose, glucose standard deviation (SD), and the percentage of time in tight glucose range (TITR: 70-140 mg/dL; 3.9-7.8 mmol/L), time below range (TBR: <70 mg/dL; <3.9 mmol/L), and time above range (TAR140: >140 mg/dL; >7.8 mmol/L). Other performance data, including peak power, heart rate, and lactate, are also provided where available. During the RAW challenge lasting 44 h and 20 min, mean glucose was 91 ± 23.2 mg/dL (mean ± SD) with 9.15% TBR and 35.58% TITR during cycling and 115 ± 24.7 mg/dL with 9.11% TBR and 43.16% TITR during resting periods. In contrast, the Everesting Challenge cycling record attempt demonstrated a persistently elevated glucose profile (160 ± 5.7 mg/dL), minimal variability (CV 3.5%), and 100% TAR140. Following the maximal breath-hold depth dive, interstitial glucose was 100% TAR140 during recovery (187 ± 18.5 mg/dL), alongside marked elevations in blood lactate concentrations (peak 13.4 mmol/L). This series of case studies demonstrates that substantial deviations from traditional euglycemic ranges are common during elite performance in athletes without diabetes. Interpretation of CGM data in athletic settings should therefore be performance- and context-specific rather than based on clinical glycemic thresholds.
{"title":"Beyond Euglycemia: Case Studies Using Continuous Glucose Monitoring in Elite Athletes Without Diabetes During Record Athletic Events.","authors":"Kristina Skroce, Lauren V Turner, Andrea Zignoli, David J Lipman, Howard C Zisser, Michael C Riddell","doi":"10.3390/s26051624","DOIUrl":"10.3390/s26051624","url":null,"abstract":"<p><p>Glucose data regarding extreme elite performances in athletes without diabetes remains limited. The purpose is to characterize continuous glucose monitoring (CGM) responses in elite athletes across distinct high-performance contexts. This descriptive case series includes three separate elite athletes who used a CGM during their respective sporting events. The first is an ultra-endurance relay cycling world-record performance (Race Across the West, RAW), the second is a continuous high-intensity Everesting Challenge cycling record attempt, and the third is a maximal constant-weight no-fins breath-hold depth dive performed in international competition. Glycemic outcomes, as measured by CGM, included mean, maximum, and minimum glucose, glucose standard deviation (SD), and the percentage of time in tight glucose range (TITR: 70-140 mg/dL; 3.9-7.8 mmol/L), time below range (TBR: <70 mg/dL; <3.9 mmol/L), and time above range (TAR140: >140 mg/dL; >7.8 mmol/L). Other performance data, including peak power, heart rate, and lactate, are also provided where available. During the RAW challenge lasting 44 h and 20 min, mean glucose was 91 ± 23.2 mg/dL (mean ± SD) with 9.15% TBR and 35.58% TITR during cycling and 115 ± 24.7 mg/dL with 9.11% TBR and 43.16% TITR during resting periods. In contrast, the Everesting Challenge cycling record attempt demonstrated a persistently elevated glucose profile (160 ± 5.7 mg/dL), minimal variability (CV 3.5%), and 100% TAR140. Following the maximal breath-hold depth dive, interstitial glucose was 100% TAR140 during recovery (187 ± 18.5 mg/dL), alongside marked elevations in blood lactate concentrations (peak 13.4 mmol/L). The series of case studies demonstrate that substantial deviations from traditional euglycemic ranges are common during elite performance in athletes without diabetes. Interpretation of CGM data in athletic settings should therefore be performance- and context-specific rather than based on clinical glycemic thresholds.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 5","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987343/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147459783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Azimuth multi-channel synthetic aperture radar (SAR) is a core technology for achieving high-resolution wide-swath (HRWS) imaging. However, inter-channel phase inconsistency causes image amplitude distortion and phase accuracy degradation, which severely affects subsequent applications. Existing phase error estimation methods face specific limitations: the performance of subspace-based approaches degrades in complex scenes due to unreliable covariance matrix estimation, while conventional frequency-domain correlation methods rely on manual selection of strong scatterers, introducing inefficiency and subjectivity that preclude autonomous deployment. To address these issues, this paper proposes a geometry-driven inter-channel phase error estimation framework based on a Global Radar Landmark Control Point Library (GRL-CP). The proposed framework replaces scene-dependent target selection with geometric-prior-driven control point activation. The GRL-CP library stores only the geodetic coordinates and scattering stability attributes of globally persistent radar landmarks, rather than image patches. For a new SAR acquisition, the echo positions of these landmarks are predicted using a range-Doppler geometric model, enabling fully automatic and reliable control point activation. Based on the activated radar landmarks, the inter-channel phase error is estimated using a frequency-domain correlation scheme. Experimental results on multi-channel spaceborne SAR datasets demonstrate that the proposed method achieves improved stability and accuracy under complex terrain scenarios.
{"title":"Geometry-Driven Phase Error Estimation for Azimuth Multi-Channel SAR via Global Radar Landmark Control Point Library.","authors":"Tingting Jin, Zheng Li, Feng Wang, Hui Long","doi":"10.3390/s26051622","DOIUrl":"10.3390/s26051622","url":null,"abstract":"<p><p>Azimuth multi-channel synthetic aperture radar (SAR) is a core technology for achieving high-resolution wide-swath (HRWS) imaging. However, inter-channel phase inconsistency causes image amplitude distortion and phase accuracy degradation, which severely affects subsequent applications. Existing phase error estimation methods face specific limitations: the performance of subspace-based approaches degrades in complex scenes due to unreliable covariance matrix estimation, while conventional frequency-domain correlation methods rely on manual selection of strong scatterers, introducing inefficiency and subjectivity that precludes autonomous deployment. To address these issues, this paper proposes a geometry-driven inter-channel phase error estimation framework based on Global Radar Landmark Control Point Library (GRL-CP). The proposed framework replaces scene-dependent target selection with geometric-prior-driven control point activation. The GRL-CP library stores only the geodetic coordinates and scattering stability attributes of globally persistent radar landmarks, rather than image patches. For a new SAR acquisition, the echo position of these landmarks are predicted using a range-Doppler geometric model, enabling fully automatic and reliable control point activation. Based on the activated radar landmarks, inter-channel phase error is estimated using a frequency-domain correlation scheme. Experimental results on multi-channel spaceborne SAR datasets demonstrate that the proposed method achieves improved stability and accuracy under complex terrain scenarios.</p>","PeriodicalId":21698,"journal":{"name":"Sensors","volume":"26 5","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987379/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147459793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}