
Latest publications in Sensors

Depth Sensor-Based Instrumentation of the Fukuda Stepping Test: Reliability and Clinical Associations in Older Adults.
IF 3.5 | Region 3, Multidisciplinary | Q2, Chemistry, Analytical | Pub Date: 2026-03-05 | DOI: 10.3390/s26051623
Hasan Tolga Ünal, Mertcan Koçak, Sebahat Yaprak Çetin, Özgün Kaya Kara, Mert Doğan

This study evaluated the test-retest reliability of a depth sensor-based Fukuda Stepping Test and examined associations between sensor-derived kinematic parameters and established clinical outcomes in older adults. Eighty-six community-dwelling older adults (mean age 70.3 ± 4.7 years) performed an eyes-closed stepping task monitored by a Microsoft Kinect v2 sensor. Clinical assessments included the Berg Balance Scale, Timed Up and Go test, Five Times Sit-to-Stand, Montreal Cognitive Assessment, International Physical Activity Questionnaire, and WHOQOL-OLD. Test-retest reliability was assessed using intraclass correlation coefficients in a randomly selected subgroup. Reliability estimates varied across parameters, with temporal and displacement-based measures demonstrating more consistent agreement across sessions, whereas selected angular variables showed greater variability. Correlation analyses identified statistically significant associations between trunk kinematic changes and clinical measures, with effect sizes generally ranging from weak to moderate magnitude. Upper trunk rotation was associated with functional mobility measures, while traditional displacement-based metrics demonstrated limited clinical relationships. These findings support the feasibility of markerless depth-sensing technology for objective quantification of movement during the Fukuda Stepping Test and highlight the potential contribution of segmental kinematic parameters to multidimensional functional assessment in older adults.
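The "upper trunk rotation" parameter highlighted above can be illustrated with a minimal sketch: assuming Kinect-style (x, y, z) shoulder-joint coordinates, the yaw of the shoulder line about the vertical axis is a single `arctan2`. The function name and the example coordinates are hypothetical, not taken from the paper.

```python
import numpy as np

def trunk_rotation_deg(l_shoulder, r_shoulder):
    """Yaw of the shoulder line about the vertical axis, in degrees.

    Landmarks are (x, y, z) camera-frame coordinates, e.g. from a Kinect
    skeleton; 0 deg means the shoulders are parallel to the sensor plane.
    """
    v = np.asarray(r_shoulder, float) - np.asarray(l_shoulder, float)
    # Project onto the horizontal (x-z) plane and measure the angle
    # between the shoulder line and the sensor's x-axis.
    return np.degrees(np.arctan2(v[2], v[0]))

# Facing the sensor: shoulders differ only in x, so rotation is zero.
print(trunk_rotation_deg((-0.2, 1.4, 2.0), (0.2, 1.4, 2.0)))            # 0.0
# Rotated: right shoulder 0.1 m closer to the sensor in depth.
print(round(trunk_rotation_deg((-0.2, 1.4, 2.0), (0.2, 1.4, 1.9)), 1))  # -14.0
```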

Citations: 0
Predicting Car-Engine Manufacturing Quality with Multi-Sensor Data of Manufacturing Assembly Process.
IF 3.5 | Region 3, Multidisciplinary | Q2, Chemistry, Analytical | Pub Date: 2026-03-05 | DOI: 10.3390/s26051651
Xinyu Yang, Qianxi Zhang, Junjie Bao, Xue Wang, Nengchao Wu, Qing Tao, Haijia Wu, Li Liu

Car engine quality control is fundamentally hindered by extremely high-dimensional, noisy, and imbalanced multi-sensor data. To overcome these challenges, this paper proposes an edge-deployable diagnostic and predictive framework. First, a Sparse Autoencoder (SAE) maps over 12,000 distributed manufacturing parameters into a robust latent space to filter instrumentation noise. Second, for defect classification, a Class-Specific Weighted Ensemble (CSWE) tackles extreme class imbalance by aggressively penalizing majority-class bias, improving defect interception recall by 7.72%. Third, for transient performance tracking, an Adaptive Regime-Switching Regression (ARSR) replaces manual phase selection with unsupervised regime routing to dynamically weight local experts, reducing relative prediction error by 12%. Rigorously validated across three diverse public datasets (NASA C-MAPSS, AI4I, SECOM) and a physical H4 engine assembly line, the framework achieves an ultra-low inference latency of 80±3 ms, practically reducing the engine rework rate by 7.2%.
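The class-specific weighting idea can be sketched under the assumption (not stated in the abstract) that CSWE behaves like inverse-frequency weighted voting, so votes for the rare defect class outweigh majority-class votes; `class_weights` and `weighted_vote` are illustrative names, not the paper's API.

```python
import numpy as np

def class_weights(y):
    """Inverse-frequency weights: rare (defect) classes get larger weights,
    so majority-class bias is penalized during vote aggregation."""
    classes, counts = np.unique(y, return_counts=True)
    w = counts.sum() / (len(classes) * counts.astype(float))
    return dict(zip(classes.tolist(), w.tolist()))

def weighted_vote(predictions, weights):
    """Aggregate per-model class predictions with class-specific weights."""
    scores = {}
    for p in predictions:
        scores[p] = scores.get(p, 0.0) + weights[p]
    return max(scores, key=scores.get)

y_train = np.array([0] * 95 + [1] * 5)   # 95 OK parts, 5 defects
w = class_weights(y_train)               # roughly {0: 0.53, 1: 10.0}
# Two of three ensemble members call a part OK, but the single
# defect vote carries far more weight, so the part is intercepted.
print(weighted_vote([0, 0, 1], w))       # 1
```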

Citations: 0
Lightweight LiDAR-Based 3D Human Pose Estimation via 2D Depth Images for Autonomous Driving.
IF 3.5 | Region 3, Multidisciplinary | Q2, Chemistry, Analytical | Pub Date: 2026-03-05 | DOI: 10.3390/s26051631
Gyu-Yeon Kim, Somi Park, Sunkyung Lee, Bobin Seo, Seon-Han Choi, Sung-Min Park

Real-world traffic is highly dynamic, with pedestrians exhibiting unpredictable movements. Pedestrians' poses are essential cues for predicting their actions, enabling vehicles to respond proactively and reduce accident risks. In autonomous driving, the distance between vehicles and pedestrians is critical, making 3D human pose estimation crucial. In this context, pedestrian pose estimation has been actively studied, and recently, light detection and ranging (LiDAR) sensors have attracted attention due to their accurate 3D depth information and privacy benefits. However, existing LiDAR-based 3D pose estimation methods mainly process 3D data directly, requiring high computational cost and memory. In this paper, we propose a lightweight LiDAR-based 3D human pose estimation method specifically designed for deployment in autonomous driving systems. Unlike conventional 3D direct processing methods, our approach strategically reduces computational complexity by projecting point clouds into 2D depth images and leveraging a lightweight MoveNet, followed by efficient 3D lifting. Furthermore, we introduce a self-occlusion correction algorithm to improve robustness under side-view and bending poses, where depth-based projections often suffer from distortion. Experimental results on benchmark datasets demonstrate that the proposed method achieves competitive pose estimation accuracy while substantially improving efficiency, highlighting its practicality and scalability for real-time autonomous vehicle applications.
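The key cost-saving step above, projecting a 3D point cloud into a 2D depth (range) image before running a lightweight 2D pose network, might look like this minimal sketch. The resolution, vertical field of view, and nearest-point collision rule are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def depth_image(points, h=32, w=64, fov_up=15.0, fov_down=-15.0):
    """Project LiDAR points (N, 3) into an (h, w) range image.

    Rows index elevation, columns azimuth; pixel value is range in metres,
    0 where no point falls. The nearest point wins on collisions.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                    # [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
    fov = np.radians(fov_up - fov_down)
    v = ((np.radians(fov_up) - pitch) / fov * h).astype(int).clip(0, h - 1)
    img = np.zeros((h, w))
    order = np.argsort(-r)              # far points first, near ones overwrite
    img[v[order], u[order]] = r[order]
    return img

pts = np.array([[5.0, 0.0, 0.0], [0.0, 5.0, 1.0]])
img = depth_image(pts)
print(img.shape, np.count_nonzero(img))   # (32, 64) 2
```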

Citations: 0
APVCPC: An Adaptive Predicted Value Computation and Pixel Classification Framework for Reversible Data Hiding in Encrypted Images.
IF 3.5 | Region 3, Multidisciplinary | Q2, Chemistry, Analytical | Pub Date: 2026-03-05 | DOI: 10.3390/s26051636
Yaomin Wang, Wenguang He, Gangqiang Xiong, Yuyun Chen

With the proliferation of Internet of Things (IoT) deployments and mobile sensing systems, reversible data hiding in encrypted images (RDHEI) has emerged as a cornerstone technology for secure cloud-based sensor data management. RDHEI ensures data confidentiality while enabling bit-to-bit restoration of original visual assets. However, conventional RDHEI methods often struggle to optimize the trade-off between high embedding capacity (EC) and the fidelity requirements of sensor-acquired content. This paper proposes an advanced RDHEI framework based on Adaptive Predicted Value Computation and Pixel Classification (APVCPC). The core contribution is a context-aware prediction engine that adaptively selects optimal estimation functions based on local texture complexity, significantly enhancing prediction accuracy in heterogeneous image regions. Subsequently, a content-driven pixel classification paradigm categorizes pixels into loadable (Lpxls) and non-loadable (NLpxls) sets using a dynamic threshold, maximizing the utilization of spatial redundancy. The proposed scheme further supports separable data extraction and image decryption, providing flexible access control for diverse user privileges in secure sensing scenarios. Experimental results on standard benchmarks and the BOW-2 database demonstrate that APVCPC achieves a superior average embedding rate exceeding 2.0 bpp and ensures perfect reversibility, significantly outperforming state-of-the-art techniques in terms of both capacity and security.
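A hedged sketch of the loadable/non-loadable pixel split: here a simple mean-of-neighbors predictor and a fixed threshold stand in for APVCPC's adaptive predictor and dynamic threshold, just to show how prediction error drives the classification.

```python
import numpy as np

def loadable_mask(img, threshold=4):
    """Mark interior pixels with small prediction error as loadable (Lpxls).

    Predictor: integer mean of the top and left neighbors (a common RDH
    choice, not necessarily APVCPC's). Border pixels are never loadable.
    """
    img = img.astype(int)
    pred = (img[:-1, 1:] + img[1:, :-1]) // 2   # top + left neighbor mean
    err = np.abs(img[1:, 1:] - pred)
    mask = np.zeros(img.shape, bool)
    mask[1:, 1:] = err <= threshold
    return mask

smooth = np.full((4, 4), 100)                   # flat region: well predicted
print(loadable_mask(smooth).sum())              # 9 interior pixels loadable
```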

Citations: 0
An Intelligent Real-Time System for Sentence-Level Recognition of Continuous Saudi Sign Language Using Landmark-Based Temporal Modeling.
IF 3.5 | Region 3, Multidisciplinary | Q2, Chemistry, Analytical | Pub Date: 2026-03-05 | DOI: 10.3390/s26051652
Adel BenAbdennour, Mohammed Mukhtar, Osama Almolike, Bilal A Khawaja, Abdulmajeed M Alenezi

A persistent challenge for Deaf and Hard-of-Hearing individuals is the communication gap between sign language users and the hearing community, particularly in regions with limited automated translation resources. In Saudi Arabia, this gap is amplified by the reliance on Saudi Sign Language (SSL) and the scarcity of real-time, sentence-level translation systems. This paper presents a real-time system for sentence-level recognition of continuous SSL and direct mapping to natural spoken Arabic. The proposed system operates end-to-end on live video streams or pre-recorded content, extracting spatio-temporal landmark features using the MediaPipe Holistic framework. For classification, the input feature vector consists of 225 features derived from hand and body pose landmarks. These features are processed by a Bidirectional Long Short-Term Memory (BiLSTM) network trained on the ArabSign (ArSL) dataset to perform direct sentence-level classification over a vocabulary of 50 continuous Arabic sign language sentences, supported by an idle-based segmentation mechanism that enables natural, uninterrupted signing. Experimental evaluation demonstrates robust generalization: under a Leave-One-Signer-Out (LOSO) cross-validation protocol, the model attains a mean sentence-level accuracy of 94.2%, outperforming the fixed signer-independent split baseline of 92.07%, while maintaining real-time performance suitable for interactive use. To enhance linguistic fluency, an optional post-recognition refinement stage is incorporated using a large language model (LLM), followed by text-to-speech synthesis to produce audible Arabic output; this refinement operates strictly as post-processing and is not included in the reported recognition accuracy metrics. The results demonstrate that direct sentence-level modeling, combined with landmark-based feature extraction and real-time segmentation, provides an effective and practical solution for continuous SSL sentence recognition in real-time.
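The 225-dimensional feature vector is consistent with MediaPipe Holistic's 33 pose plus 2 x 21 hand landmarks at 3 coordinates each (33*3 + 21*3 + 21*3 = 225). Assembling it per frame might look like this; zero-filling undetected parts is a common convention assumed here, not something the abstract states.

```python
import numpy as np

N_POSE, N_HAND = 33, 21  # MediaPipe Holistic landmark counts

def frame_features(pose, left_hand, right_hand):
    """Flatten per-frame landmarks into one 225-dim vector for the BiLSTM.

    Each input is an (N, 3) array of (x, y, z) landmark coordinates, or
    None when that body part is not detected (zero-filled so the feature
    layout stays fixed: 33*3 + 21*3 + 21*3 = 225).
    """
    parts = []
    for lm, n in ((pose, N_POSE), (left_hand, N_HAND), (right_hand, N_HAND)):
        parts.append(np.zeros(n * 3) if lm is None
                     else np.asarray(lm, float).ravel())
    return np.concatenate(parts)

# Left hand out of frame: its 63 slots are zero, the layout is unchanged.
feat = frame_features(np.ones((33, 3)), None, np.ones((21, 3)))
print(feat.shape)   # (225,)
```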

Citations: 0
An Efficient Finger Vein Recognition Method Based on Improved Lightweight MobileNet.
IF 3.5 | Region 3, Multidisciplinary | Q2, Chemistry, Analytical | Pub Date: 2026-03-05 | DOI: 10.3390/s26051634
Xuhui Zhang, Yuxi Liu, Yixin Yan, Jiabin Li, Lei Xu

Finger vein recognition has emerged as a highly robust and intrinsically stable biometric technology, demonstrating great potential in identity authentication and intelligent security applications. However, conventional methods still suffer from constraints in feature representation and computational efficiency, particularly under challenging conditions such as illumination variation, pose deviation, and noise interference. To address these challenges, this study presents an efficient finger vein recognition approach based on a lightweight convolutional neural network (LCNN) architecture. The proposed framework integrates a multi-stage image preprocessing pipeline for automatic vein region detection, advanced denoising, and refined texture enhancement, which is subsequently followed by compact feature modeling within a lightweight deep network. Extensive experiments on the public Shandong University Machine Learning and Applications-Homologous Multi-Modal Traits (SDUMLA-HMT) dataset and a self-acquired Laboratory Finger-Vein (Lab-Vein) dataset validate the superiority of the proposed method, achieving recognition accuracies of 97.1% and 98.3%, respectively, surpassing existing benchmark models. Moreover, the model demonstrates notable reductions in parameter complexity and computational cost, achieving an average inference time of only 12.6 ms, which confirms its strong real-time capability and suitability for embedded deployment. Overall, the proposed approach attains a desirable trade-off between accuracy and efficiency, offering meaningful implications for the advancement of lightweight biometric recognition systems.
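As a stand-in for the texture-enhancement stage of the preprocessing pipeline, a percentile contrast stretch on the vein region of interest spreads the dark vein lines over the full intensity range; the percentile cut-offs are illustrative choices, not the paper's parameters.

```python
import numpy as np

def enhance_veins(img, low=2, high=98):
    """Percentile contrast stretch: map the [low, high] percentile range
    of the ROI onto [0, 1], clipping sensor outliers at both tails."""
    lo, hi = np.percentile(img, [low, high])
    out = (img.astype(float) - lo) / max(hi - lo, 1e-9)
    return np.clip(out, 0.0, 1.0)

img = np.linspace(40, 90, 100).reshape(10, 10)   # low-contrast ROI
out = enhance_veins(img)
print(out.min(), out.max())                      # 0.0 1.0
```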

Citations: 0
Deep Learning-Based Channel Estimation Techniques Using IEEE 802.11p Protocol, Limitations of IEEE 802.11p and Future Directions of IEEE 802.11bd: A Review.
IF 3.5 | Region 3, Multidisciplinary | Q2, Chemistry, Analytical | Pub Date: 2026-03-05 | DOI: 10.3390/s26051658
Saveeta Bai, Jeff Kilby, Krishnamachar Prasad

Vehicular communication networks demand highly efficient and accurate channel estimation to ensure reliable data exchange in high-mobility scenarios. The IEEE 802.11p standard is widely regarded as the foundation of the Vehicle-to-Vehicle (V2V) communication channel; however, it is constrained by limited pilot resources and a fixed pilot structure, which degrade the performance and effectiveness of traditional estimation techniques, particularly in dynamic environments. Recent advances in deep learning offer significant potential for addressing these issues by improving estimation accuracy and modelling complex channel dynamics. Deep learning-based methods do, however, introduce trade-offs between computational complexity and accuracy, both of which are crucial constraints in latency-sensitive V2V scenarios. This article presents a comprehensive review of deep learning-based channel estimation techniques, analysing methods for the IEEE 802.11p standard and critically examining their limitations in both classical and deep learning-based approaches. Additionally, the article highlights improvements introduced by IEEE 802.11bd, which features an enhanced pilot structure and advanced modulation schemes, providing a more robust framework for adaptive, efficient channel estimation. By identifying future research pathways that balance delay, complexity, and accuracy, an intelligent and effective transportation system can be established.
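For context, the classical pilot-based baseline that the reviewed deep models aim to beat is a least-squares estimate at the pilot subcarriers, interpolated across the OFDM symbol. This sketch uses an illustrative pilot layout and channel, not the actual 802.11p pilot structure.

```python
import numpy as np

def ls_channel_estimate(rx, tx_pilots, pilot_idx, n_sub):
    """Least-squares channel estimate at pilot subcarriers, linearly
    interpolated over all n_sub subcarriers of the OFDM symbol."""
    h_p = rx[pilot_idx] / tx_pilots              # LS: H = Y / X at pilots
    k = np.arange(n_sub)
    # Interpolate the real and imaginary parts separately.
    return (np.interp(k, pilot_idx, h_p.real)
            + 1j * np.interp(k, pilot_idx, h_p.imag))

# Flat channel H = 2+0j observed through 4 pilots on a 16-subcarrier symbol.
pilots = np.array([0, 5, 10, 15])
tx = np.ones(4, complex)
rx = np.zeros(16, complex)
rx[pilots] = 2.0
h = ls_channel_estimate(rx, tx, pilots, 16)
print(h[7])   # (2+0j)
```

In a real receiver the estimate would be refined (e.g. tracked across symbols), which is exactly where the reviewed deep learning methods intervene.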

Citations: 0
A Directional Nearest Neighbor Distance-Based Algorithm for Signal Photon Extraction from Spaceborne Photon-Counting LiDAR in Shallow Waters.
IF 3.5 | CAS Tier 3, Multidisciplinary | Q2 CHEMISTRY, ANALYTICAL | Pub Date: 2026-03-05 | DOI: 10.3390/s26051645
Shibin Zhao, Zhenwei Shi, Tingting Jin, Boxue Huang, Xiaokai Li, Hui Long

The Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) employs a 532 nm laser with strong water-penetration capability, making it well suited for satellite-derived bathymetry in shallow waters; however, the effective denoising of photon-counting data remains essential due to strong solar background and intrinsic instrument noise. To address this challenge, this study proposes a novel photon denoising method, termed the Directional Nearest Neighbor Distance-based Algorithm (DNNDA), for robust extraction of signal photons from shallow-water ICESat-2 data. Unlike existing methods that rely heavily on density or terrain features and often degrade under high-noise conditions, DNNDA systematically exploits both scale-corrected spatial relationships and directional distribution characteristics of photons. By quantitatively characterizing the directional features of photon distributions and embedding this information into a density representation, DNNDA amplifies the density contrast between signal and noise photons, rendering the seafloor signal photons more distinct and easier to extract. An evaluation index was further designed to automate optimal parameter determination. Validation using multiple global ICESat-2 datasets demonstrates that DNNDA achieves superior seafloor photon extraction performance, with F1-scores exceeding 95%. Further regression analysis against high-precision CUDEM data in the Puerto Rico region yields root-mean-square errors below 0.57 m. By jointly correcting scale anisotropy and incorporating directional information, DNNDA enables reliable and adaptive signal photon extraction across local and global scales, providing a robust solution for shallow-water bathymetry in complex, high-noise environments.
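The density-contrast idea behind DNNDA can be illustrated with a simple anisotropic k-nearest-neighbor density on a synthetic photon cloud. The scene, scale factor, and threshold below are assumptions made for illustration; the paper's directional weighting and automatic parameter selection are not reproduced.

```python
import numpy as np

# Illustrative sketch: scale-corrected k-NN density separating seafloor
# signal photons from background noise in a synthetic shallow-water scene.
rng = np.random.default_rng(1)

# Synthetic scene: a gently sloping seafloor band plus uniform noise photons
x_sig = rng.uniform(0, 100, 300)                 # along-track (m)
z_sig = 0.05 * x_sig + rng.normal(0, 0.1, 300)   # elevation along seafloor (m)
x_noise = rng.uniform(0, 100, 300)
z_noise = rng.uniform(-10, 10, 300)
x = np.concatenate([x_sig, x_noise])
z = np.concatenate([z_sig, z_noise])
labels = np.concatenate([np.ones(300, bool), np.zeros(300, bool)])

# Scale correction: shrink the along-track axis so the near-horizontal
# signal band becomes locally dense (the 20:1 ratio is an assumption)
pts = np.column_stack([x / 20.0, z])

# k-NN density proxy: inverse of the mean distance to the k nearest neighbors
k = 10
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # drop self-distance
density = 1.0 / knn_mean

pred = density > np.percentile(density, 50)      # simple global threshold
f1 = 2 * (pred & labels).sum() / (pred.sum() + labels.sum())
print(f"F1 on synthetic scene: {f1:.2f}")
```

Even this crude density proxy separates the band from the noise on such a scene; DNNDA's contribution is to sharpen exactly this contrast by encoding directional distribution features into the density, so the separation survives far higher noise rates.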

Sensors 26(5), 2026. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12986567/pdf/
Citations: 0
Beyond Euglycemia: Case Studies Using Continuous Glucose Monitoring in Elite Athletes Without Diabetes During Record Athletic Events.
IF 3.5 | CAS Tier 3, Multidisciplinary | Q2 CHEMISTRY, ANALYTICAL | Pub Date: 2026-03-05 | DOI: 10.3390/s26051624
Kristina Skroce, Lauren V Turner, Andrea Zignoli, David J Lipman, Howard C Zisser, Michael C Riddell

Glucose data regarding extreme elite performances in athletes without diabetes remains limited. The purpose is to characterize continuous glucose monitoring (CGM) responses in elite athletes across distinct high-performance contexts. This descriptive case series includes three separate elite athletes who used a CGM during their respective sporting events. The first is an ultra-endurance relay cycling world-record performance (Race Across the West, RAW), the second is a continuous high-intensity Everesting Challenge cycling record attempt, and the third is a maximal constant-weight no-fins breath-hold depth dive performed in international competition. Glycemic outcomes, as measured by CGM, included mean, maximum, and minimum glucose, glucose standard deviation (SD), and the percentage of time in tight glucose range (TITR: 70-140 mg/dL; 3.9-7.8 mmol/L), time below range (TBR: <70 mg/dL; <3.9 mmol/L), and time above range (TAR140: >140 mg/dL; >7.8 mmol/L). Other performance data, including peak power, heart rate, and lactate, are also provided where available. During the RAW challenge lasting 44 h and 20 min, mean glucose was 91 ± 23.2 mg/dL (mean ± SD) with 9.15% TBR and 35.58% TITR during cycling and 115 ± 24.7 mg/dL with 9.11% TBR and 43.16% TITR during resting periods. In contrast, the Everesting Challenge cycling record attempt demonstrated a persistently elevated glucose profile (160 ± 5.7 mg/dL), minimal variability (CV 3.5%), and 100% TAR140. Following the maximal breath-hold depth dive, interstitial glucose was 100% TAR140 during recovery (187 ± 18.5 mg/dL), alongside marked elevations in blood lactate concentrations (peak 13.4 mmol/L). The series of case studies demonstrate that substantial deviations from traditional euglycemic ranges are common during elite performance in athletes without diabetes. Interpretation of CGM data in athletic settings should therefore be performance- and context-specific rather than based on clinical glycemic thresholds.
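The range-based CGM metrics reported above can be computed directly from a glucose trace. A minimal sketch follows, using a short synthetic trace rather than data from the case series:

```python
import numpy as np

# CGM summary metrics named in the abstract: mean, SD, CV, and the
# time-in-range percentages (TITR, TBR, TAR140), from a trace in mg/dL.
def cgm_summary(glucose_mg_dl):
    g = np.asarray(glucose_mg_dl, dtype=float)
    return {
        "mean": g.mean(),
        "sd": g.std(ddof=1),
        "cv_pct": 100 * g.std(ddof=1) / g.mean(),
        "titr_pct": 100 * np.mean((g >= 70) & (g <= 140)),  # tight range 70-140 mg/dL
        "tbr_pct": 100 * np.mean(g < 70),                   # below range (<70 mg/dL)
        "tar140_pct": 100 * np.mean(g > 140),               # above range (>140 mg/dL)
    }

# Synthetic 10-sample trace (illustrative only)
trace = [65, 80, 95, 110, 130, 150, 145, 120, 100, 90]
summary = cgm_summary(trace)
for name, value in summary.items():
    print(f"{name}: {value:.1f}")
```

Note that the three time-in-range percentages partition the trace, so they sum to 100%; this is a useful sanity check when reproducing metrics like the 100% TAR140 reported for the Everesting attempt.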

Sensors 26(5), 2026. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987343/pdf/
Citations: 0
Geometry-Driven Phase Error Estimation for Azimuth Multi-Channel SAR via Global Radar Landmark Control Point Library.
IF 3.5 | CAS Tier 3, Multidisciplinary | Q2 CHEMISTRY, ANALYTICAL | Pub Date: 2026-03-05 | DOI: 10.3390/s26051622
Tingting Jin, Zheng Li, Feng Wang, Hui Long

Azimuth multi-channel synthetic aperture radar (SAR) is a core technology for achieving high-resolution wide-swath (HRWS) imaging. However, inter-channel phase inconsistency causes image amplitude distortion and phase accuracy degradation, which severely affects subsequent applications. Existing phase error estimation methods face specific limitations: the performance of subspace-based approaches degrades in complex scenes due to unreliable covariance matrix estimation, while conventional frequency-domain correlation methods rely on manual selection of strong scatterers, introducing inefficiency and subjectivity that precludes autonomous deployment. To address these issues, this paper proposes a geometry-driven inter-channel phase error estimation framework based on a Global Radar Landmark Control Point Library (GRL-CP). The proposed framework replaces scene-dependent target selection with geometric-prior-driven control point activation. The GRL-CP library stores only the geodetic coordinates and scattering stability attributes of globally persistent radar landmarks, rather than image patches. For a new SAR acquisition, the echo positions of these landmarks are predicted using a range-Doppler geometric model, enabling fully automatic and reliable control point activation. Based on the activated radar landmarks, the inter-channel phase error is estimated using a frequency-domain correlation scheme. Experimental results on multi-channel spaceborne SAR datasets demonstrate that the proposed method achieves improved stability and accuracy under complex terrain scenarios.
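A basic frequency-domain cross-correlation estimator of a constant inter-channel phase offset can be sketched as follows. The simulation is an assumption-laden illustration (a white common scene signal and a constant offset) and does not reproduce the paper's landmark-driven control-point activation or range-Doppler geometric model.

```python
import numpy as np

# Two azimuth channels observe the same scene echo; channel 2 carries an
# unknown constant phase error that we recover by cross-correlation.
rng = np.random.default_rng(2)

n = 4096
scene = rng.normal(size=n) + 1j * rng.normal(size=n)   # common scene echo (assumed white)
true_phase = 0.7                                        # radians, to be recovered
noise = 0.05
ch1 = scene + noise * (rng.normal(size=n) + 1j * rng.normal(size=n))
ch2 = (scene * np.exp(1j * true_phase)
       + noise * (rng.normal(size=n) + 1j * rng.normal(size=n)))

# Frequency-domain cross-correlation: the phase of the summed cross-spectrum
# CH2 * conj(CH1) estimates the constant inter-channel phase offset
S1, S2 = np.fft.fft(ch1), np.fft.fft(ch2)
phase_hat = np.angle(np.sum(S2 * np.conj(S1)))
print(f"estimated phase error: {phase_hat:.3f} rad (true {true_phase} rad)")
```

By Parseval's theorem this frequency-domain sum equals the time-domain correlation up to a real scale factor, so the phase estimate is identical; the practical question the paper addresses is which samples to correlate, replacing manual strong-scatterer selection with geometry-activated landmark echoes.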

Sensors 26(5), 2026. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12987379/pdf/
Citations: 0