
IEEE Sensors Journal: Latest Publications

MCL-3WDA: Cross-Domain Fault Diagnosis for Rotating Machine via Multichannel Vibration Data Based on Contrastive Learning and Fine-Grained Domain Alignment
IF 4.3 | CAS Tier 2 (Multidisciplinary) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-31 | DOI: 10.1109/JSEN.2025.3625562
Ziyao Geng;Shihua Zhou;Tianzhuang Yu;Yulin Liu;Jianbo Ye;Ye Zhang;Zhaohui Ren
Rotating machinery fault diagnosis under varying operating conditions is challenged not only by domain shift and data scarcity but more critically by intrinsic algorithmic limitations in existing methods. Most current unsupervised domain adaptation (UDA) approaches rely on single-channel vibration signals, which lack the ability to capture interchannel dependencies and thus produce suboptimal feature representations. Furthermore, existing domain alignment strategies are typically coarse-grained, aligning only global distributions while neglecting channel-wise, hierarchical, and class-specific discrepancies. To overcome these challenges, this article proposes a novel method, named MCL-3WDA, which innovatively integrates contrastive learning (CL) with fine-grained domain alignment. First, a multiscale attention fusion feature extraction (MAFFE) layer is devised to construct more expressive and generalized feature representations through cross-scale interactions and hierarchical attention refinement. Second, drawing inspiration from CL, a multichannel contrastive learning strategy (MCL) is introduced to uncover latent associative dependencies embedded within multichannel signals, thereby substantially augmenting the model’s discriminative capacity for fault pattern recognition. Finally, a channel-wise, layer-wise, and class-wise domain alignment strategy (3WDA) is developed, which achieves precise cross-domain distribution alignment based on multikernel maximum mean discrepancy (MKMMD). Extensive experiments using two public datasets and one private dataset demonstrate that the proposed MCL-3WDA achieves superior performance with an average accuracy of 98.95% (ranging from 97.13% to 100.00%) across multiple cross-domain tasks, significantly outperforming existing methods.
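The 3WDA alignment term is built on the multikernel maximum mean discrepancy (MKMMD). A minimal PyTorch sketch of that building block is given below; the Gaussian bandwidths, feature dimensions, and batch sizes are illustrative assumptions rather than the authors' implementation, and the full channel-wise/layer-wise/class-wise weighting of 3WDA is omitted.

```python
import torch

def multi_kernel_mmd(source, target, bandwidths=(1.0, 2.0, 4.0, 8.0)):
    """Biased MMD^2 estimate between two feature batches using a sum of
    Gaussian (RBF) kernels; 3WDA would apply such a term per channel,
    per layer, and per class."""
    feats = torch.cat([source, target], dim=0)              # (ns + nt, d)
    d2 = torch.cdist(feats, feats, p=2.0) ** 2              # pairwise squared distances
    k = sum(torch.exp(-d2 / (2.0 * bw ** 2)) for bw in bandwidths)
    ns = source.shape[0]
    return k[:ns, :ns].mean() + k[ns:, ns:].mean() - 2.0 * k[:ns, ns:].mean()

# Example: align 64-D source and target feature batches of 32 samples each.
src, tgt = torch.randn(32, 64), torch.randn(32, 64) + 0.5
loss_align = multi_kernel_mmd(src, tgt)
```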
Citations: 0
IEEE Sensors Council
IF 4.3 | CAS Tier 2 (Multidisciplinary) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-31 | DOI: 10.1109/JSEN.2025.3622430
{"title":"IEEE Sensors Council","authors":"","doi":"10.1109/JSEN.2025.3622430","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3622430","url":null,"abstract":"","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"C3-C3"},"PeriodicalIF":4.3,"publicationDate":"2025-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11223173","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CREN-RLC: Clustering-Based Adaptive Security With Regression Learning for IoT-WSNs
IF 4.3 | CAS Tier 2 (Multidisciplinary) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-27 | DOI: 10.1109/JSEN.2025.3620211
Nishant Chaurasia;Prashant Kumar
The rapid growth of Internet of Things–wireless sensor networks (IoT-WSNs) brings numerous security challenges, particularly in environments where devices have limited resources and cannot sustain heavy or complex security methods. This article introduces clustering with residual energy and neighbor analysis-regression learning classifier (CREN-RLC), a lightweight, adaptive security framework explicitly designed for IoT-WSNs. The framework integrates CREN—which organizes sensor nodes into energy-aware clusters based on their residual energy and communication patterns—with an RLC that detects and adapts to intrusions in real time. While CREN ensures balanced energy utilization and efficient anomaly detection, the RLC leverages historical data to recognize evolving attack types, thereby improving resilience against diverse threats. Implemented in Python 3.12 and evaluated on benchmark datasets, CREN-RLC achieved strong results, including a classification accuracy of 94.38%, precision of 93.41%, recall of 92.86%, and an F1-score of 92.27%, outperforming conventional neural and deep learning (DL) approaches. Moreover, the framework maintained high network efficiency, achieving low packet drop rates, forwarding ratios of up to 0.982, and over 95.6% attack prevention accuracy even under heavy attack conditions. By combining energy-aware clustering with intelligent, lightweight detection, CREN-RLC delivers a scalable, energy-efficient, and robust security solution suitable for real-world IoT-WSN applications, including smart cities, healthcare, industrial automation, and intelligent transportation.
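For a sense of what energy-aware cluster-head election looks like in code, the sketch below scores nodes by residual energy and neighbor degree and keeps the top fraction as cluster heads; the weights, ratio, and node fields are hypothetical and do not reproduce the paper's CREN procedure or its regression learning classifier.

```python
import random
from dataclasses import dataclass, field

@dataclass
class SensorNode:
    node_id: int
    residual_energy: float                    # joules remaining
    neighbors: list = field(default_factory=list)

def elect_cluster_heads(nodes, ratio=0.1, w_energy=0.7, w_degree=0.3):
    """Rank nodes by normalized residual energy and neighbor degree,
    then select the top `ratio` fraction as cluster heads."""
    max_e = max(n.residual_energy for n in nodes)
    max_d = max(len(n.neighbors) for n in nodes) or 1
    scored = sorted(
        nodes,
        key=lambda n: w_energy * n.residual_energy / max_e
                    + w_degree * len(n.neighbors) / max_d,
        reverse=True,
    )
    return scored[:max(1, int(ratio * len(nodes)))]

# Toy network: 50 nodes with random energy budgets and 2-8 random neighbors each.
nodes = [SensorNode(i, random.uniform(0.1, 2.0)) for i in range(50)]
for n in nodes:
    n.neighbors = random.sample([m.node_id for m in nodes if m is not n],
                                k=random.randint(2, 8))
heads = elect_cluster_heads(nodes)
```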
Citations: 0
A Hybrid CNN–BiLSTM Approach for Wildlife Detection Nearby Railway Track in a Forest
IF 4.3 | CAS Tier 2 (Multidisciplinary) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-23 | DOI: 10.1109/JSEN.2025.3622306
D. S. Parihar;Ripul Ghosh
Wildlife conflict has become a serious concern due to increasing animal mortality from rail-induced accidents on railway tracks passing through forest regions. Monitoring the movement of wild animals near a railway track remains challenging due to the complex terrain, varied landscapes, and diverse biodiversity. This article presents an optimized hybrid 1-D convolutional neural network–bidirectional long short-term memory (CNN–BiLSTM) architecture to classify wildlife and other ground activities from seismic data generated in a forest environment. The proposed method sequentially and automatically learns high-level patterns from multidomain features extracted from the principal modes of the variational mode decomposition (VMD) of the seismic signals. Furthermore, the classification results are compared with those of the standalone CNN and BiLSTM, with the proposed method achieving the best performance: an average accuracy of 78.11 ± 4.28% and the lowest false detection rate.
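A minimal PyTorch sketch of a hybrid 1-D CNN plus BiLSTM classifier of the general kind described is shown below; the layer sizes, segment length, and number of classes are assumptions, and the VMD-based multidomain feature extraction is omitted.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """1-D CNN front end for local patterns, BiLSTM back end for
    sequential context, followed by a linear classification head."""
    def __init__(self, in_channels=1, num_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(input_size=64, hidden_size=64,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, num_classes)

    def forward(self, x):                      # x: (batch, channels, samples)
        feats = self.cnn(x).transpose(1, 2)    # (batch, time, 64)
        out, _ = self.bilstm(feats)
        return self.head(out[:, -1, :])        # logits from the last time step

logits = CNNBiLSTM()(torch.randn(8, 1, 1024))  # 8 seismic segments of 1024 samples
```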
Citations: 0
Deep Learning-Based SNAP Microresonator Displacement Sensing Technology
IF 4.3 | CAS Tier 2 (Multidisciplinary) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-20 | DOI: 10.1109/JSEN.2025.3621436
Shuai Zhang;Yongchao Dong;Shihao Huang;Gaoping Xu;Ruizhou Wang;Han Wang;Mengyu Wang
Whispering gallery mode (WGM) microresonators have shown great potential for precise displacement measurement due to their compact size, ultrahigh sensitivity, and rapid response. However, traditional WGM-based displacement sensors are susceptible to environmental noise interference, resulting in reduced accuracy and excessively long signal demodulation times. To address these limitations, this article proposes a multimodal displacement sensing method for surface nanoscale axial photonics (SNAP) resonators based on deep learning (DL) techniques. A 1-D convolutional neural network (1D-CNN) is used to extract features from the full spectrum, which significantly improves noise immunity and sensing accuracy while avoiding time-consuming spectral preprocessing. Experimental results show that the average prediction error is as low as 0.05 μm and the maximum error does not exceed 1.4 μm when the 1D-CNN is used for displacement measurements. This work provides an effective solution for fast, highly accurate, and robust displacement sensing.
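As a rough illustration of spectrum-to-displacement regression with a 1D-CNN (the paper's exact architecture and spectrum length are not given in the abstract, so the values below are assumptions):

```python
import torch
import torch.nn as nn

class SpectrumRegressor(nn.Module):
    """1-D CNN that maps a single-channel transmission spectrum to one
    scalar displacement estimate (e.g., in micrometers)."""
    def __init__(self, spectrum_len=2000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (spectrum_len // 16), 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, spectrum):               # spectrum: (batch, 1, spectrum_len)
        return self.regressor(self.features(spectrum))

pred = SpectrumRegressor()(torch.randn(4, 1, 2000))   # four spectra -> four displacements
```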
Citations: 0
MonoICT: A Monocular 3-D Object Detection Model Integrating CNN and Transformer
IF 4.3 | CAS Tier 2 (Multidisciplinary) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-17 | DOI: 10.1109/JSEN.2025.3578608
Xingqi Na;Zhijia Zhang;Huaici Zhao;Shujun Jia
In the field of autonomous driving, 3-D object detection is a crucial technology. Visual sensors are essential in this area and are widely used for 3-D object detection tasks. Recent advancements in monocular 3-D object detection have introduced depth estimation branches within the network architecture. This integration leverages predicted depth information to address the depth perception limitations inherent in monocular sensors, thereby improving detection accuracy. However, many existing methods prioritize lightweight designs at the expense of depth estimation accuracy. To enhance this accuracy, we propose the pseudo depth feature extraction (PDFE) module. This module extracts features by fusing adaptive scale information and simulating disparity, leading to more precise depth predictions. Additionally, we present a hybrid model that combines convolutional neural networks (CNNs) and Transformer architectures. The model employs diverse feature fusion strategies, including depth-guided fusion (DGF) and a Transformer decoder. It also utilizes a convolutional mixture transformer (CMT) encoder to enhance the representation of both local and global features. Building on these innovations, we developed the MonoICT network model and evaluated its performance using the KITTI dataset. Our experimental results indicate that our approach is competitive with recent state-of-the-art methods, outperforming them in the pedestrian and cyclist categories.
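The general pattern of combining a CNN feature extractor with a Transformer encoder can be sketched as below; this is a generic hybrid rather than the MonoICT architecture, and the embedding size, head count, and input resolution are assumptions.

```python
import torch
import torch.nn as nn

class CNNTransformerHybrid(nn.Module):
    """A small CNN produces a feature map, which is flattened into tokens
    and refined by a Transformer encoder for global context."""
    def __init__(self, embed_dim=128, n_heads=4, n_layers=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, img):                        # img: (batch, 3, H, W)
        fmap = self.backbone(img)                  # (batch, C, H/4, W/4)
        tokens = fmap.flatten(2).transpose(1, 2)   # (batch, H*W/16, C)
        return self.encoder(tokens)                # globally refined tokens

tokens = CNNTransformerHybrid()(torch.randn(2, 3, 96, 320))
```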
Citations: 0
Evaluation of Fiber Optic Shape Sensing Models for Minimally Invasive Prostate Needle Procedures Using OFDR Data
IF 4.3 | CAS Tier 2 (Multidisciplinary) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-16 | DOI: 10.1109/jsen.2025.3620154
Jacynthe Francoeur, Raman Kashyap, Samuel Kadoury, Jin Seob Kim, Iulian Iordachita

This paper presents a systematic evaluation of fiber optic shape sensing models for prostate needle interventions using a single needle embedded with a three-fiber optical frequency domain reflectometry (OFDR) sensor. Two reconstruction algorithms were evaluated: (1) Linear Interpolation Models (LIM), a geometric method that directly estimates local curvature and orientation from distributed strain measurements, and (2) the Lie-Group Theoretic Model (LGTM), a physics-informed elastic-rod model that globally fits curvature profiles while accounting for tissue-needle interaction. Using software-defined strain-point selection, both sparse and quasi-distributed sensing configurations were emulated from the same OFDR data. Experiments were conducted in homogeneous and two-layer gel phantoms, ex vivo tissue, and a whole-body cadaveric pig model. While the repeated-measures ANOVA did not detect any significant differences, the Friedman test analysis revealed statistically significant differences in RMSEs between LIM and LGTM (p < 0.05), with LIM outperforming LGTM in the ex vivo tissue scenario. LIM also achieved over 50-fold faster computation (< 1 ms vs. > 40 ms per shape), enabling real-time use. These findings highlight the trade-offs between model complexity, sensing density, computational load, and tissue variability, providing guidance for selecting shape-sensing strategies in clinical and robotic needle interventions.
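To make the geometric (LIM-style) idea concrete, the sketch below fits local curvature from three fiber strains and integrates curvatures into a planar centerline; the fiber angles, radial offset, step size, and the planar assumption are illustrative and do not reproduce the paper's LIM or LGTM implementations.

```python
import numpy as np

def curvature_from_strains(strains, fiber_angles_deg=(0.0, 120.0, 240.0), r=5e-4):
    """Least-squares fit of three fiber strains to
    eps_i = A*cos(theta_i) + B*sin(theta_i) + C, giving the local curvature
    magnitude kappa (1/m) and bending direction phi (rad); r is the assumed
    radial offset of each fiber from the needle axis (m)."""
    th = np.deg2rad(np.asarray(fiber_angles_deg))
    M = np.column_stack([np.cos(th), np.sin(th), np.ones_like(th)])
    A, B, _ = np.linalg.lstsq(M, np.asarray(strains), rcond=None)[0]
    return np.hypot(A, B) / r, np.arctan2(-B, -A)

def integrate_planar_shape(kappas, ds=0.005):
    """Integrate local curvatures along arc length into a planar (x, z)
    centerline, assuming bending stays in a single plane."""
    theta, x, z, pts = 0.0, 0.0, 0.0, [(0.0, 0.0)]
    for k in kappas:
        theta += k * ds
        x += np.sin(theta) * ds
        z += np.cos(theta) * ds
        pts.append((x, z))
    return np.array(pts)

# Example: one strain triplet -> local curvature; 30 segments of 2 m^-1 -> a gentle arc.
kappa, phi = curvature_from_strains([120e-6, -40e-6, -80e-6])
shape = integrate_planar_shape([2.0] * 30)
```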

Citations: 0
A Relay Cluster Head Based Traffic and Energy-Aware Routing Protocol for Heterogeneous WSNs
IF 4.3 | CAS Tier 2 (Multidisciplinary) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-16 | DOI: 10.1109/JSEN.2025.3620015
Simanta Das;Ripudaman Singh
Distributed clustering routing protocols are acknowledged as effective methods for minimizing and balancing energy consumption in wireless sensor networks (WSNs). In these protocols, the random distribution of cluster heads (CHs) results in the presence of several isolated sensor nodes (ISNs). In general, an ISN consumes more energy than a cluster member (CM) sensor node (SN). Therefore, ISNs located far from the sink can significantly reduce the network lifetime. In this article, we propose a relay cluster head based traffic and energy-aware routing (RCHBTEAR) protocol for heterogeneous WSNs. The RCHBTEAR protocol improves the network lifetime by reducing the energy consumption of SNs. To this end, we consider both the energy and traffic heterogeneities of SNs during the election of CHs. Furthermore, we select relay CHs (RCHs) from the existing CHs to reduce the energy consumption of ISNs located far from the sink. Finally, we propose an optimized super round (SR) technique that eliminates the need for reclustering in every round. Simulation results show that the RCHBTEAR protocol significantly improves the network lifetime.
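To see why relaying through an intermediate CH saves energy for an ISN far from the sink, the sketch below uses the standard first-order radio model that WSN routing studies commonly assume; the constants and distances are typical textbook values, not parameters from the paper.

```python
# First-order radio model: transmission costs electronics energy plus an
# amplifier term that grows as d^2 (free space) or d^4 (multipath).
E_ELEC = 50e-9          # J/bit, transmit/receive electronics
EPS_FS = 10e-12         # J/bit/m^2, free-space amplifier
EPS_MP = 0.0013e-12     # J/bit/m^4, multipath amplifier
D0 = (EPS_FS / EPS_MP) ** 0.5   # crossover distance (about 87.7 m)

def tx_energy(bits, d):
    """Energy to transmit `bits` over distance d (meters)."""
    amp = EPS_FS * d ** 2 if d < D0 else EPS_MP * d ** 4
    return bits * (E_ELEC + amp)

def rx_energy(bits):
    return bits * E_ELEC

# A 4000-bit packet from an isolated node 180 m from the sink:
direct = tx_energy(4000, 180.0)                                   # one long hop
relayed = tx_energy(4000, 90.0) + rx_energy(4000) + tx_energy(4000, 95.0)
print(f"direct: {direct:.2e} J, via relay CH: {relayed:.2e} J")   # relay path is cheaper
```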
Citations: 0
Human Motion Recognition Based on Videos and Radar Spectrograms in Cross-Target Scenarios
IF 4.3 | CAS Tier 2 (Multidisciplinary) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-15 | DOI: 10.1109/JSEN.2025.3619651
Yang Yang;Yue Song;Xiaochun Shang;Qingshuang Mu;Beichen Li;Yue Lang
Multisensor fusion combines the benefits of each sensor, resulting in thorough and reliable motion recognition even in challenging measurement environments. However, even with the environmental robustness attained through sensor integration, the recognition model continues to face challenges in cross-target scenarios: the model is trained on a measured dataset of known subjects, and its performance may decline when applied to unfamiliar subjects. This article highlights this issue and presents a cross-target human motion recognition model for the radar–camera measurement system. We have developed a modal-specific semantic interaction mechanism that allows the feature extractor to recognize different individuals, thereby removing identity information during the feature extraction process. Furthermore, we have also put forward a meta-prototype learning scheme that suitably adjusts the probability distribution to enhance the generalization capability of the recognition model. Notably, the proposed model is implemented without altering the primary network architecture, indicating that there is no additional computational burden during testing. In comparison with five multimodal learning algorithms, we have validated the effectiveness of our model, highlighting that it surpasses previous radar–video-based methods by more than 5% in recognition accuracy. Through experiments using public datasets under different dataset conditions, we verified the generalization ability of our model. Ablation studies and additional parameter studies have been conducted, enabling a thorough examination of each design.
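The meta-prototype idea can be illustrated with a minimal prototype-based classifier in PyTorch; the embedding size, class count, and softmax temperature are assumptions, and this is not the authors' meta-prototype scheme.

```python
import torch
import torch.nn.functional as F

def prototype_probs(support_feats, support_labels, query_feats, temperature=10.0):
    """Average the support embeddings of each class into a prototype, then
    turn negative query-to-prototype distances into class probabilities."""
    classes = torch.unique(support_labels)
    protos = torch.stack([support_feats[support_labels == c].mean(dim=0)
                          for c in classes])
    dists = torch.cdist(query_feats, protos)          # (n_query, n_classes)
    return F.softmax(-dists / temperature, dim=1), classes

# 3 motion classes, 5 support samples each, 128-D fused radar-video embeddings.
sup = torch.randn(15, 128)
lab = torch.arange(3).repeat_interleave(5)
qry = torch.randn(4, 128)
probs, classes = prototype_probs(sup, lab, qry)
```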
Citations: 0
IEEE Sensors Council
IF 4.3 | CAS Tier 2 (Multidisciplinary) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-15 | DOI: 10.1109/JSEN.2025.3617592
{"title":"IEEE Sensors Council","authors":"","doi":"10.1109/JSEN.2025.3617592","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3617592","url":null,"abstract":"","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 20","pages":"C3-C3"},"PeriodicalIF":4.3,"publicationDate":"2025-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11204752","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145290243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0