Pub Date: 2026-01-22 | DOI: 10.1109/LSENS.2026.3656713
Terry YP Yuen;Zhu-Hao Hsiao;Tzu-Han Wen
Conventional optical time-domain reflectometry (OTDR) suffers from event and attenuation dead zones when strong Fresnel reflections saturate the receiver, obscuring closely spaced events and degrading localization accuracy. High-performance OTDRs mitigate these issues by using ultrashort pulses, high-bandwidth detectors, and low-noise front ends, but at the expense of increased cost and calibration complexity. This work introduces a hybrid deep learning framework that enhances the sensing capabilities of a low-cost OTDR without modifying its hardware. An experimental dataset of 2150 traces was collected from polymer optical fibers subjected to controlled microbending loads at variable separation distances. The proposed model fuses waveform- and feature-based representations through convolutional, bidirectional long short-term memory, and attention encoders to resolve overlapping events within OTDR dead zones. It achieves 100% event-count classification accuracy and subdecimeter localization accuracy (mean absolute error < 0.09 m), providing measurable performance gains over conventional signal interpretation. These results demonstrate that data-driven OTDR evaluation can reduce ambiguity in dead zones and extend the practical functionality of low-cost distributed optical sensors, thereby supporting the development of intelligent, cost-effective monitoring systems.
{"title":"Hybrid Deep Learning Model for Resolving Overlapping Events in OTDR Dead Zones","authors":"Terry YP Yuen;Zhu-Hao Hsiao;Tzu-Han Wen","doi":"10.1109/LSENS.2026.3656713","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3656713","url":null,"abstract":"Conventional optical time-domain reflectometry (OTDR) suffers from event and attenuation dead zones when strong Fresnel reflections saturate the receiver, obscuring closely spaced events and degrading localization accuracy. High-performance OTDRs mitigate these issues by using ultrashort pulses, high-bandwidth detectors, and low-noise front ends, but at the expense of increased cost and calibration complexity. This work introduces a hybrid deep learning framework that enhances the sensing capabilities of a low-cost OTDR without modifying its hardware. An experimental dataset of 2150 traces was collected from polymer optical fibers subjected to controlled microbending loads at variable separation distances. The proposed model fuses waveform- and feature-based representations through convolutional, bidirectional long short-term memory, and attention encoders to resolve overlapping events within OTDR dead zones. It achieves 100% event-count classification and subdecimeter localization accuracy (mean absolute error < 0.09 m), providing measurable performance gains relative to conventional signal interpretation. These results demonstrate that data-driven OTDR evaluation can reduce ambiguity in dead zones and extend the practical functionality of low-cost distributed optical sensors, thereby supporting the development of intelligent cost-effective monitoring systems.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The increasing presence of antibiotic pollutants, particularly sulfamethoxazole (SMX), in water sources necessitates the development of highly sensitive and selective detection methods. This study presents a current-versus-voltage (I-V) sensor based on a MXene/zinc oxide (ZnO) composite that outperforms pristine MXene in SMX detection sensitivity. The sensor is fabricated by spin-coating MXene, ZnO, and ZnO-MXene composite films onto a flexible polyethylene terephthalate (PET) substrate with an integrated conductive layer. The electrical response of the device is analyzed using I-V characterization under varying SMX concentrations, demonstrating that the composite outperforms the pristine films. Compositing MXene with ZnO yields a sensitivity of 1.44 × 10⁻⁵ A/μg, roughly 11 times that of pure MXene (1.29 × 10⁻⁶ A/μg), an improvement attributed to the additional active sites created by ZnO on the MXene sheets. The results highlight the MXene/ZnO composite's potential as a next-generation material for sensing applications, providing a promising alternative for real-time, on-site water quality monitoring.
{"title":"Analysis of Mxene and Mxene/ZnO Composite Based I-V Sensing for Antibiotic Detection","authors":"Seyadu Abuthahir Peer;Manikandan Mayilmurugan;Raj Yuthika;Manimaran Lavanya Priyadharshini;Manikandan Esakkimuthu","doi":"10.1109/LSENS.2026.3656930","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3656930","url":null,"abstract":"The increasing presence of antibiotic pollutants, particularly sulfamethoxazole (SMX), in water sources necessitates the development of highly sensitive and selective detection methods. In this study, the presented work is a current versus voltage (I-V) sensor based on MXene/zinc oxide (ZnO) composite, which outperforms MXene in detecting SMX with sensitivity. The sensor is fabricated by spin-coating MXene, ZnO, and ZnO-MXene composite films onto a flexible polyethylene terephthalate (PET) substrate with an integrated conductive layer. The electrical response of the device is analyzed using I-V characterization under varying SMX concentrations, demonstrating that pristine. The sensitivity of MXene/ZnO composite 1.44 × 10-5 A/μg is attained by the compositing MXene and ZnO, which increases 11 times to the pure Mxene's sensitivity 1.29 × 10-6 A/μg. This is achieved by the active site created by ZnO on the MXene sheets. The results highlight MXene/ZnO composite potential as a next-generation material for sensing applications, providing a promising alternative for real-time and on-site water quality monitoring.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The optic disc and cup are important structures of the human eye, and deformities in these two regions lead to an irreversible disease called glaucoma. Accurate segmentation and analysis of these structures is one approach to diagnosing glaucoma. In this letter, we introduce SA-U-KAN, a novel deep learning architecture that combines convolutional feature extractors, spatial attention modules, and Kolmogorov–Arnold networks (KANs) with the U-Net. The encoder stage of SA-U-KAN comprises convolutional blocks with spatial attention to extract and refine local features. At the bottleneck stage, a KAN-based tokenization mechanism models complex nonlinearities through interpretable univariate function decompositions. Finally, in the decoder stage, segmentation maps are constructed using skip connections along with an attention module to preserve multiscale information. By fusing spatial attention and KANs, SA-U-KAN effectively captures both local textures and global structures. Experimental results demonstrate the superiority of SA-U-KAN over existing techniques, yielding improvements of 1.5% in Dice score (DS) and 2% in intersection over union (IoU) on the RIMONE dataset, and 3.5% (DS) and 4.5% (IoU) on the DRISHTI dataset, at 6.9G FLOPs.
{"title":"SA-U-KAN: Spatial Attention Guided Kolmogorov–Arnold Networks for Optic Disc and Cup Segmentation","authors":"Preity;Ayushi Shukla;Ashish Kumar Bhandari;Syed Shahnawazuddin","doi":"10.1109/LSENS.2026.3656677","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3656677","url":null,"abstract":"Optic disc and cup are important structures of human eye and the deformities occurring to these two regions lead to an irreversible disease called glaucoma. Accurate segmentation and analysis are one of the methods to diagnose glaucoma. In this letter, we introduce SA-U-KAN, a novel deep learning architecture that combines convolutional feature extractors, spatial attention modules, and Kolmogorov–Arnold networks (KANs) with the U-Net. The encoder stage of the SA-U-KAN comprises convolutional blocks with spatial attention to extract and refine local features. In addition to that, at the bottleneck stage, a KAN-based tokenization mechanism is used to model complex nonlinearities through interpretable univariate function decompositions. Finally, in the decoder stage, segmentation maps are constructed using skip connections along with attention module to preserve multiscale information. By fusing spatial attention and KAN, SAU-KAN is able to effectively capture local textures and global structures. Experimental results demonstrate the superiority of SAU-KAN over existing techniques, yielding improvements of 1.5% in Dice score (DS) and 2% in intersection of union (IoU) on the RIMONE dataset, and 3.5% (DS) and 4.5% (IoU) on the DRISHTI dataset with 6.9G FLOPs.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate and consistent grading is important for quality control, but manual tasting is subjective and hard to scale. We present a compact, fully automated system that predicts a two-digit valuation grade: the first digit is Body (liquor strength) and the second is Zing (briskness), each scored 0–5. It combines spectral imaging with a total dissolved solids (TDS) reading to capture both physical and chemical cues. We improve data quality by processing images in stages: segmenting the sample at a reference wavelength using adaptive K-means, applying a circular mask, running a second pass, and removing low-confidence boundary pixels. To capture clean local signals, we introduce an automatic non-overlapping bounding-box method for particulate made-tea valuation with spectral imaging. We fuse per-box spectra with TDS and train machine learning models; on a test set, a multilayer perceptron reaches 95.2% accuracy and a support vector machine performs similarly. Compared to fixed-region baselines, signal-to-noise ratio rises by 12.4 dB, within-class variance falls by 18.7%, background contamination drops from 14.6% to 0.9%, and rescan repeatability improves (r = 0.97 versus 0.91; all p < 0.01). The system runs in 402 ms per sample on a desktop-class CPU, making it suitable for factory use. Strong region of interest isolation and low-noise features boost classifier performance, enabling accurate, repeatable, and scalable grading.
{"title":"Quality Assessment and Valuation of Made-tea Using ROI Segmentation and Spectral–TDS Fusion","authors":"Sanket Junagade;Swagatam Bose Choudhury;Sanat Sarangi;Dineshkumar Singh","doi":"10.1109/LSENS.2026.3656628","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3656628","url":null,"abstract":"Accurate and consistent grading is important for quality control, but manual tasting is subjective and hard to scale. We present a compact, fully automated system that predicts a two-digit valuation grade: the first digit is <italic>Body</i> (liquor strength) and the second is <italic>Zing</i> (briskness), each scored 0–5. It combines spectral imaging with a total dissolved solids (TDS) reading to capture both physical and chemical cues. We improve data quality by processing images in stages: segmenting the sample at a reference wavelength using adaptive K-means, applying a circular mask, running a second pass, and removing low-confidence boundary pixels. To capture clean local signals, we introduce an automatic non-overlapping bounding-box method for particulate made-tea valuation with spectral imaging. We fuse per-box spectra with TDS and train machine learning models; on a test set, a multilayer perceptron reaches 95.2% accuracy and a support vector machine performs similarly. Compared to fixed-region baselines, signal-to-noise ratio rises by 12.4 dB, within-class variance falls by 18.7%, background contamination drops from 14.6% to 0.9%, and rescan repeatability improves (<inline-formula><tex-math>$r=0.97$</tex-math></inline-formula> versus 0.91; all <inline-formula><tex-math>$p< 0.01$</tex-math></inline-formula>). The system runs in 402 ms per sample on a desktop-class CPU, suiting factory use. Strong region of interest isolation and low-noise features boost classifier performance, enabling accurate, repeatable, and scalable grading.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-21 | DOI: 10.1109/LSENS.2026.3656613
Vikash Ranjan;Prasenjit Basak;Shailesh Kumar
Sensor-based moisture monitoring in transformer oil is needed to preserve transformer health and prevent failures. This work reports the development and response of a humidity sensor fabricated using Indian anthracite coal-derived graphene oxide (AC-GO) as the sensing material, a novel approach for moisture monitoring in transformer oil. AC-GO is synthesized using a one-pot technique. A screen-printed electrode (AgCl) provides a highly conductive platform on a glass substrate for sensor fabrication. The sensor exhibits both capacitive and impedance responses to changes in relative humidity (% RH), allowing effective moisture detection. The graphene oxide derived from anthracite coal offers a high surface area and excellent electronic properties, which together contribute to the sensor's sensitivity. The sensor is tested in a transformer oil environment for moisture sensing across a wide range of frequencies and temperatures, consistently delivering robust performance and reliability, along with excellent repeatability and long-term stability. Experimental results show a noticeable change in both capacitance and impedance as % RH and temperature vary, demonstrating the sensor's ability to monitor moisture accurately. These results confirm the sensor's suitability for industrial applications, especially oil-filled transformers. The sensor's response under varying % RH (5%–90% RH) and transformer oil temperatures (20 °C–110 °C) at different frequencies is thoroughly evaluated, highlighting its potential for deployment in real-world applications, particularly transformer condition monitoring.
{"title":"Design and Fabrication of Anthracite Coal-Derived Graphene Oxide Humidity Sensor for Moisture Sensing in Transformer Oil","authors":"Vikash Ranjan;Prasenjit Basak;Shailesh Kumar","doi":"10.1109/LSENS.2026.3656613","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3656613","url":null,"abstract":"Sensor-based moisture monitoring in transformer oil is needed for preserving transformer health and preventing failures. This work reports the development and response of a humidity sensor fabricated using Indian anthracite coal-derived graphene oxide (AC-GO) as the sensing material, a novel approach for moisture monitoring in transformer oil. AC-GO is synthesized using a one-pot technique. The screen-printed electrode (AgCl) is used to offer a highly conductive platform on a glass substrate for the fabrication of a sensor. The behavior of the sensor represents both capacitive and impedance response with respect to a change in relative humidity (% RH), allowing effective moisture detection. By using graphene oxide derived from anthracite coal, the sensor provides a high surface area and excellent electronic properties, which together contribute sensor’s sensitivity. The sensor is tested in a transformer oil environment for moisture sensing across a wide range of frequencies and temperatures, which consistently delivers robust performance and reliability. The sensor shows excellent repeatability and long-term stability. Experimental results show that noticeable change in both capacitance and impedance as % RH levels and temperature changes, offering the sensor’s strong ability to monitor moisture accurately. These results confirm the sensor’s performance for industrial applications, especially for oil-filled transformers. The sensor’s response under varying % RH (5% –90% RH) and different transformer oil temperatures (20 °C–110 °C) at different frequencies is thoroughly evaluated. It highlights its potential for deployment in real-world applications, particularly for transformer condition monitoring.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-21 | DOI: 10.1109/LSENS.2026.3656187
Feifan Lu;Zhihuo Xu;Hongyan Chen;Jingjing Wu;Yuexia Wang
Falls are a major cause of injury, particularly among older adults. Most existing methods detect falls only after they occur, limiting their preventive value. This letter proposes a proactive fall prevention framework based on human pose forecasting using deep sequential learning. Two models are developed: an attention-based long short-term memory (LSTM) network for stable short-horizon prediction and a Transformer for long-range spatiotemporal modeling. Both forecast future 2-D skeletal trajectories from past poses to enable early warnings. A composite structural loss ensures anatomical coherence and motion smoothness. Experiments on a multiview outdoor dataset show that the attention-based LSTM maintains stable, anatomically consistent predictions, while the Transformer generalizes better under multiview conditions but drifts in frontal views. These results highlight the potential of attention-driven forecasting for real-time fall prevention.
{"title":"Deep Sequential Learning for Pose Forecasting","authors":"Feifan Lu;Zhihuo Xu;Hongyan Chen;Jingjing Wu;Yuexia Wang","doi":"10.1109/LSENS.2026.3656187","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3656187","url":null,"abstract":"Falls are a major cause of injury, particularly among older adults. Most existing methods detect falls only after they occur, limiting their preventive value. This letter proposes a proactive fall prevention framework based on human pose forecasting using deep sequential learning. Two models are developed: an attention-based long short-term memory (LSTM) network for stable short prediction and a Transformer for long spatiotemporal modeling. Both forecast future 2-D skeletal trajectories from past poses to enable early warnings. A composite structural loss ensures anatomical coherence and motion smoothness. Experiments on a multiview outdoor dataset show that the Attention-based LSTM maintains stable, anatomically consistent predictions, while the Transformer generalizes better under multiview conditions but drifts in frontal views. These results highlight the potential of attention-driven forecasting for real-time fall prevention.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-20 | DOI: 10.1109/LSENS.2026.3656319
Budiman P. A. Rohman;Masahiko Nishimoto;Kohichi Ogata
Continuous human vital sign monitoring is essential for medical purposes. For such a system to be easily and widely applied, low manufacturing cost is preferred; in addition, noncontact monitoring is recommended to maintain the patient's comfort. This letter therefore proposes a noncontact respiration monitoring system employing an ultra-low-cost continuous-wave radar. It is integrated with a signal processing technique that extracts human vital signs with high accuracy through several sequential processing steps, including the Hilbert transform and variational mode decomposition. Experimental evaluations using various target ranges, respiration rates, and respiration strengths confirm the reliability and accuracy of the proposed method, indicating that the proposed system is feasible for real applications with appropriate integration.
{"title":"Feasibility Evaluation of Respiration Monitoring Using Ultra-Low-Cost Radar","authors":"Budiman P. A. Rohman;Masahiko Nishimoto;Kohichi Ogata","doi":"10.1109/LSENS.2026.3656319","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3656319","url":null,"abstract":"Continuous human vital sign monitoring is essential for medical purpose. To make this system possible to be easily and widely applied, low manufacturing costs are preferred. Besides, to maintain the patient's comfort, noncontact monitoring is recommended. Therefore, this letter proposes a noncontact respiration monitoring system employing an ultra-low-cost continuous wave radar. An integration with a signal processing technique to extract human vital signs with high accuracy has been proposed by employing several processing steps that work sequentially, including Hilbert transform and variational mode decomposition. The experimental evaluations using various target ranges, respiration rates, and strengths confirm the reliability and accuracy of the proposed method. These indicate that the proposed system is feasible enough to be applied in real applications with appropriate integration.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 2","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-20 | DOI: 10.1109/LSENS.2026.3656286
Kaveti Pavan;P Satyajith Chary;Ankit Singh;Digvijay S. Pawar;Nagarajan Ganapathy
Driver inattention detection is crucial for road safety, as stress can impair cognitive functions and increase accident risk. Recent advances in wearable technology have led to an increase in the use of multimodal physiological signals for driver inattention detection. Integrating attention mechanisms into these systems has shown promise in enhancing inattention detection. However, attention features can be affected by noise in the data, presenting a significant challenge. To address this, we propose a multimodal differential self-attention-based 1-D convolutional neural network (MDSA-1DCNN) to reduce noise in attention features. In this study, we evaluate the effectiveness of MDSA-1DCNN on multimodal 1-D biosignals obtained from textile electrodes, collecting single-lead electrocardiogram (256 Hz) and respiration (128 Hz) data from 15 healthy participants in two driving states: normal and inattention. The raw data were divided into nonoverlapping segments of 10, 15, and 20 s and preprocessed using the Neurokit Toolbox. These processed segments from each modality were then passed through convolutional blocks to extract temporal features. Self-attention was applied to these features, followed by a differential attention layer to reduce noise. The resulting features were fed into dense layers to classify the driver's inattention state. The proposed MDSA-1DCNN approach achieved a weighted F-score of 73.23% and an average accuracy of 78.51% on the validation set using leave-one-subject-out cross-validation. Future work will explore the utilization of data from multiple sensors and investigate sensor fusion techniques.
{"title":"Differential Self-Attention in 1-D CNNs for Driver Inattention Detection Using Multimodal Biosignals","authors":"Kaveti Pavan;P Satyajith Chary;Ankit Singh;Digvijay S. Pawar;Nagarajan Ganapathy","doi":"10.1109/LSENS.2026.3656286","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3656286","url":null,"abstract":"Driver inattention detection is crucial for road safety, as stress can impair cognitive functions and increase accident risk. Recent advances in wearable technology have led to an increase in the use of multimodal physiological signals for driver inattention detection. Integrating attention mechanisms into these systems has shown promise in enhancing inattention detection. However, attention features can be affected by noise in the data, presenting a significant challenge. To address this, we propose a multimodal differential self-attention-based 1-D convolutional neural network (MDSA-1DCNN) to reduce noise in attention features. In this study, we evaluate the effectiveness of MDSA-1DCNN on multimodal 1-D biosignals obtained from textile electrodes, collecting single-lead electrocardiogram (256 Hz) and respiration (128 Hz) data from 15 healthy participants in two driving states: normal and inattention. The raw data were divided into nonoverlapping segments of 10, 15, and 20 s and preprocessed using the Neurokit Toolbox. These processed segments from each modality were then passed through convolutional blocks to extract temporal features. Self-attention was applied to these features, followed by a differential attention layer to reduce noise. The resulting features were fed into dense layers to classify driver inattention state. The proposed MDSA-1DCNN approach achieved a weighted F-score of 73.23<inline-formula><tex-math>$%$</tex-math></inline-formula> and an average accuracy of 78.51<inline-formula><tex-math>$%$</tex-math></inline-formula> on the validation set using leave-one-subject-out cross-validation. Future work will explore the utilization of data from multiple sensors and investigate sensor fusion techniques.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wi-Fi-based human gesture recognition (HGR) leverages motion-induced perturbations in wireless sensing systems, enabling intelligent and contact-free monitoring. However, existing Wi-Fi-based HGR approaches often suffer from inconsistent cross-scene feature representations, leading to significant performance degradation. To overcome this issue, this letter proposes a channel state information (CSI)-body-coordinate velocity profile (BVP) dual-feature fusion network (CBDFFNet). CBDFFNet employs a heterogeneous feature extraction pipeline together with a cross-representation fusion mechanism to effectively exploit the complementary characteristics of the two representations. Specifically, CSI tensors are processed by a 2-D convolutional neural network (CNN) with residual connections, enhanced through squeeze-and-excitation attention and multiscale feature fusion, while BVP features are refined via a lightweight 3-D CNN with depthwise separable convolutions and temporal attention. Building on these representations, a hybrid fusion strategy combining cross-modal attention, graph-based feature fusion, and adaptive weight learning is introduced to construct a multidomain feature classifier. Extensive experiments on the large-scale Widar 3.0 dataset demonstrate that CBDFFNet consistently outperforms state-of-the-art methods in gesture recognition accuracy and robustness across diverse environments, highlighting its potential for robust, device-free intelligent sensing applications.
{"title":"Wi-Fi-Based Human Gesture Recognition via CSI–BVP Dual-Feature Fusion Network","authors":"Jian You;JunJie Yang;Chao Yang;Cheng Luo;Zhilang Peng","doi":"10.1109/LSENS.2026.3655777","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3655777","url":null,"abstract":"Wi-Fi-based human gesture recognition (HGR) leverages motion-induced perturbations in wireless sensing systems, enabling intelligent and contact-free monitoring. However, existing Wi-Fi-based HGR approaches often suffer from inconsistent cross-scene feature representations, leading to significant performance degradation. To overcome this issue, this letter proposes a channel state information (CSI)-body-coordinate velocity profile (BVP) dual-feature fusion network (CBDFFNet). CBDFFNet employs a heterogeneous feature extraction pipeline together with a cross-representation fusion mechanism to effectively exploit the complementary characteristics of the two representations. Specifically, CSI tensors are processed by a 2-D convolutional neural network (CNN) with residual connections, enhanced through squeeze-and-excitation attention and multiscale feature fusion, while BVP features are refined via a lightweight 3-D CNN with depthwise separable convolutions and temporal attention. Building on these representations, a hybrid fusion strategy combining cross-modal attention, graph-based feature fusion, and adaptive weight learning is introduced to construct a multidomain feature classifier. Extensive experiments on the large-scale Widar 3.0 dataset demonstrate that CBDFFNet consistently outperforms state-of-the-art methods in gesture recognition accuracy and robustness across diverse environments, highlighting its potential for robust, device-free intelligent sensing applications.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-16 | DOI: 10.1109/LSENS.2026.3655018
Mohammed Hadhi Pazhaya Puthanveettil;Siri Chandana Amarakonda;Subho Dasgupta
Advancements in smart sensing technologies for biomedical, agriculture, pharmaceuticals, and the Internet of Things (IoT) have driven a growing demand for large-scale sensor production. Many such applications require limited operational lifetimes, making biodegradable transient sensors a promising route toward sustainable, ecofriendly systems. In this work, we present a humidity sensor in which the chitosan-polyvinyl alcohol substrate not only provides mechanical support but also serves as the sensing layer with MXene as conducting interdigitated electrodes. The film dissolves completely in water within one day, enabling transient operation, while the MXene-based electrodes can be recovered and reused. The sensor exhibits a clear response to relative humidity in the range of 18%–68% relative humidity (RH), with a sensitivity of 528% at 68% RH. In addition, breath monitoring experiments demonstrate its potential for biosensing applications. Biodegradability tests confirm complete degradation of the substrate in soil, water, and acidified water, along with successful recycling of the MXene electrodes. This study demonstrates a sustainable strategy for transient, recyclable, and ecofriendly humidity sensors with practical applications in smart and green electronics.
{"title":"Water Soluble Flexible Substrate-Based Humidity Sensor for Transient Sensing","authors":"Mohammed Hadhi Pazhaya Puthanveettil;Siri Chandana Amarakonda;Subho Dasgupta","doi":"10.1109/LSENS.2026.3655018","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3655018","url":null,"abstract":"Advancements in smart sensing technologies for biomedical, agriculture, pharmaceuticals, and the Internet of Things (IoT) have driven a growing demand for large-scale sensor production. Many such applications require limited operational lifetimes, making biodegradable transient sensors a promising route toward sustainable, ecofriendly systems. In this work, we present a humidity sensor in which the chitosan-polyvinyl alcohol substrate not only provides mechanical support but also serves as the sensing layer with MXene as conducting interdigitated electrodes. The film dissolves completely in water within one day, enabling transient operation, while the MXene-based electrodes can be recovered and reused. The sensor exhibits a clear response to relative humidity in the range of 18%–68% relative humidity (RH), with a sensitivity of 528% at 68% RH. In addition, breath monitoring experiments demonstrate its potential for biosensing applications. Biodegradability tests confirm complete degradation of the substrate in soil, water, and acidified water, along with successful recycling of the MXene electrodes. This study demonstrates a sustainable strategy for transient, recyclable, and ecofriendly humidity sensors with practical applications in smart and green electronics.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}