Pub Date : 2026-01-12 | DOI: 10.1109/LSENS.2026.3652223
Tzu-Hsien Sang;Tzu-Ching Lin
Spatial resolution is a critical factor in promoting the deployment of light detection and ranging (LiDAR). In our point-scanning single-photon avalanche diode (SPAD) LiDAR at NYCU, the field of view (FOV) of a pixel is designed to be larger than the pixel size to avoid missed detection of small objects. Therefore, the multiple echoes received in a pixel may contain depth information for immediately neighboring pixels, and this can be exploited to generate high-resolution depth images. Existing upsampling/interpolation methods operate on single-echo depth data and may produce incorrect depth information. In this letter, a SPAD LiDAR with an efficient algorithm is proposed to generate high-resolution (128 × 256) depth images from a relatively low-resolution (64 × 128) SPAD array. In the upsampling operation, the overlapping FOVs in the SPAD LiDAR are exploited to process multiecho data. In addition, a quantitative metric is developed to evaluate image quality. Experimental results demonstrate that the proposed approach achieves better depth accuracy and enhances the visual delineation of objects in experimental scenes.
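The upsampling idea — assigning echoes that likely belong to a neighboring pixel's overlapping FOV to the corresponding subpixels — can be illustrated with a toy 1-D sketch. This is a hypothetical simplification, not the letter's algorithm; the tolerance `tol` and the nearest-neighbor consistency rule are assumptions.

```python
import numpy as np

def upsample_multiecho(echoes, tol=0.5):
    """Toy 1-D illustration of multiecho upsampling.

    Each coarse pixel i carries a list of echo depths (its FOV overlaps
    its neighbors). Each of the two fine subpixels picks the echo most
    consistent with the primary echo of the adjacent coarse neighbor,
    falling back to the pixel's own primary echo when nothing agrees.
    """
    n = len(echoes)
    fine = np.empty(2 * n)
    for i, pix in enumerate(echoes):
        primary = pix[0]
        for side, j in ((0, i - 1), (1, i + 1)):
            ref = echoes[j][0] if 0 <= j < n else primary
            # echo closest to the neighbor's primary depth
            best = min(pix, key=lambda d: abs(d - ref))
            fine[2 * i + side] = best if abs(best - ref) <= tol else primary
    return fine
```

With a two-echo pixel straddling a depth edge (`[[2.0], [2.0, 5.0], [5.0]]`), the middle pixel's subpixels split toward their respective neighbors, sharpening the edge instead of smearing it as single-echo interpolation would.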
"High-Resolution LiDAR via Upsampling Multiecho Data Obtained From Low-Resolution SPAD Arrays," IEEE Sensors Letters, vol. 10, no. 2, pp. 1–4.
Pub Date : 2026-01-12 | DOI: 10.1109/LSENS.2026.3652230
Arinobu Niijima
Monitoring muscle tension unobtrusively is important for sports and music performance. Conventional approaches estimate bilateral arm exertion by placing electromyography (EMG) sensors on the arms; however, in sports the arms may strike players, risking sensor detachment or damage, and in artistic settings visible arm-mounted sensors may be socially unacceptable. To address these issues, I propose a method that estimates bilateral arm exertion using a single EMG sensor on the posterior neck. The approach leverages the kinetic-chain principle: when the arms tense, cervical muscles coactivate to stabilize posture. Using cervical EMG and machine-learning models, I classify arm tension and regress percent maximum voluntary contraction (%MVC) across both arms. I evaluated the method across five tasks including grip strength, golf putting, and piano playing. The method achieved a mean binary-classification accuracy of 76%. For regression of mean arm %MVC, it yielded an average RMSE of 10% and $R^{2}$ of 0.72.
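The pipeline — window-wise features from the cervical EMG trace feeding a regression model for mean arm %MVC — can be sketched minimally. The RMS feature and the ridge regressor here are illustrative stand-ins; the letter does not commit to these exact choices.

```python
import numpy as np

def emg_rms_features(x, win=200):
    """Window-wise RMS of a raw EMG trace (a common amplitude feature)."""
    n = len(x) // win
    return np.sqrt((x[: n * win].reshape(n, win) ** 2).mean(axis=1))

def fit_ridge(X, y, lam=1e-3):
    """Ridge regression via the normal equations: w = (X'X + lam*I)^-1 X'y."""
    X = np.column_stack([np.ones(len(X)), X])  # prepend bias column
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def predict(w, X):
    """Apply the fitted weights (bias first) to feature rows."""
    return np.column_stack([np.ones(len(X)), X]) @ w
```

In practice one would map neck-EMG feature vectors to reference %MVC labels measured on the arms during training, then predict from the neck sensor alone.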
"Neck EMG-Based Estimation of Upper-Limb Muscle Tension," IEEE Sensors Letters, vol. 10, no. 2, pp. 1–4.
Pub Date : 2026-01-05 | DOI: 10.1109/LSENS.2025.3650702
Tao Cao;Hongfei Cao;Ang Chen;Xinglin Zhang;Shuchen Bai
To enhance the recognition performance of flexible tactile sensing systems in human–computer interaction, this letter proposes a tactile signal recognition method for polyvinylidene fluoride (PVDF) sensor arrays based on time-series imaging. In this study, a 4 × 4 PVDF sensor array was constructed for signal acquisition. The key innovation is the conversion of preprocessed voltage time series into two distinct image representations, Gramian Angular Field (GAF) and Markov Transition Field (MTF) images, to fully exploit the dynamic features of the signals. These images are then fed into a convolutional neural network (CNN) for end-to-end classification. Experimental results demonstrate that the proposed method achieves outstanding performance in contact state recognition, with both the GAF+CNN and MTF+CNN models exceeding 95% in accuracy, precision, recall, and F1-score. The MTF+CNN model shows a slight advantage in recall and F1-score. In terms of deployment, the system achieves millisecond-level single-sample inference latency on a general-purpose computing platform (MacBook Pro 2022, M2 chip), proving its potential for real-time practical applications. This work provides an effective solution for developing high-precision, low-latency tactile sensing systems.
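The Gramian Angular Field encoding itself is standard and can be reproduced in a few lines: rescale the series to [-1, 1], map each sample to an angle, and form pairwise trigonometric sums or differences. This is a generic implementation, not the authors' code.

```python
import numpy as np

def gramian_angular_field(x, method="summation"):
    """Encode a 1-D series as a GAF image.

    Rescale to [-1, 1], map each sample to phi = arccos(x), then build
    cos(phi_i + phi_j) (summation, GASF) or sin(phi_i - phi_j)
    (difference, GADF). The result is an NxN image for a length-N series.
    """
    x = np.asarray(x, dtype=float)
    xs = 2 * (x - x.min()) / (x.max() - x.min()) - 1  # rescale to [-1, 1]
    phi = np.arccos(np.clip(xs, -1, 1))
    if method == "summation":
        return np.cos(phi[:, None] + phi[None, :])
    return np.sin(phi[:, None] - phi[None, :])
```

The resulting image preserves temporal dependency along its diagonal (the GASF diagonal equals cos(2·phi_i)), which is what lets a 2-D CNN pick up dynamic features of the voltage signal.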
"Tactile Recognition Using PVDF Sensor Arrays With Time-Series Image Encoding and CNN," IEEE Sensors Letters, vol. 10, no. 2, pp. 1–4.
Pub Date : 2026-01-05 | DOI: 10.1109/LSENS.2026.3650975
Yash Dahima;Tapaswini Sarangi;Yogeshkumar Patel;Lokesh Kumar Sahu;Narendra Ojha;Aditya Vaishya
Confidence in low-cost sensors (LCS) as a viable alternative to expensive research-grade instruments for air quality monitoring is increasing, owing to their low cost and ease of deployment. However, numerous studies have reported significant variations in their performance under varying environmental conditions, highlighting the need for detailed evaluation and careful calibration before they are applied in a region of interest. Such studies have been relatively few in India and are particularly lacking in the semiarid urban environments of western India. We therefore calibrated LCS for measurements of particle size distribution (OPC-N3) and ozone (O3) (Alphasense OXB4) against reference-grade measurements (GRIMM, Thermo) and evaluated LCS performance over Ahmedabad. For computing PM2.5, corrections were derived from the particle mass size distribution, which significantly improved accuracy relative to the reference measurements (R² ∼ 0.7, normalized mean absolute bias ∼ 29%). O3 variability was calibrated using reference O3 and sensor-measured temperature and relative humidity, with the aid of machine learning. Measurements from the two O3 sensors showed good intercorrelation and agreement with the reference (R² ∼ 0.7). Our study fills a gap in the calibration and performance evaluation of LCSs in a distinct urban environment of western India and highlights the need for careful corrections to obtain reliable air quality measurements. LCS-based measurements captured typical features of urban air quality in this region and can therefore be deployed to quantify trends and to understand the key factors governing aerosols and O3. The study can serve as a reference for future development of low-cost comprehensive measurements of atmospheric composition, including other key air pollutants.
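The calibration step — fitting reference concentrations against raw sensor output plus temperature and relative humidity — can be sketched with a simple multivariate least-squares fit. This linear model is a stand-in for the letter's machine-learning calibration; the variable names and coefficients are illustrative.

```python
import numpy as np

def calibrate(raw, rh, temp, ref):
    """Fit ref ≈ a*raw + b*rh + c*temp + d by least squares.

    Returns the coefficient vector [a, b, c, d]. rh and temp act as
    covariates correcting humidity/temperature artifacts in the LCS.
    """
    X = np.column_stack([raw, rh, temp, np.ones(len(raw))])
    coef, *_ = np.linalg.lstsq(X, ref, rcond=None)
    return coef

def apply_cal(coef, raw, rh, temp):
    """Apply a fitted calibration to new raw sensor readings."""
    return np.column_stack([raw, rh, temp, np.ones(len(raw))]) @ coef
```

A machine-learning calibrator (e.g., a tree ensemble) replaces the linear map but keeps the same interface: reference values as targets, raw signal plus meteorology as inputs.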
"Calibration and Performance Evaluation of Low-Cost Air Quality Sensors in an Urban Environment of Western India," IEEE Sensors Letters, vol. 10, no. 2, pp. 1–4.
Pub Date : 2026-01-01 | DOI: 10.1109/LSENS.2025.3650141
Vikas Kumar;Shivansh Awasthi;Vikash Sharma;Santosh Parajuli;Thomas George Thundat;Ankur Gupta
This letter proposes resonance-based sensing on large surfaces using 13.56 MHz RF excitation to enhance human safety in human–machine interaction (HMI). In this approach, the whole large conducting surface itself functions as a sensor, varying its resonance characteristics in response to object interactions. The sensing system is compact in design and optimized for operation in the industrial, scientific, and medical band. The surface is excited by a single-conductor-powered resonant coil, which provides good passive voltage gain at −16 dBm input power. A secondary resonant coil amplifies the voltage variations caused by a human touch on the surface. The system delivers a high-voltage output signal that can be interfaced directly with a computing platform through an analog-to-digital converter, eliminating the need for external amplifiers or analog filters. This technique offers a cost-effective, low-power, large-area sensing solution for human–robot collaboration. The proposed system can be seamlessly integrated into machines or robotic platforms for sensing applications in HMI.
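The underlying physics can be sketched directly: a touch adds body capacitance to the resonant tank, shifting the resonance away from the 13.56 MHz operating point, which the secondary coil sees as a voltage change. The component values and detection threshold below are illustrative, not taken from the letter.

```python
import math

def resonant_frequency(L, C):
    """f0 = 1 / (2*pi*sqrt(L*C)) for an LC resonant tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def touch_detected(L, C_surface, C_touch, rel_shift=0.01):
    """Flag a touch when added body capacitance shifts f0 by more than
    rel_shift (1% by default) relative to the idle resonance."""
    f_idle = resonant_frequency(L, C_surface)
    f_touch = resonant_frequency(L, C_surface + C_touch)
    return (f_idle - f_touch) / f_idle > rel_shift
```

For example, with L = 1 µH a surface capacitance near 138 pF resonates close to 13.56 MHz; a few tens of picofarads of body capacitance detunes it well past a 1% threshold.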
"Resonance-Based Sensing on Large Surfaces Using RF Excitation for Human–Machine Interaction," IEEE Sensors Letters, vol. 10, no. 2, pp. 1–4.
Pub Date : 2025-12-31 | DOI: 10.1109/LSENS.2025.3649802
Soongyu Kang;Yongchul Jung;Sewoon Oh;Yunho Jung
In this letter, we propose a prognostics and health management (PHM) method for permanent magnet synchronous motors (PMSMs) in urban air mobility. The method uses a Transformer and three-phase current sensors. The short-time Fourier transform is employed for current signals to capture fault-related information. The Transformer architecture effectively extracts both local and global features from time–frequency representations, enabling high classification performance. However, its high computational cost and large model size hinder deployment in practical industrial applications. To address this challenge, we designed a lightweight binary-weighted Transformer (BWT) for PMSM PHM, reducing the model size to 5.5% of the baseline. The proposed BWT achieves 99.81% classification accuracy across four classes. We also developed a hardware accelerator for matrix multiplication—the most time-consuming operation in BWT—and implemented it on a field-programmable gate array. The proposed SW/HW co-design achieved an 85.55× speedup over the software-only implementation on the ARM microprocessor unit.
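The binary-weight idea can be sketched with the standard 1-bit approximation W ≈ α·sign(W), where α is a per-output-channel scale. This is a generic XNOR-style scheme, not necessarily the letter's exact quantizer; it shows why matrix multiplication reduces to sign flips plus one rescale, the operation the FPGA accelerator targets.

```python
import numpy as np

def binarize_weights(W):
    """Per-row 1-bit approximation: W ≈ alpha * sign(W).

    alpha = mean(|W|) per output channel minimizes the L2 error of the
    approximation; signs are stored as int8 (+1/-1), shrinking storage
    from 32 bits to 1 bit per weight plus one scale per row.
    """
    alpha = np.abs(W).mean(axis=1, keepdims=True)
    signs = np.where(W >= 0, 1.0, -1.0).astype(np.int8)
    return signs, alpha

def binary_matmul(x, signs, alpha):
    """Multiply by {+1,-1} weights, then rescale by alpha per row.

    The inner product needs only additions/subtractions, which is what
    makes a hardware accelerator for this op attractive.
    """
    return (x @ signs.T) * alpha.ravel()
```

A full binary-weighted Transformer applies this to the attention and feed-forward projection matrices while keeping activations in higher precision.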
"FPGA Implementation of Binary-Weighted Transformer for Prognostics and Health Management of Permanent Magnet Synchronous Motors Using Current Sensors," IEEE Sensors Letters, vol. 10, no. 2, pp. 1–4.
Pub Date : 2025-12-30 | DOI: 10.1109/LSENS.2025.3649236
Gajendra Singh;Deepak Mishra;Jayant Kumar Mohanta;Rengarajan Rajagopal;Rahul Choudhary;Alok Kumar Sharma;Pushpinder Singh Khera
This letter presents a novel, integrated multimodal data acquisition system for simultaneously capturing six-degrees-of-freedom motion data and real-time visual information, designed primarily for ultrasound imaging applications. The system combines inertial measurement units, ArUco markers, and a video capture device to achieve high-accuracy motion tracking synchronized with real-time ultrasound imaging. Our approach provides a cost-effective and portable setup capable of recording accurate translational and rotational data. The experimental results are promising, with a deviation of $1.95 \pm 1.10$ mm from the path driven by the robotic manipulator, which can be further improved by controlling acceleration and velocity. Its real-time performance, ease of use, and potential for AI model training make it valuable for various medical applications, including ultrasound-guided procedures and motion analysis.
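The reported path-deviation metric (mean ± standard deviation of pointwise error against the robot-driven reference path) can be computed straightforwardly; this is a plain reading of the metric, not the authors' code.

```python
import numpy as np

def path_deviation(est, ref):
    """Pointwise Euclidean deviation between an estimated trajectory and
    a reference path (both N x 3 arrays of positions), reported as
    (mean, std) in the same units as the input."""
    d = np.linalg.norm(np.asarray(est) - np.asarray(ref), axis=1)
    return d.mean(), d.std()
```

Feeding the fused IMU/ArUco pose estimates and the commanded robot waypoints into this function yields a summary of the form 1.95 ± 1.10 mm.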
"An Integrated Multimodal Data Acquisition System for Ultrasound Imaging," IEEE Sensors Letters, vol. 10, no. 2, pp. 1–4.
Pub Date : 2025-12-26 | DOI: 10.1109/LSENS.2025.3648946
Swarubini P J;Ryunosuke Kirita;Tomohiko Igasaki;Nagarajan Ganapathy
Cognitive load (CL) reflects the mental effort required during a task. Traditional CL assessment methods suffer from intersubject and intrasubject variability and lack continuous monitoring. Recently, contactless biosignal sensing has emerged as an alternative for unobtrusive CL assessment. In this study, we propose a contactless CL assessment framework using imaging photoplethysmography (iPPG) and gaze signals with cross-modality-driven fusion to classify varied CL states. Facial videos were acquired from 23 healthy subjects in a semicontrolled environment. iPPG and gaze signals were extracted using the local group invariance method and the MediaPipe library, respectively. The signals were segmented, applied to parallel 1-D convolutional neural networks, and fused using cross-modal attention. The proposed approach is able to discriminate between varied CL states. Experimental results show that the fusion model achieved an average classification accuracy (ACC) of 67.96% and F-measure (F-m) of 71.69%, outperforming single-modality models. The iPPG signals achieved the best mean ACC (55.66%) and F-m (60.71%) among the individual models. While electroencephalography and multimodal sensors report close to 60%–70% accuracy, our contactless method attains comparable performance using solely smartphone video. Thus, the proposed framework could be extended for real-time CL monitoring.
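Cross-modal attention fusion of the two feature streams can be sketched with plain scaled dot-product attention, one modality querying the other. This is a minimal stand-in for the letter's fusion module; the shapes and which modality plays query versus key/value are assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(q_feats, kv_feats):
    """Scaled dot-product attention across modalities.

    One modality's features (e.g., gaze) act as queries over the other
    modality's features (e.g., iPPG): out = softmax(Q K' / sqrt(d)) V.
    The output is a gaze-conditioned summary of the iPPG stream.
    """
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ kv_feats
```

In a trained model, learned projection matrices would map each modality's CNN features into the shared query/key/value spaces before this step.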
"Automated Multimodal Sensing for Cognitive Load Assessment Using Cross-Modality-Driven Attention Fusion," IEEE Sensors Letters, vol. 10, no. 2, pp. 1–4.
Pub Date : 2025-12-24 | DOI: 10.1109/LSENS.2025.3648032
Puneet Pandey;Sandeep Joshi
Optical camera communication (OCC) enables low-cost optical wireless links using complementary metal-oxide-semiconductor image sensors but is vulnerable to passive eavesdropping. This letter proposes a lightweight physical-layer security framework that leverages sensor nonidealities, specifically bad-pixel maps and rolling-shutter exposure timing, to derive device-specific entropy sources. Unlike conventional pseudorandom number generators that rely on software-based random seeding, these hardware-bound entropy sources are nonreplicable across devices and thus significantly harder for an attacker to predict or replay. A logistic chaotic map seeded with these features generates binary key streams that pass standard randomness tests, while compressed sensing provides sparse-domain encoding to lower computational and transmission overhead. Secure data transmission is realized via XOR-based stream ciphering with reconstructed keys. Simulations indicate a lower bit-error rate (BER) for legitimate receivers while mismatched keys yield eavesdropper BER $\approx 0.5$. The estimated energy budget is 5.3 mW on a Cortex-M4-class platform, aligning with reported sub-mW visual sensor nodes and highlighting the suitability of the approach for Internet of Things and vehicular OCC systems.
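The keystream construction — a logistic chaotic map seeded from device-specific entropy, thresholded to bits, then XORed with the payload — can be sketched directly. The seed value, map parameter r, and 0.5 threshold below are illustrative choices, not parameters from the letter.

```python
def logistic_keystream(seed, n, r=3.99):
    """Generate n key bytes from the logistic map x <- r*x*(1-x).

    For r near 4 the map is chaotic, so two devices whose sensor-derived
    seeds differ even slightly produce unrelated streams. Each iterate
    is thresholded at 0.5 to yield one key bit.
    """
    x, out = seed, bytearray()
    for _ in range(n):
        byte = 0
        for _ in range(8):
            x = r * x * (1.0 - x)
            byte = (byte << 1) | (1 if x > 0.5 else 0)
        out.append(byte)
    return bytes(out)

def xor_cipher(data, key):
    """XOR stream cipher: the same operation encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, key))
```

Both endpoints reconstruct the same key from the shared seed, so decryption is simply a second XOR with the same stream; a mismatched seed yields an uncorrelated stream and hence the eavesdropper BER near 0.5.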
"Sensor-Driven Entropy for Energy-Efficient Security in Optical Camera Communication Systems," IEEE Sensors Letters, vol. 10, no. 2, pp. 1–4.