Pub Date: 2026-01-12 | DOI: 10.1109/LSENS.2026.3652223
Tzu-Hsien Sang;Tzu-Ching Lin
Spatial resolution is a critical factor in promoting the deployment of light detection and ranging (LiDAR). In our point-scanning single-photon avalanche diode (SPAD) LiDAR at NYCU, the field of view (FOV) of a pixel is designed to be larger than the pixel size to avoid missed detection of small objects. As a result, the multiple echoes received in a pixel may contain depth information for immediately neighboring pixels, and this can be exploited to generate high-resolution depth images. Existing upsampling/interpolation methods operate on single-echo depth data and may produce incorrect depth information. In this letter, a SPAD LiDAR with an efficient algorithm is proposed to generate high-resolution (128 × 256) depth images from a relatively low-resolution (64 × 128) SPAD array. In the upsampling operation, the overlapping FOVs in the SPAD LiDAR are exploited to process multiecho data. In addition, a quantitative metric is developed to evaluate the image quality. Experimental results demonstrate that the proposed approach yields more accurate depth information and better delineates objects in experimental scenes.
Title: "High-Resolution LiDAR via Upsampling Multiecho Data Obtained From Low-Resolution SPAD Arrays" (IEEE Sensors Letters, vol. 10, no. 2, pp. 1-4).
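The overlap-aware upsampling step is not spelled out in the abstract. As a rough illustration of the idea only, the toy numpy sketch below assigns each high-resolution sub-pixel the echo, from its parent low-resolution pixel, whose depth best matches the neighbor it leans toward; the function name and selection rule are hypothetical, not the authors' algorithm.

```python
import numpy as np

def upsample_multiecho(echoes, scale=2):
    """Toy multiecho upsampling (illustrative sketch, not the letter's method).

    `echoes[i][j]` holds the echo depths detected in low-resolution pixel
    (i, j).  Because pixel FOVs overlap, a later echo may belong to a
    neighboring object, so each high-resolution sub-pixel picks the echo
    closest to the first-echo depth of the neighbor it leans toward.
    """
    H, W = len(echoes), len(echoes[0])
    # first-echo depth map (the usual single-echo baseline)
    first = np.array([[e[0] for e in row] for row in echoes], float)
    out = np.zeros((H * scale, W * scale))
    for i in range(H * scale):
        for j in range(W * scale):
            li, lj = i // scale, j // scale
            # clamped indices of the diagonal neighbor this sub-pixel leans toward
            ni = min(max(li + (1 if i % scale else -1), 0), H - 1)
            nj = min(max(lj + (1 if j % scale else -1), 0), W - 1)
            cand = np.asarray(echoes[li][lj])
            out[i, j] = cand[np.argmin(np.abs(cand - first[ni, nj]))]
    return out
```

For a 1 × 2 grid where the left pixel sees two echoes (its own object at depth 1 and the neighbor's at depth 5), the sub-pixels adjacent to the right neighbor inherit the second echo.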
Pub Date: 2026-01-12 | DOI: 10.1109/LSENS.2026.3652230
Arinobu Niijima
Monitoring muscle tension unobtrusively is important for sports and music performance. Conventional approaches estimate bilateral arm exertion by placing electromyography (EMG) sensors on the arms; however, in sports the arms may strike players, risking sensor detachment or damage, and in artistic settings visible arm-mounted sensors may be socially unacceptable. To address these issues, I propose a method that estimates bilateral arm exertion using a single EMG sensor on the posterior neck. The approach leverages the kinetic-chain principle: when the arms tense, cervical muscles coactivate to stabilize posture. Using cervical EMG and machine-learning models, I classify arm tension and regress percent maximum voluntary contraction (%MVC) across both arms. I evaluated the method across five tasks including grip strength, golf putting, and piano playing. The method achieved a mean binary-classification accuracy of 76%. For regression of mean arm %MVC, it yielded an average RMSE of 10% and $R^{2}$ of 0.72.
Title: "Neck EMG-Based Estimation of Upper-Limb Muscle Tension" (IEEE Sensors Letters, vol. 10, no. 2, pp. 1-4).
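The %MVC regression is scored with RMSE and $R^{2}$; for reference, these are the standard definitions of those two numbers, in minimal numpy form:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```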
Pub Date: 2026-01-05 | DOI: 10.1109/LSENS.2025.3650621
Philipp Reitz;Tobias Veihelmann;Jonas Bönsch;Norman Franchi;Maximilian Lübke
Low resolution, sparse reflections, and environmental noise limit the reliability of radar-based object detection. This letter presents a you only look once (YOLO)-inspired deep learning model with dual-radar fusion to enhance detection robustness. Range–Doppler maps from two static 60 GHz FMCW radars are processed using a dual-backbone architecture with convolutional block attention module-based attention and a lightweight dynamic weighting module. The system monitors moving humans in a parking garage. At the best operating point, the proposed fusion improves the F1-score from 0.944 (single radar) to 0.962, with precision/recall increasing from 0.930/0.959 to 0.953/0.972. At matched recall ($\approx 0.967$), the false positive rate decreases from 0.070 to 0.031, corresponding to a reduction of about 55%. Real-time performance is maintained with inference speeds above 100 FPS on a desktop CPU. These results demonstrate that dual-radar feature fusion enables accurate and efficient radar perception in cluttered environments.
Title: "Deep Learning-Based Multiradar Fusion for Robust Real-Time Object Detection" (IEEE Sensors Letters, vol. 10, no. 2, pp. 1-4).
Pub Date: 2026-01-05 | DOI: 10.1109/LSENS.2025.3650702
Tao Cao;Hongfei Cao;Ang Chen;Xinglin Zhang;Shuchen Bai
To enhance the recognition performance of flexible tactile sensing systems in human–computer interaction, this letter proposes a tactile signal recognition method for polyvinylidene fluoride (PVDF) sensor arrays based on time-series imaging. In this study, a 4 × 4 PVDF sensor array was constructed for signal acquisition. The key innovation is the conversion of preprocessed voltage time series into two distinct image representations, Gramian Angular Field (GAF) and Markov Transition Field (MTF) images, to fully exploit the dynamic features of the signals. These images are then fed into a convolutional neural network (CNN) for end-to-end classification. Experimental results demonstrate that the proposed method achieves outstanding performance in contact state recognition, with both the GAF+CNN and MTF+CNN models exceeding 95% in accuracy, precision, recall, and F1-score. The MTF+CNN model shows a slight advantage in recall and F1-score. In terms of deployment, the system achieves millisecond-level single-sample inference latency on a general-purpose computing platform (MacBook Pro 2022, M2 chip), proving its potential for real-time practical applications. This work provides an effective solution for developing high-precision, low-latency tactile sensing systems.
Title: "Tactile Recognition Using PVDF Sensor Arrays With Time-Series Image Encoding and CNN" (IEEE Sensors Letters, vol. 10, no. 2, pp. 1-4).
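The GAF encoding follows a standard construction (libraries such as pyts provide it, along with the Markov Transition Field); a minimal numpy version for a single 1-D signal, as a sketch of the encoding step:

```python
import numpy as np

def gasf(x):
    """Gramian Angular Summation Field of a 1-D series.

    Standard construction: rescale the series to [-1, 1], map each sample
    to an angle phi = arccos(x), then GASF[i, j] = cos(phi_i + phi_j).
    """
    x = np.asarray(x, float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])
```

The resulting image is symmetric, so a CNN sees each pairwise temporal correlation exactly once per triangle.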
Pub Date: 2026-01-05 | DOI: 10.1109/LSENS.2026.3650795
Aaqib Raza;Mohd Zuki Yusoff
EEG-based motor imagery classification remains constrained by severe intersubject variability, domain shift, and the computational cost of deep models. This study introduces a novel model-agnostic meta-learning (MAML)-based lightweight neural network with a dual-branch domain adaptation pipeline that jointly meta-optimizes task-specific and domain-invariant features. The architectural novelty lies in combining depthwise-separable convolutions, squeeze-and-excitation attention, and gradient-reversal-layer-based domain alignment inside an MAML loop, enabling fast adaptation with 40% fewer parameters than conventional CNN pipelines. Evaluated on the brain–computer interface (BCI) Competition IV-2a and PhysioNet datasets, the model achieves 81.3% cross-subject accuracy, 76.3% cross-task accuracy, and 79.6% four-class performance, outperforming state-of-the-art CNN, ATCNet, WST-CNN, and DB-ATCNet baselines. The lightweight design (0.82 M parameters, 0.17 G MACs) enables real-time deployment on a Jetson Nano at 38 fps, confirming its suitability for portable and edge BCI applications.
Title: "An MAML-Based Lightweight Neural Network With Domain Adaptation for Cross-Subject and Generalized EEG-Based Motor Imagery Classification" (IEEE Sensors Letters, vol. 10, no. 2, pp. 1-4).
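Part of the parameter reduction comes from depthwise-separable convolutions, whose savings can be illustrated with a simple per-layer weight count (example kernel and channel sizes below are hypothetical, not the paper's architecture):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Example layer: 3 x 3 kernel, 64 -> 128 channels
standard = conv_params(3, 64, 128)             # 73728 weights
separable = separable_conv_params(3, 64, 128)  # 8768 weights, ~8x smaller
```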
Confidence in the use of low-cost sensors (LCS) as a viable alternative to expensive research-grade instruments is increasing for air quality monitoring, owing to their low cost and ease of deployment. However, numerous studies have reported significant variations in their performance under varying environmental conditions, highlighting the need for detailed evaluation and careful calibration prior to application in the region of interest. Such studies have been relatively few in India and are particularly lacking in the semiarid urban environments of western India. In this regard, we calibrated LCS for measurements of particle size distribution (OPC-N3) and ozone (O3) (Alphasense OXB4) using reference-grade measurements (GRIMM, Thermo) and evaluated LCS performance over Ahmedabad. For computing PM2.5, corrections were derived from the particle mass size distribution, which significantly improved accuracy relative to the reference measurements (R2 ∼ 0.7, normalized mean absolute bias ∼29%). O3 variability is calibrated against reference O3 using sensor-measured temperature and relative humidity, with the aid of machine learning. Measurements from the two O3 sensors showed good intercorrelation and agreement with the reference (R2 ∼ 0.7). Our study fills a gap in the calibration and performance evaluation of LCSs in a distinct urban environment of western India and highlights the need for careful corrections to obtain reliable air quality measurements. LCS-based measurements captured typical features of urban air quality in this region and can therefore be deployed to quantify trends and to understand the important factors governing aerosols and O3. The study can serve as a reference for future development toward low-cost comprehensive measurement of atmospheric composition, including other key air pollutants.
Title: "Calibration and Performance Evaluation of Low-Cost Air Quality Sensors in an Urban Environment of Western India" by Yash Dahima;Tapaswini Sarangi;Yogeshkumar Patel;Lokesh Kumar Sahu;Narendra Ojha;Aditya Vaishya (IEEE Sensors Letters, vol. 10, no. 2, pp. 1-4; pub date 2026-01-05; DOI: 10.1109/LSENS.2026.3650975).
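The paper's corrections rest on mass-size-distribution adjustments and machine learning; as a simpler baseline for the same idea, a multilinear calibration of a raw sensor reading against reference data, with temperature and relative humidity as covariates, can be fit by least squares (function names and the linear form are illustrative, not the authors' model):

```python
import numpy as np

def fit_calibration(raw, temp, rh, reference):
    """Least-squares fit of: reference ~ a*raw + b*temp + c*rh + d."""
    raw = np.asarray(raw, float)
    X = np.column_stack([raw, temp, rh, np.ones_like(raw)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(reference, float), rcond=None)
    return coef

def apply_calibration(coef, raw, temp, rh):
    """Apply fitted coefficients to new raw readings."""
    raw = np.asarray(raw, float)
    X = np.column_stack([raw, temp, rh, np.ones_like(raw)])
    return X @ coef
```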
Pub Date: 2026-01-01 | DOI: 10.1109/LSENS.2025.3650141
Vikas Kumar;Shivansh Awasthi;Vikash Sharma;Santosh Parajuli;Thomas George Thundat;Ankur Gupta
This letter proposes resonance-based sensing on large surfaces using 13.56 MHz RF excitation to enhance human safety in human–machine interaction (HMI). In this approach, the whole conducting surface itself functions as a sensor, varying its resonance characteristics in response to object interactions. The sensing system is compact in design and optimized for operation in the industrial, scientific, and medical band. The surface is excited using a single conductor-powered resonant coil, which provides good passive voltage gain at −16 dBm input power. A secondary resonant coil is employed to amplify voltage variations caused by human touch on the surface. The system delivers a high-voltage output signal that can be interfaced directly with a computing platform through an analog-to-digital converter, eliminating the need for external amplifiers or analog filters. This technique offers a cost-effective, low-power, large-area sensing solution for human–robot collaboration. The proposed system can be seamlessly integrated into machines or robotic platforms for sensing applications in HMI.
Title: "Resonance-Based Sensing on Large Surfaces Using RF Excitation for Human–Machine Interaction" (IEEE Sensors Letters, vol. 10, no. 2, pp. 1-4).
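The sensing mechanism rests on the resonance of the excited surface shifting when a touching body adds capacitance; a back-of-the-envelope LC model shows the direction and rough size of the shift (component values are illustrative, not from the letter):

```python
import math

def resonant_frequency(L, C):
    """f0 = 1 / (2*pi*sqrt(L*C)) for an LC tank."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values: pick C so a 1 uH coil resonates at 13.56 MHz,
# then add a few picofarads to mimic the capacitance of a human touch.
L_COIL = 1e-6
C_TUNE = 1.0 / ((2.0 * math.pi * 13.56e6) ** 2 * L_COIL)  # ~138 pF
f_idle = resonant_frequency(L_COIL, C_TUNE)
f_touch = resonant_frequency(L_COIL, C_TUNE + 5e-12)  # touch detunes downward
```

Even a 5 pF perturbation detunes the tank by a couple of hundred kilohertz, which is what the secondary coil turns into a measurable voltage change.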
Pub Date: 2025-12-31 | DOI: 10.1109/LSENS.2025.3649802
Soongyu Kang;Yongchul Jung;Sewoon Oh;Yunho Jung
In this letter, we propose a prognostics and health management (PHM) method for permanent magnet synchronous motors (PMSMs) in urban air mobility. The method uses a Transformer and three-phase current sensors. The short-time Fourier transform is employed for current signals to capture fault-related information. The Transformer architecture effectively extracts both local and global features from time–frequency representations, enabling high classification performance. However, its high computational cost and large model size hinder deployment in practical industrial applications. To address this challenge, we designed a lightweight binary-weighted Transformer (BWT) for PMSM PHM, reducing the model size to 5.5% of the baseline. The proposed BWT achieves 99.81% classification accuracy across four classes. We also developed a hardware accelerator for matrix multiplication—the most time-consuming operation in BWT—and implemented it on a field-programmable gate array. The proposed SW/HW co-design achieved an 85.55× speedup over the software-only implementation on the ARM microprocessor unit.
Title: "FPGA Implementation of Binary-Weighted Transformer for Prognostics and Health Management of Permanent Magnet Synchronous Motors Using Current Sensors" (IEEE Sensors Letters, vol. 10, no. 2, pp. 1-4).
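The letter does not detail its binarization scheme; a common binary-weight approach (XNOR-Net-style scaling, shown here as an assumption, not the authors' exact BWT) replaces each weight tensor with a single scale and a sign matrix, which is why the matrix multiplications become cheap enough to accelerate in hardware:

```python
import numpy as np

def binarize(W):
    """Binary-weight approximation W ~ alpha * sign(W), alpha = mean(|W|).

    Storing 1-bit signs instead of 32-bit floats is what shrinks the
    model; alpha is a single scale per tensor (often per row in practice).
    """
    alpha = float(np.abs(W).mean())
    signs = np.where(W >= 0, 1.0, -1.0)
    return alpha, signs

def binary_matmul(A, alpha, signs):
    """Matrix product against a binarized weight matrix."""
    return alpha * (A @ signs)
```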
This letter presents a novel, integrated multimodal data acquisition system for simultaneously capturing six-degrees-of-freedom motion data and real-time visual information, designed primarily for ultrasound imaging applications. The system combines inertial measurement units, ArUco markers, and a video capture device to achieve high-accuracy motion tracking synchronized with real-time ultrasound imaging. Our approach provides a cost-effective and portable setup capable of recording accurate translational and rotational data. The experimental results are promising, with a deviation of $1.95 \pm 1.10$ mm from the path driven by the robotic manipulator, which can be further improved by controlling acceleration and velocity. Its real-time performance, ease of use, and potential for AI model training make it valuable for various medical applications, including ultrasound-guided procedures and motion analysis.
Title: "An Integrated Multimodal Data Acquisition System for Ultrasound Imaging" by Gajendra Singh;Deepak Mishra;Jayant Kumar Mohanta;Rengarajan Rajagopal;Rahul Choudhary;Alok Kumar Sharma;Pushpinder Singh Khera (IEEE Sensors Letters, vol. 10, no. 2, pp. 1-4; pub date 2025-12-30; DOI: 10.1109/LSENS.2025.3649236).