Early disease diagnosis has significant benefits in saving human lives by detecting biomarkers originating in various organs, such as the liver, heart, and kidney; point-of-care diagnostic devices are therefore the need of the hour. Acute myocardial infarction (AMI) is a condition in which reduced or obstructed coronary blood flow leads to oxygen deprivation, ultimately causing irreversible damage to cardiac tissue. Among the major cardiac biomarkers, myoglobin (Mb) appears at higher concentrations in blood, making its detection important for assessing cardiac conditions. Various approaches exist for detecting myoglobin, yet a direct resistive approach remains unexplored. Hence, this work reports a novel way of detecting the cardiac biomarker myoglobin by developing a flexible and cost-effective graphene-based resistor, named here the Graphene BioResistor (GBR). The GBR surface is functionalized with antibodies of a specific biomarker so that it captures only the corresponding antigen, providing highly selective detection of cardiac biomarkers. The device relates the concentration of the analyte to the resistance of the sensor. It is cost-effective, flexible, and user-friendly because of its ease of fabrication and customizable surface properties. The multilayer porous 3-D graphene surface provides a platform on whose pores the bioanalyte settles with impressive stability. The signal-to-noise ratio of the GBR is found to be 8.48. The limit of detection and limit of quantification of the device are 27.96 and 53.79 ng/ml, respectively, well within the ranges for AMI detection. The reproducibility of the GBR at a given concentration is found to be 99.19% and its repeatability 80.1%. This fabrication process can be adapted to detect several other biomarkers present in the human body at minimal cost and with ease of fabrication.
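As a hedged aside, the reported limit of detection and limit of quantification are consistent with the standard 3.3σ/S and 10σ/S estimators from a calibration slope and blank-signal noise. A minimal sketch under that assumption; the blank readings and slope below are hypothetical, not the letter's measured data:

```python
from statistics import stdev

# Hedged sketch: ICH-style LOD/LOQ estimation from a calibration slope and
# the noise of blank measurements. Values are illustrative only; the letter
# reports LOD/LOQ of 27.96 and 53.79 ng/ml but not its raw noise data.
def lod_loq(blank_signals, slope):
    """Return (LOD, LOQ) using LOD = 3.3*sigma/S and LOQ = 10*sigma/S."""
    sigma = stdev(blank_signals)       # standard deviation of blank response
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical blank resistances (kOhm) and sensitivity (kOhm per ng/ml).
lod, loq = lod_loq([10.02, 10.05, 9.98, 10.01, 10.04], slope=0.003)
print(round(lod, 2), round(loq, 2))
```

By construction LOQ/LOD is always 10/3.3 ≈ 3.03 with these estimators, so the gap between the letter's two figures (53.79/27.96 ≈ 1.92) suggests the authors may have used a different noise estimate for each; this sketch shows only the textbook form.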
{"title":"Graphene BioResistor (GBR): A Resistive Sensing Approach for the Detection of Myoglobin","authors":"Mohsina Afrooz;Sayan Das;Sumeet Walia;Aaron Elbourne;Paul Ramsland;Sanket Goel","doi":"10.1109/LSENS.2026.3659511","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3659511","url":null,"abstract":"Early disease diagnosis has significant benefits in saving human lives by detecting various biomarkers existing in various organs, such as the liver, heart, and kidney. Therefore, point-of-care diagnostic devices are the need of the hour. Acute myocardial infarction (AMI) is a condition in which reduced or obstructed coronary blood flow leads to oxygen deprivation, ultimately causing irreversible damage to cardiac tissue. Out of the major cardiac biomarkers, myoglobin (Mb) has higher concentrations in blood; therefore, it is important to detect Mb for cardiac conditions. There exist various approaches for detecting myoglobin, yet direct resistive approach is yet to be explored. Hence, this present work reports a novel way of detecting cardiac biomarker myoglobin, by developing a flexible and cost-effective graphene-based resistor named here as Graphene BioResistor (GBR). The GBR masquerades antibodies of specific biomarkers on its surface to grasp only the antigen of its own kind providing a highly selective way of detection of cardiac biomarkers. The device analyzes the behavior of the concentration of analytes to the resistance of the sensor. The device is cost-effective, flexible, and user-friendly because of its ease of fabrication and customizable surface properties. The multilayer porous 3-D graphene surface provides the platform for the bioanalyte to settle on the pores with impressive stability. The signal to noise ratio of the GBR is found to be 8.48. The limit of detection and limit of quantification of the device are 27.96 and 53.79 ng/ml, respectively, which are well within the ranges for AMI detection. 
The reproducibility of GBR at a certain concentration is found to be 99.19% and repeatability is at 80.1%. This fabrication process of GBR can be utilized to detect several other biomarkers present in human body at a very minimal cost and ease of fabrication.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 4","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147440550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-29  DOI: 10.1109/LSENS.2026.3659157
Anand Mohan;Ramnivas Sharma;Hemant Kumar Meena
Electroencephalogram (EEG)-based facial expression recognition plays an important role in affective computing and brain–computer interface systems; however, it is conceptually distinct from general emotion recognition. EEG signals recorded during facial expressions are strongly influenced by motor and sensorimotor activity associated with facial movements, whereas emotion recognition aims to decode internally generated affective states arising from distributed limbic–cortical networks. At the sensor level, EEG signals inherently exhibit a low signal-to-noise ratio, high temporal variability, and nonlinear spatial dependencies across electrodes, which further degrade the reliability of affective decoding. Conventional feature extraction techniques capture local time–frequency information but fail to preserve interelectrode spatial topology, leading to poor generalization across subjects and sessions and limiting real-time embedded deployment. To address these challenges, this work introduces a graph signal processing-based Laplacian energy (LE) feature extraction framework integrated with lightweight machine learning classifiers, explicitly modeling spatial–topological dependencies among EEG channels and enabling efficient, interpretable, and real-time affective state recognition across multiple frequency bands. EEG features are classified using random forest, support vector machine, decision tree, logistic regression (LR), K-nearest neighbors, and light gradient boosting machine classifiers, achieving 100% accuracy with a cross-validation mean accuracy above 99.89%. Implemented on the field-programmable gate array (FPGA) Python productivity for Zynq UltraScale+ MPSoCs (PYNQ-ZU) platform, the system demonstrates 6–8 mW power consumption and submillisecond latency. In particular, the proposed LE + LR model achieves 100% accuracy with only 0.007 W power and 0.08 ms latency, representing a 20×–500× gain in power efficiency and a 10×–2000× latency reduction over existing FPGA-based methods.
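One common definition of graph Laplacian energy is LE = Σ|μᵢ − 2m/n|, where μᵢ are the eigenvalues of L = D − A for a graph with n nodes and m edges. A hedged sketch of that quantity on a toy electrode graph; the letter does not spell out its exact per-band construction, and the 4-node graph below is purely illustrative:

```python
import numpy as np

# Hedged sketch: Laplacian energy LE = sum_i |mu_i - 2m/n| of a channel
# graph, with mu_i the eigenvalues of the graph Laplacian L = D - A.
# The 4-electrode path graph here is a made-up example, not the letter's
# EEG montage.
def laplacian_energy(adj):
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    m = adj.sum() / 2.0                      # number of edges
    L = np.diag(adj.sum(axis=1)) - adj       # graph Laplacian
    mu = np.linalg.eigvalsh(L)               # Laplacian spectrum
    return float(np.abs(mu - 2.0 * m / n).sum())

# Toy electrode graph: a path 0-1-2-3.
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(laplacian_energy(A))
```

In a band-wise pipeline, the adjacency would typically be derived per frequency band (e.g., from interchannel correlation), giving one LE feature per band.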
{"title":"Real-Time EEG-Based Facial Expression Recognition Using Laplacian Energy Features on FPGA","authors":"Anand Mohan;Ramnivas Sharma;Hemant Kumar Meena","doi":"10.1109/LSENS.2026.3659157","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3659157","url":null,"abstract":"Electroencephalogram (EEG)-based facial expression recognition plays an important role in affective computing and brain–computer interface systems; however, it is conceptually distinct from general emotion recognition. EEG signals recorded during facial expressions are strongly influenced by motor and sensorimotor activity associated with facial movements, whereas emotion recognition aims to decode internally generated affective states arising from distributed limbic–cortical networks. At sensor level, EEG signals inherently exhibit low signal-to-noise ratio, high temporal variability, and nonlinear spatial dependencies across electrodes, which further degrade the reliability of affective decoding. Conventional feature extraction techniques capture local time–frequency information but fail to preserve interelectrode spatial topology, leading to poor generalization across subjects and sessions and limiting real-time embedded deployment. To address these challenges, this work introduces a graph signal processing-based Laplacian energy (LE) feature extraction framework integrated with lightweight machine learning classifiers, explicitly modeling spatial–topological dependencies among EEG channels and enabling efficient, interpretable, and real-time affective state recognition across multiple frequency bands. EEG features are classified using random forest, support vector machine, decision tree, logistic regression, K-nearest neighbors, and light gradient boosting machine, achieving 100% accuracy with cross-validation mean accuracy above 99.89%. 
Implemented on the field-programmable gate array (FPGA) python productivity for Zynq UltraScale+MPSoCs (PYNQ-ZU) platform, the system demonstrates 6–8 mW power consumption and submillisecond latency. In contrast, the proposed LE + LR model achieves 100% accuracy with only 0.007 W power and 0.08 ms latency—representing a 20×–500× gain in power efficiency and a 10×–2000× latency reduction over existing FPGA-based methods.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Camera sensors often struggle to capture images in low-light conditions, leading to reduced brightness, contrast, and color fidelity, and to increased noise that degrades performance. Many image-enhancement methods exist, but they often involve slow processing and blur the image, making them imperfect for real-world scenarios. This letter presents the first Y2O3-based transmission-gate memristor comparator-based median filter for on-sensor image enhancement in biomedical imaging systems, such as X-ray, computed tomography (CT), and magnetic resonance imaging, designed using Verilog-A. The system performs front-end noise suppression directly at the sensor output stage, effectively removing salt-and-pepper noise introduced during signal acquisition. The denoised images were reconstructed in MATLAB, and performance was evaluated using quality metrics such as peak signal-to-noise ratio (PSNR), mean squared error (MSE), and mean absolute error (MAE). The proposed filter outperformed traditional methods, such as the adaptive median filter, switching median filter, and threshold and weighted median filter, achieving PSNR values of 46.36 dB for brain CT and 43.84 dB for COVID-19 X-ray, alongside reduced MSE and MAE values of 1.5 and 29.53 for brain CT and 2.67 and 43.84 for COVID-19 X-ray, respectively. The findings indicate the potential of memristor-based filters for next-generation biomedical sensors.
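To make the operations concrete, here is a plain software sketch of the two ingredients the letter combines: a 3×3 median filter (which the memristor comparator network realizes in hardware) and the PSNR metric. This is not the Verilog-A design; the 5×5 image is a toy example:

```python
import math
from statistics import median

# Hedged sketch: software 3x3 median filtering and PSNR, illustrating the
# salt-and-pepper removal and the quality metric used in the letter.
def median3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]            # borders copied through
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out

def psnr(ref, test, peak=255.0):
    n = len(ref) * len(ref[0])
    mse = sum((r - t) ** 2 for rr, tt in zip(ref, test)
              for r, t in zip(rr, tt)) / n
    return 10.0 * math.log10(peak * peak / mse) if mse else float("inf")

clean = [[100] * 5 for _ in range(5)]
noisy = [row[:] for row in clean]
noisy[2][2] = 255                            # one "salt" impulse
print(psnr(clean, median3x3(noisy)))         # impulse fully removed
```

A single impulse is completely rejected by the median (infinite PSNR here); real images with dense noise give the finite PSNR values the letter reports.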
{"title":"Integrating Memristor-Based Median Filtering at the Sensor Front End for Biomedical Image Enhancement","authors":"Lokesh Kumar Hindoliya;Mangal Das;Kumari Jyoti;Animesh Paul;Mohit Kumar;Saurabh Yadav;Ram Bilas Pachori;Shaibal Mukherjee","doi":"10.1109/LSENS.2026.3658617","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3658617","url":null,"abstract":"Camera sensors often struggle to capture images in low-light conditions, leading to reduced brightness, contrast, and color fidelity, and increased noise that degrades the performance. Many methods have emerged for image enhancement but they often require slow processing and blur image, making them imperfect for real-world scenarios. This letter presents the first-ever Y<sub>2</sub>O<sub>3</sub>-based transmission gate memristor comparator-based median filter for on-sensor image enhancement in biomedical imaging systems, such as X-ray, computed tomography (CT), and magnetic resonance imaging, designed using Verilog-A. The current system performs front-end noise suppression directly at the sensor output stage, effectively removing salt-and-pepper noise that is introduced during signal acquisition from sensors. The denoised images were reconstructed in MATLAB, and performance was evaluated using quality assessment metrics such as peak signal-to-noise ratio (PSNR), mean squared error (MSE), and mean absolute error (MAE). The proposed filter demonstrated superior performance compared to traditional methods, such as adaptive median filter, switch median, and threshold and weighted median filter, achieving PSNR values of 46.36 dB for brain CT and 43.84 dB for COVID-19 X-ray, alongside reduced MSE and MAE values of 1.5 and 29.53 for brain CT and 2.67 and 43.84 for COVID-19 X-ray, respectively. 
The findings indicate the potential of memristor-based filters for next-generation biomedical sensors.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147299502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-28  DOI: 10.1109/LSENS.2026.3658326
Yanyan Shi;Dongyang Wang;Luanjun Wang;Meng Wang;Feng Fu
Identification of lesions in the lung through sensors is of great importance. In this letter, a new method based on a hybrid temporal convolutional network (TCN)–bidirectional long short-term memory (BiLSTM) model is proposed to identify lesions with electrical impedance tomography (EIT). Unlike traditional methods that rely on reconstructed images, the proposed method avoids the image-reconstruction step entirely. To differentiate the subtle voltage variations between lesion types in the boundary measurements made by the sensors, the measured voltage data are processed by multiscale feature extraction and bidirectional temporal modeling. The performance of the proposed method is compared with that of TCN–LSTM and TCN models. The results show that the identification accuracy reaches 99% under noise-free conditions and remains above 97% at a signal-to-noise ratio of 40 dB, outperforming the comparison models. This approach provides an alternative for lesion detection in the lung with EIT.
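As a hedged illustration of the multiscale temporal modeling the TCN branch provides, the core operation is a dilated causal convolution: larger dilations widen the receptive field without adding parameters. The kernel and input below are invented for the example and do not come from the letter:

```python
# Hedged sketch: dilated causal 1-D convolution, the building block of a
# TCN. y[t] depends only on x[t], x[t-d], x[t-2d], ... (causal), and the
# dilation d controls the temporal scale. Weights/input are illustrative.
def causal_dilated_conv(x, w, dilation):
    """y[t] = sum_j w[j] * x[t - j*dilation], zero-padded on the left."""
    return [sum(w[j] * (x[t - j * dilation] if t - j * dilation >= 0 else 0.0)
                for j in range(len(w)))
            for t in range(len(x))]

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]           # toy boundary-voltage sequence
print(causal_dilated_conv(x, [0.5, 0.5], dilation=1))  # fine temporal scale
print(causal_dilated_conv(x, [0.5, 0.5], dilation=2))  # coarser scale
```

Stacking such layers with dilations 1, 2, 4, ... yields the multiscale features that are then passed to the bidirectional recurrent stage.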
{"title":"A Hybrid Deep Learning Method for Lesion Identification With Electrical Impedance Tomography","authors":"Yanyan Shi;Dongyang Wang;Luanjun Wang;Meng Wang;Feng Fu","doi":"10.1109/LSENS.2026.3658326","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3658326","url":null,"abstract":"Identification of lesion in the lung through sensors is of great importance. In this letter, a new method based on a hybrid temporal convolutional network (TCN)–bidirectional long short-term memory network (BiLSTM) model is proposed to identify the lesion with electrical impedance tomography (EIT). Unlike traditional methods that rely on reconstructed images, the process of image reconstruction is avoided in the proposed method. To differentiate subtle voltage variations in the boundary measurement by sensors between different types, the measured voltage data are processed by multiscale feature extraction and bidirectional temporal modeling. The performance of the proposed method is compared to that of TCN–LSTM and TCN models. The results show that the identification accuracy reaches 99% under noise-free conditions and is higher than 97% at signal-to-noise ratio of 40 dB, outperforming the comparison models. This approach provides an alternative for lesion detection in the lung with EIT.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-28  DOI: 10.1109/LSENS.2026.3658427
Gang Zhao;Liwen Chen;Liangpeng Gao;Xiaochun Cheng
To address the challenges of degraded positioning accuracy, drift, or complete failure in environments where satellite signals are obstructed (e.g., basements, tunnels, canyons, forests, mountainous regions, and urban high-rise buildings), this letter proposes a navigation and positioning algorithm for complex terrains by integrating pseudolites with time difference of arrival (TDOA) and trilateration techniques. First, to enhance the anti-interference capability and positioning accuracy of low-cost satellite receivers in conventional integrated navigation systems, we improve robustness and precision through the fusion of global navigation satellite system (GNSS) and inertial measurement unit (IMU) data. At the front-end processing stage, the algorithm calculates the relative positions and time differences between multiple pseudolites and receivers while integrating absolute position data derived from trilateration for state estimation, thereby providing accurate initial pose initialization for the back-end module. Subsequently, the back end employs an extended Kalman filter to fuse data from wheel odometry, GNSS, and IMU, optimizing the algorithm's accuracy and global consistency. Finally, the proposed algorithm is validated in high-dynamic motion scenarios and a comprehensive campus environment.
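A hedged sketch of the trilateration step: with three pseudolites at known positions, subtracting the first range equation from the others linearizes the problem into a 2×2 system. The anchor positions and ranges below are synthetic; the letter's full TDOA/EKF fusion pipeline is not reproduced:

```python
# Hedged sketch: 2-D trilateration from exactly three anchors by
# linearizing the range equations about the first anchor. Synthetic data;
# not the letter's pseudolite geometry.
def trilaterate(anchors, ranges):
    (x1, y1), r1 = anchors[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        rows.append((2 * (xi - x1), 2 * (yi - y1)))
        rhs.append(r1**2 - ri**2 + xi**2 - x1**2 + yi**2 - y1**2)
    (a11, a12), (a21, a22) = rows            # solve the 2x2 system by Cramer
    det = a11 * a22 - a12 * a21
    return ((rhs[0] * a22 - a12 * rhs[1]) / det,
            (a11 * rhs[1] - rhs[0] * a21) / det)

# Pseudolites at known points; true receiver position (2, 1).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (2.0, 1.0)
ranges = [((ax - true_pos[0])**2 + (ay - true_pos[1])**2) ** 0.5
          for ax, ay in anchors]
print(trilaterate(anchors, ranges))
```

With more than three anchors the same rows would be stacked and solved by least squares, and the result used to initialize the back-end filter.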
{"title":"A Pseudolite-Aided Navigation and Positioning Method for Complex Terrain Environments","authors":"Gang Zhao;Liwen Chen;Liangpeng Gao;Xiaochun Cheng","doi":"10.1109/LSENS.2026.3658427","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3658427","url":null,"abstract":"To address the challenges of degraded positioning accuracy, drift, or complete failure in environments where satellite signals are obstructed (e.g., basements, tunnels, canyons, forests, mountainous regions, and urban high-rise buildings), this letter proposes a navigation and positioning algorithm for complex terrains by integrating pseudolites with time difference of arrival and trilateration techniques. First, to enhance the antiinterference capability and positioning accuracy of low-cost satellite receivers in conventional integrated navigation systems, we improve robustness and precision through the fusion of global navigation satellite system (GNSS) and inertial measurement unit (IMU) data. At the front-end processing stage, the algorithm calculates the relative positions and time differences between multiple pseudolites and receivers while integrating absolute position data derived from trilateration for state estimation, thereby providing accurate initial pose initialization for the back-end module. Subsequently, the back end employs an extended Kalman filter to fuse data from wheel odometry, GNSS, and IMU, optimizing the algorithm's accuracy and global consistency. Finally, the proposed algorithm is validated in high-dynamic motion scenarios and a comprehensive campus environment. 
Experimental results demonstrate that, compared to mainstream GNSS/IMU fusion methods and LiDAR-based simultaneous localization and mapping algorithms, the proposed algorithm achieves superior positioning accuracy (with a root-mean-square error reduction of 58%–72% in occluded scenarios) and exhibits enhanced robustness in aggressive motion conditions.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146116929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-27  DOI: 10.1109/LSENS.2026.3658024
Sevendi Eldrige Rifki Poluan;Yan-Ann Chen
In privacy-sensitive scenarios where traditional biometric cues, such as faces or voices, are unavailable, personal identification becomes a significant challenge. This work presents a cross-modal approach that combines lower body skeletal data, captured by an RGB camera, with foot pressure distributions collected from smart insoles. The use of these nonintrusive sensors enables identity recognition without compromising user privacy. The two modalities are encoded into a unified three-channel image and processed using a deep neural architecture that integrates a pretrained VGG16 and a long short-term memory network to learn cross-modal similarity. Identity matching is formulated as a bipartite graph problem, where similarity scores guide the pairing of anonymous skeletal data with ID-tagged insole readings. Experiments show enhanced performance, with pairing accuracy rising from 76.4% to 90.4%, and user identification rates doubling compared to a K-nearest neighbors baseline under long-duration monitoring.
{"title":"Cross-Modal Matching of Lower Body Skeleton and Insole Pressure for Identity Recognition","authors":"Sevendi Eldrige Rifki Poluan;Yan-Ann Chen","doi":"10.1109/LSENS.2026.3658024","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3658024","url":null,"abstract":"In privacy-sensitive scenarios where traditional biometric cues, such as faces or voices, are unavailable, personal identification becomes a significant challenge. This work presents a cross-modal approach that combines lower body skeletal data, captured by an RGB camera, with foot pressure distributions collected from smart insoles. The use of these nonintrusive sensors enables identity recognition without compromising user privacy. The two modalities are encoded into a unified three-channel image and processed using a deep neural architecture that integrates a pretrained VGG16 and a long short-term memory network to learn cross-modal similarity. Identity matching is formulated as a bipartite graph problem, where similarity scores guide the pairing of anonymous skeletal data with ID-tagged insole readings. Experiments show enhanced performance, with pairing accuracy rising from 76.4% to 90.4%, and user identification rates doubling compared to a baseline K-nearest neighbors under long-duration monitoring.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-26  DOI: 10.1109/LSENS.2026.3657973
Prajjwal Shukla;Rahul Gond;Brajesh Rawat
In this letter, we report a flexible MoS₂/valinomycin-based sensor for in situ K⁺ detection in soil samples. The fabricated sensor, realized on a flexible PET substrate using scalable ink-dispensing techniques, exhibits a wide detection range of 1–100 mM with high linearity (R² = 0.9975) and sensitivities of 5.6 µA/mM in analyte solution and 2.1 µA/mM in soil samples. More importantly, cyclic voltammetry analysis reveals stable and reversible oxidation–reduction behavior across repeated cycles, with excellent reproducibility across multiple sensor replicas. The fabricated sensor uniquely combines soil compatibility, flexibility, reproducibility, and cost-effective fabrication, addressing the critical gap between laboratory sensing technologies and field-deployable soil nutrient monitoring. These results establish the MoS₂/valinomycin sensor as a robust and scalable platform for precision agriculture, with the potential to advance real-time nutrient management and promote sustainable farming practices.
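The reported sensitivity and linearity are, respectively, the slope and R² of an ordinary least-squares fit of current against concentration. A hedged sketch of that calibration step; the concentration/current pairs below are synthetic, not the letter's data (which gives R² = 0.9975 and 5.6 µA/mM in solution):

```python
# Hedged sketch: extracting sensitivity (slope) and linearity (R^2) from a
# concentration-current calibration by ordinary least squares.
# Synthetic data chosen to sit near ~5.5 uA/mM for illustration.
def linfit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

conc = [1.0, 10.0, 50.0, 100.0]       # K+ concentration, mM (synthetic)
curr = [6.5, 54.0, 285.0, 552.0]      # sensor current, uA (synthetic)
slope, intercept, r2 = linfit(conc, curr)
print(round(slope, 2), round(r2, 4))
```

The same fit applied to soil-sample data would give the lower 2.1 µA/mM figure, reflecting matrix effects in soil.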
{"title":"Flexible MoS$_{2}$-Based Ion-Selective Sensor With Valinomycin Membrane for In Situ Detection of Soil Potassium (K$^+$) Ions","authors":"Prajjwal Shukla;Rahul Gond;Brajesh Rawat","doi":"10.1109/LSENS.2026.3657973","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3657973","url":null,"abstract":"In this letter, we report the flexible MoS<inline-formula><tex-math>$_{2}$</tex-math></inline-formula>/valinomycin-based sensor for in situ <inline-formula><tex-math>$rm {K^+}$</tex-math></inline-formula> detection in the soil sample. The fabricated sensor, realized on a flexible PET substrate using scalable ink-dispensing techniques, exhibits a wide detection range of 1–100 mM with high linearity (<inline-formula><tex-math>$R^{2}$</tex-math></inline-formula> = 0.9975) and sensitivities of 5.6 <inline-formula><tex-math>$mu$</tex-math></inline-formula>A/mM in analyte solution and 2.1 <inline-formula><tex-math>$mu$</tex-math></inline-formula>A/mM in soil sample. More importantly, cyclic voltammetry analysis reveals stable and reversible oxidation–reduction behavior across repeated cycles, with excellent reproducibility in multiple sensor replicas. The fabricated sensor uniquely combines soil compatibility, flexibility, reproducibility, and cost-effective fabrication, which addresses the critical gap between laboratory sensing technologies and field-deployable soil nutrient monitoring. 
These results establish the MoS<inline-formula><tex-math>$_{2}$</tex-math></inline-formula>/valinomycin sensor as a robust and scalable platform for precision agriculture, with the potential to advance real-time nutrient management and promote sustainable farming practices.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-23  DOI: 10.1109/LSENS.2026.3657441
Zhenjie Wang;Bhaskar Choubey
This letter presents a compact active quenching and recharge (AQR) circuit for single-photon avalanche diode (SPAD) pixels targeting high-speed 3-D integration. Implemented in a 180 nm complementary metal-oxide-semiconductor (CMOS) process, the proposed AQR requires only 12 transistors and occupies an area of 12 µm × 9 µm. The pixel layout includes a dedicated passivation opening for future low-cost SPAD deposition via plasma-enhanced chemical vapor deposition, achieving a pixel size of 23 µm × 23 µm with a fill factor of approximately 43%. Electrical characterization using field-programmable gate array (FPGA)-based tristate excitation and a CMOS SPAD confirms correct quenching and recharge behavior, with an externally tunable dead time for system-level flexibility. Compared with recent state-of-the-art implementations, the proposed design demonstrates a smaller area and faster simulated response, indicating its potential for large-scale SPAD arrays and future 3-D-integrated imaging systems.
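Dead time matters because it caps the photon count rate; for a non-paralyzable detector the standard correction relating measured rate m to true rate n is m = n/(1 + nτ). A hedged sketch of inverting that relation; the 50 ns value is illustrative, since the letter reports the dead time as externally tunable without fixing a number here:

```python
# Hedged sketch: non-paralyzable dead-time correction for photon counting,
# inverting m = n / (1 + n*tau) to recover the true incident rate n from
# the measured rate m. The 50 ns dead time is an assumed example value.
def true_rate(measured_hz, dead_time_s):
    return measured_hz / (1.0 - measured_hz * dead_time_s)

m = 5e6                   # measured counts/s
tau = 50e-9               # assumed 50 ns dead time
print(round(true_rate(m, tau)))
```

Shortening (or tuning) the dead time pushes the saturation rate 1/τ higher, which is one practical payoff of a fast AQR circuit.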
{"title":"A Compact Active Quenching and Recharge Circuit for 3D-Integrated SPAD Pixels","authors":"Zhenjie Wang;Bhaskar Choubey","doi":"10.1109/LSENS.2026.3657441","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3657441","url":null,"abstract":"This letter presents a compact active quenching and recharge (AQR) circuit for single-photon avalanche diode (SPAD) pixels targeting high-speed 3-D integration. Implemented in a 180 nm complementary metal-oxide-semiconductor (CMOS) process, the proposed AQR requires only 12 transistors, occupies an area of 12 µm × 9 µm. The pixel layout includes a dedicated passivation opening for future low-cost SPAD deposition via plasma-enhanced chemical vapor deposition, achieving a pixel size of 23 µm × 23 µm with a fill factor of approximately 43%. Electrical characterization using field-programmable gate array (FPGA)-based tristate excitation and a CMOS SPAD confirms correct quenching and recharge behavior, with externally tunable dead time for system-level flexibility. Compared with recent state-of-the-art implementations, the proposed design demonstrates a smaller area and faster simulated response, indicating its potential for large-scale SPAD arrays and future 3D-integrated imaging systems.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-23  DOI: 10.1109/LSENS.2026.3657103
Gobinath Kaliyaperumal;Karthick P A
Prolonged use of electronic gadgets and a sedentary lifestyle lead to overuse of the neck and shoulder muscles, which initially results in fatigue and can later develop into musculoskeletal disorders (MSDs). Muscle fatigue is therefore considered an important precursor to MSDs. Surface electromyography (sEMG) is widely used for fatigue assessment; however, its analysis is challenging due to its multicomponent and nonstationary behavior. Moreover, the time-varying frequency characteristics of the neck and shoulder muscles are not well established under dynamic contractions. In this letter, the nonstationary characteristics of neck and shoulder sEMG are analyzed using the superlet transform (SLT) and a hybrid lightweight convolutional neural network (CNN)-extreme gradient boosting (XGBoost) algorithm. For this purpose, wireless sEMG signals were collected from the sternocleidomastoid, splenius capitis, and trapezius muscles of 50 healthy volunteers using a standard protocol. The first, middle, and last ten seconds of the signals are treated as the nonfatigue, transition, and fatigue zones, respectively. The signals were preprocessed and subjected to the SLT to analyze the time-varying frequency components. Four features, namely, median frequency, mean frequency, instantaneous frequency, and energy, were extracted and used to design the hybrid lightweight CNN-XGBoost model. The results show that the SLT effectively represents the time–frequency variations of the signals. All features are found to be distinct across the three conditions in all muscles (p < 0.05). Importantly, the proposed lightweight model detects fatiguing contractions with an overall accuracy of 90.8% and an F1-score of 90.2%. These findings suggest that the combination of an advanced time–frequency approach, the SLT, with a lightweight CNN-XGBoost model could be useful for real-time monitoring aimed at preventing MSDs.
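Of the four features, median frequency (MDF) is the classic fatigue marker: it is the frequency that splits total spectral power in half, and it shifts downward as a muscle fatigues. A hedged sketch on a toy discretized spectrum; the letter computes it from SLT output, which is not reproduced here:

```python
# Hedged sketch: median frequency (MDF) of a discretized power spectrum,
# i.e., the first bin at which cumulative power reaches half of the total.
# Toy band powers below, not SLT output from the letter.
def median_frequency(freqs, power):
    total = sum(power)
    acc = 0.0
    for f, p in zip(freqs, power):
        acc += p
        if acc >= total / 2.0:
            return f
    return freqs[-1]

freqs = [10, 20, 30, 40, 50]        # Hz bins
power = [1.0, 4.0, 3.0, 1.5, 0.5]   # toy sEMG band powers
print(median_frequency(freqs, power))
```

Tracking MDF over the nonfatigue, transition, and fatigue zones would show the characteristic downward drift that the classifier exploits.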
{"title":"Detection of Neck and Shoulder Muscle Fatiguing Contractions Using Superlet Transform of Wireless Electromyography Measurements and Lightweight CNN","authors":"Gobinath Kaliyaperumal;Karthick P A","doi":"10.1109/LSENS.2026.3657103","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3657103","url":null,"abstract":"Prolonged use of electronic gadgets and a sedentary lifestyle lead to overuse of neck and shoulder muscles, which initially results in fatigue, and can later develop into musculoskeletal disorders (MSDs). Therefore, muscle fatigue is considered an important precursor to MSD. Surface electromyography (sEMG) is widely used for fatigue assessment; however, its analysis is challenging due to its multicomponent and nonstationary behavior. Moreover, the time-varying frequency characteristics of neck and shoulder muscles are not established well under dynamic contractions. In this letter, the nonstationary characteristics of neck and shoulder sEMG are analyzed using the superlet transform (SLT) and a hybrid lightweight convolutional neural network (CNN)-extreme gradient boosting (XGBoost) algorithm. For this purpose, wireless sEMG signals were collected from sternocleidomastoid, splenius capitis, and trapezius muscles of 50 healthy volunteers using a standard protocol. The first, middle, and last ten seconds of the signals are considered as nonfatigue, transition, and fatigue zone, respectively. The signals were preprocessed and subjected to SLT for analyzing the time-varying frequency components. Four features, namely, median frequency, mean frequency, instantaneous frequency, and energy, were extracted and used to design hybrid lightweight CNN-XGBoost model. The results show that the proposed SLT effectively represents the time–frequency variations of signals. All features are found to be distinct across the three conditions in all muscles (<italic>p</i> < 0.05). 
Importantly, the proposed lightweight model detects the fatiguing contractions with an overall accuracy of 90.8% and an F1-score of 90.2%. These findings suggest that the combination of advanced time–frequency approach, SLT, and a lightweight CNN-XGBoost could be useful for real-time monitoring aimed at preventing MSDs.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-23, DOI: 10.1109/LSENS.2026.3657220
Sudip Modak;Suman Halder;Soumya Chatterjee
This letter presents a novel electroencephalography (EEG) rhythm-based motor imagery (MI) classification framework employing a multilayer weighted visibility graph (WVG) and deep learning. In this study, multichannel EEG signals corresponding to different MI classes were acquired from various subjects by placing sensors on the scalp. The acquired EEG signals were segmented into 2-s overlapping windows and decomposed into five frequency subbands (EEG rhythms). For each rhythm, a multilayer functional brain network was constructed using the WVG, and the resulting functional brain connectivity matrices were mapped into RGB images, which were then classified with a lightweight custom ConvNeXt model. In this work, subject-independent evaluation was performed using a leave-one-subject-out protocol with stratified tenfold cross-validation. Experimental validation on two datasets, BCI Competition IV-2a and the High Gamma Dataset (HGD), yielded accuracies of 90.20% and 96.1%, respectively. The analysis reveals physiologically meaningful connectivity patterns, such as contralateral β-band connectivity in BCI IV-2a and enhanced interhemispheric γ-band integration in HGD. Ablation studies and benchmark comparisons confirmed that the proposed framework achieves high classification accuracy, demonstrating its efficiency and potential for robust, real-time, subject-independent MI–BCI applications.
{"title":"A Novel Multilayer Functional Brain Connectivity-Based Motor Imagery Classification Model Using EEG Sensor Data","authors":"Sudip Modak;Suman Halder;Soumya Chatterjee","doi":"10.1109/LSENS.2026.3657220","DOIUrl":"https://doi.org/10.1109/LSENS.2026.3657220","url":null,"abstract":"This letter presents a novel electroencephalography (EEG) rhythm-based motor imagery (MI) classification framework employing multilayer weighted visibility graph (WVG) and deep learning. In this study, multichannel EEG signals corresponding to different MI classes were acquired from various subjects by placing sensors on the scalp. The acquired EEG signals were initially decomposed into five frequency subbands (EEG rhythms), by segmenting into 2-s overlapping windows. For each rhythm, a multilayer functional brain network has been constructed utilizing WVG, and the resulting functional brain connectivity matrices were mapped into RGB images, which were further classified through a lightweight custom ConvNeXt model. In this work, subject independent evaluation was performed using a leave-one-subject-out protocol with stratified tenfold cross-validation. Experimental validation on two datasets, BCI Competition IV-2a and High Gamma Dataset (HGD), yielded accuracies of 90.20% and 96.1%, respectively. The analysis reveals physiological meaningful connectivity patterns, such as contralateral <italic>β</i>-band connectivity in BCI IV-2a and enhanced interhemispheric <italic>γ</i>-band integration in HGD. 
Ablation studies and benchmark comparison confirmed that the proposed framework achieved high classification accuracy, demonstrating the efficiency and potential for robust, real-time subject-independent MI–BCI applications.","PeriodicalId":13014,"journal":{"name":"IEEE Sensors Letters","volume":"10 3","pages":"1-4"},"PeriodicalIF":2.2,"publicationDate":"2026-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146175670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
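The visibility-graph mapping at the core of the record above can be sketched for a single channel. The snippet below builds a natural visibility graph (an edge joins two samples when no intermediate sample blocks the straight line between them) and weights each edge by the arctangent of its connecting slope; the abstract does not state the letter's exact weighting convention, so that choice, and the function name `weighted_visibility_graph`, are assumptions for illustration.

```python
import numpy as np

def weighted_visibility_graph(series):
    """Weighted natural visibility graph of a 1-D time series.

    Nodes are sample indices; samples a < b are connected when every
    intermediate sample c lies strictly below the line joining them.
    Edge weight: arctan of the connecting slope (one common convention;
    the letter's exact weighting scheme is not given in the abstract).
    Returns {(a, b): weight} for all visible pairs.
    """
    n = len(series)
    edges = {}
    for a in range(n):
        for b in range(a + 1, n):
            # Visibility criterion of the natural visibility graph.
            visible = all(
                series[c] < series[b] + (series[a] - series[b]) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                slope = (series[b] - series[a]) / (b - a)
                edges[(a, b)] = float(np.arctan(slope))
    return edges

# Tiny example: sample 1 (value 3) blocks sample 0's view of samples 2 and 3.
edges = weighted_visibility_graph([1.0, 3.0, 2.0, 4.0])
```

In the letter's pipeline, one such adjacency (connectivity) matrix per EEG rhythm and channel pair would then be rendered into the RGB images fed to the ConvNeXt classifier.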