Pub Date: 2026-01-30 | DOI: 10.1109/JSEN.2026.3657850
Adrian Gheorghiu;Tunc Alkanat;Ashish Pandharipande
Radar is a core sensor modality for scene perception toward higher levels of autonomy in automotive driving. A common occurrence in automotive radars is clutter: detections of nonexistent moving objects that can adversely impact target detection and classification performance and subsequent driving actions. We propose lightweight tiny graph transformer network (TGTNet) models for classifying clutter from stationary and moving targets in the scene. Performance evaluation on the public RadarScenes dataset shows that our proposed TGTNet models achieve classification performance similar to state-of-the-art models in precision and recall, with one to two orders of magnitude smaller model size and significantly faster inference.
"Lightweight Graph Transformers for Clutter and Target Classification in Automotive Radar," IEEE Sensors Journal, vol. 26, no. 6, pp. 9339-9346.
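The TGTNet architecture itself is not detailed in this abstract. As a generic, hypothetical illustration of the kind of attention such graph transformers apply over radar detections, a single-head self-attention pass restricted to graph edges can be sketched as follows (all names, shapes, and the toy graph are illustrative assumptions, not the paper's model):

```python
import numpy as np

def graph_attention(x, adj):
    """Single-head scaled dot-product attention restricted to graph edges.

    x   : (n, d) node features, e.g., one row per radar detection
    adj : (n, n) binary adjacency, 1 where two detections are neighbors
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)             # pairwise attention logits
    scores = np.where(adj > 0, scores, -1e9)  # mask out non-edges
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)         # row-wise softmax over neighbors
    return w @ x                              # aggregate neighbor features

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 4))               # 5 detections, 4 features each
adj = np.ones((5, 5))                         # toy fully connected graph
out = graph_attention(feats, adj)
```

Keeping such layers small (few heads, low feature width) is what makes "tiny" graph transformers attractive for embedded automotive compute.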
Pub Date: 2026-01-29 | DOI: 10.1109/JSEN.2026.3656322
Minh Long Hoang
Accurate fall direction recognition is essential for developing effective fall prevention and intervention systems, yet it remains challenging due to the subtle differences in motion patterns across fall types. This research proposes a stagewise optimization framework for fall direction recognition (SOFFDR), which systematically enhances classification performance through four sequential stages: 1) classifier selection via $K$-fold cross-validation over ten candidate algorithms; 2) determination of the best filtering method; 3) optimal window time tracking for segment-based feature extraction with Shapley additive explanations (SHAP) analysis; and 4) final classification using the best parameter combination from all previous stages. The framework was evaluated on wearable inertial measurement unit (IMU) data and compared against a traditional feature vector approach in which each recording is treated as a single instance. The feature vector method achieved an accuracy of 71% (macro $F1$-score = 0.72), with significant misclassifications between similar fall types. In contrast, the proposed SOFFDR system achieved 100% accuracy and perfect precision, recall, and $F1$-scores across all fall categories. These results highlight the critical role of systematic stagewise optimization, temporal segmentation, and filtering in enhancing fall direction recognition performance from wearable sensor data. The proposed framework demonstrates its potential for high-precision fall monitoring applications in healthcare and assisted living environments.
"Stagewise Optimization Framework for Fall Direction Recognition From Wearable Sensor Data Based on Machine Learning," IEEE Sensors Journal, vol. 26, no. 5, pp. 7755-7769. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11368688
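Stage 1 of such a pipeline (classifier selection by $K$-fold cross-validation) can be sketched without any ML framework. The two toy classifiers and the synthetic two-class data below are illustrative assumptions, not the paper's ten candidate algorithms:

```python
import numpy as np

def nearest_centroid(Xtr, ytr, Xte):
    """Predict the class whose training centroid is closest."""
    labels = np.unique(ytr)
    C = np.stack([Xtr[ytr == c].mean(axis=0) for c in labels])
    return labels[np.argmin(((Xte[:, None] - C) ** 2).sum(-1), axis=1)]

def one_nn(Xtr, ytr, Xte):
    """Predict the label of the single nearest training sample."""
    d = ((Xte[:, None] - Xtr[None]) ** 2).sum(-1)
    return ytr[np.argmin(d, axis=1)]

def kfold_accuracy(clf, X, y, k=5):
    """Mean held-out accuracy over k contiguous folds."""
    idx = np.arange(len(X))
    accs = []
    for fold in np.array_split(idx, k):
        tr = np.setdiff1d(idx, fold)
        accs.append((clf(X[tr], y[tr], X[fold]) == y[fold]).mean())
    return float(np.mean(accs))

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(3, 1, (40, 3))])
y = np.repeat([0, 1], 40)
perm = rng.permutation(len(X))
X, y = X[perm], y[perm]

scores = {name: kfold_accuracy(clf, X, y)
          for name, clf in [("centroid", nearest_centroid), ("1nn", one_nn)]}
best = max(scores, key=scores.get)   # winner proceeds to the next stage
```

The later stages (filter choice, window length with SHAP, final combination) would wrap the same cross-validated scoring around each additional hyperparameter.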
Sorting discarded fabrics is a critical yet challenging task in textile recycling due to the diversity of material types and surface textures. We present a vision–tactile robotic system leveraging multimodal sensing to enable accurate fabric recognition and adaptive grasping. The system employs a stereo RGB camera with MobileNet-SSD on the Myriad X chip for coarse object detection and 3-D localization, achieving a mean average precision (mAP50) of 93.50% at 23 FPS. For fine-grained texture classification, tactile images are processed by a lightweight MobileNetV3-Textile model on NVIDIA Jetson Orin, achieving 27.3 FPS with 8.5-ms inference latency. Two complementary datasets were constructed: a visual dataset with 20 fabric categories for appearance-based classification and a tactile dataset with 191 categories capturing weaving patterns for precise texture discrimination. Sensor fusion is performed in real time, integrating visual and tactile modalities to enhance recognition accuracy and grasp reliability. A resource-constrained control unit manages tactile processing, gripper force modulation via optical flow, and sensor coordination. Experimental evaluation demonstrates that the proposed multimodal sensing approach significantly improves perception robustness and operational efficiency, providing a scalable solution for automated fabric handling in recycling. We release the dataset at https://github.com/AumnceLi/Visual-tactile-fabricdataset.git
"Vision–Tactile Sensor Fusion System for Fabric Sorting and Robotic Grasping in Textile Recycling," Jiayao Li;Yu Gao;Yijia Yan;Zhenke Li;Xin Wu;Jipeng Huang, IEEE Sensors Journal, vol. 26, no. 5, pp. 7645-7658.
Pub Date: 2026-01-27 | DOI: 10.1109/JSEN.2026.3656234
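The abstract does not specify the fusion rule. One common, minimal choice is a weighted sum of the per-class probabilities produced by the visual and tactile branches; the weight, class count, and probability vectors below are assumptions for illustration only:

```python
import numpy as np

def fuse(p_vis, p_tac, w_vis=0.5):
    """Late fusion: weighted sum of per-class probabilities from two modalities."""
    p = w_vis * p_vis + (1.0 - w_vis) * p_tac
    return p / p.sum()   # renormalize to a probability vector

p_vis = np.array([0.6, 0.3, 0.1])   # camera branch favors class 0
p_tac = np.array([0.2, 0.7, 0.1])   # tactile branch favors class 1
fused = fuse(p_vis, p_tac, w_vis=0.4)   # trust touch slightly more
pred = int(np.argmax(fused))
```

Weighting the tactile branch higher reflects that weave texture, not appearance, is the fine-grained discriminator in this setting.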
Pub Date: 2026-01-23 | DOI: 10.1109/JSEN.2026.3655108
Tunahan Timucin
In this study, a smart contract-based wireless cognitive radio sensor network is proposed to secure the monitoring process in precision agriculture using a deep learning algorithm. In wireless cognitive radio sensor networks, wireless secondary sensor nodes and the access point communicate opportunistically over the available spectrum without harming the primary users. The secondary sensor nodes and the access point use the frequency division multiple access (FDMA) technique to communicate over spectrum holes, also known as white space. Secondary sensor nodes collect humidity, pressure, and temperature values for environmental monitoring in precision agriculture. In addition to the additive white Gaussian noise (AWGN) channel, Rayleigh and Rician channels are modeled to account for distortions such as fading, weather, and noise in precision agriculture conditions. Sensor networks used for precision agriculture face challenges such as data integrity and spectrum scarcity. Our proposed sensor network utilizes blockchain-supported smart contracts to ensure secure data communication and dynamic spectrum access to improve communication quality. Due to incorrectly reported soil moisture levels, precision agriculture land can be subject to overirrigation or underirrigation. A deep learning-based smart contract system is used to distinguish between malicious and honest users. In some special cases, honest users may appear to be malicious due to distortion. Honest users are protected from being identified as malicious due to deteriorating parameters such as received signal strength indicator (RSSI) and SNR. Simulation results show that the proposed system achieves a detection probability of up to 92%, an average energy consumption of 1.13 J, and a detection efficiency of 64%. The rationality and applicability of the proposed sensor network for secure monitoring in precision agriculture are verified through comparative graphical results.
"Deep Learning-Based Cognitive Radio Sensor Network With Smart Contract for Precision Agriculture," IEEE Sensors Journal, vol. 26, no. 5, pp. 7806-7814.
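The paper's deep-learning detector is not reproduced here. As a baseline illustration of how a secondary node can decide whether a band is a spectrum hole, the classical energy detector compares received energy against a noise-only threshold; the Gaussian threshold approximation and the toy "signal" below are assumptions:

```python
import numpy as np

def energy_detect(samples, noise_var=1.0, z=3.0):
    """Classical energy detector for spectrum sensing.

    Under the noise-only hypothesis, the energy of n Gaussian samples has
    mean n*noise_var and std sqrt(2n)*noise_var; declare the band busy
    when the measured energy exceeds mean + z standard deviations.
    """
    n = len(samples)
    thresh = noise_var * (n + z * np.sqrt(2.0 * n))
    return float(np.sum(samples ** 2)) > thresh

rng = np.random.default_rng(2)
noise = rng.normal(0.0, 1.0, 1000)
busy = energy_detect(noise + 2.0)   # primary user present (toy DC "signal")
idle = energy_detect(noise)         # spectrum hole: noise only
```

A secondary node would transmit via FDMA only on bands flagged idle, which is the opportunistic access the abstract describes.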
Pub Date: 2026-01-23 | DOI: 10.1109/JSEN.2026.3655189
Jyoti;Tamal Pal
In the application of wireless multimedia sensor networks (WMSNs), wireless multimedia sensor (WMS) nodes generate a massive amount of multimedia data, such as images, audio, and video. However, this results in a significant amount of redundant multimedia data that requires enormous energy resources for processing and communication. Since randomly deployed nodes in the network have limited energy resources, they suffer from a short network lifetime due to the unnecessary energy consumption required for processing and communicating the redundant data. In this article, we propose a learning automata-based node scheduling (LANS) algorithm based on a learning automata (LA) technique to address the short network lifetime issue of multimedia sensor networks. This learning-based scheduling algorithm resides inside each node and tries to learn the optimal scheduling strategy to conserve energy resources. We also propose a subroutine called the redundancy measurement subalgorithm (RMS) that the proposed algorithm calls during its learning phase to find the redundancy level of the image data. The main objective of the proposed algorithm is to enable each node to learn its optimal action based on the redundancy level and the energy level, so that a node with highly redundant image data and a low energy level switches to the sleep state to save energy. The performance of the proposed algorithm is validated by results demonstrating its efficiency in scheduling nodes. It is found that the proposed scheduling algorithm achieves a 50.2% increase in network lifetime and a 27.8% decrease in average energy consumption compared to state-of-the-art algorithms.
"Learning Automata-Based Node Scheduling Algorithm in Multimedia Sensor Networks," IEEE Sensors Journal, vol. 26, no. 5, pp. 7815-7825.
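The abstract does not give the exact probability update. A standard choice for learning automata is the linear reward-inaction scheme, sketched below with an assumed two-action setting (stay awake vs. sleep); the learning rate and reward signal are illustrative:

```python
def lri_update(probs, action, reward, lr=0.1):
    """Linear reward-inaction update for a learning automaton.

    On reward, probability mass moves toward the chosen action; on
    penalty, the vector is left unchanged (the "inaction" part).
    """
    if not reward:
        return list(probs)
    return [p + lr * (1.0 - p) if i == action else p * (1.0 - lr)
            for i, p in enumerate(probs)]

probs = [0.5, 0.5]   # action 0: stay awake, action 1: sleep
# sleeping while holding highly redundant images saved energy -> reward
probs = lri_update(probs, action=1, reward=True)
```

In the LANS setting, the reward signal would come from the RMS redundancy level and the node's remaining energy, so nodes with redundant data learn to prefer sleeping.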
Pub Date: 2026-01-21 | DOI: 10.1109/JSEN.2026.3653786
Jingyun Xu;Chenghui Mo;Kexin Fang;Xinzhu Lin
Existing quality-relevant stationary subspace analysis methods suffer from significant performance degradation in the presence of outliers in industrial processes, primarily due to the failure of Gaussian distribution assumptions. In addition, these methods do not account for the dynamic correlations between adjacent time-series samples, leading to suboptimal model performance. To address these issues, this article proposes a dynamic robust projection to stationary subspace regression (DRPSSR) for quality prediction. First, process variables are decoupled into quality-relevant stationary, quality-relevant nonstationary, quality-irrelevant stationary, and quality-irrelevant nonstationary latent variables, thereby reducing interference from quality-irrelevant information in the dynamic transmission and prediction of quality-related information. Leveraging the heavy-tailed property of the t-distribution, the Gaussian distribution assumption for latent space parameters is replaced with a t-distribution to enhance the model's robustness to outliers. Furthermore, considering that long short-term memory (LSTM) networks balance long-term memory and short-term inputs through gating mechanisms and cell states, an LSTM is introduced to model historical quality-relevant stationary and nonstationary latent variables as state variables, enabling the propagation of dynamic information. Numerical simulations and an industrial case study on a debutanizer column demonstrate that the proposed model significantly improves prediction accuracy on industrial datasets containing outliers, validating its effectiveness and engineering applicability for soft sensor modeling in complex industrial processes.
"Dynamic Robust Projection to Stationary Subspace Regression for Quality Prediction," IEEE Sensors Journal, vol. 26, no. 5, pp. 7630-7644.
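Why the t-distribution helps can be made concrete with the standard iteratively reweighted least squares (IRLS) view of a Student-t noise model: each sample is reweighted by $w = (\nu + 1)/(\nu + r^2)$ for standardized residual $r$, so outliers receive small weights, while the Gaussian limit ($\nu \to \infty$) weights all samples equally. A minimal sketch, with the degrees of freedom $\nu$ an assumed value:

```python
def t_weight(residual, nu=4.0):
    """IRLS weight of a sample under a univariate Student-t noise model.

    Small residuals keep near-full weight ((nu + 1) / nu at r = 0);
    large residuals are strongly downweighted, which is what makes
    the model robust to outliers.
    """
    return (nu + 1.0) / (nu + residual ** 2)

w_inlier = t_weight(0.5)    # typical sample: close to full weight
w_outlier = t_weight(10.0)  # gross outlier: almost ignored
```

Under a Gaussian assumption both samples would pull on the latent-space estimate equally, which is exactly the failure mode the abstract describes.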
Pub Date: 2026-01-21 | DOI: 10.1109/JSEN.2026.3654268
Liam Rees;Tunc Alkanat;Nitin Jonathan Myers;Ashish Pandharipande
We consider the problem of generating automotive radar super-resolution maps from low-resolution radar maps and camera images. This problem is relevant in automotive driving for synthetic sensor data generation to support improved environmental perception. We propose a radar super-resolution sensing approach based on multimodal data fusion between low-resolution radar range-azimuth (RA) maps and aligned camera images. Our method employs a U-Net-based autoencoder architecture enhanced with visual features extracted from a pretrained ResNet50 encoder, enabling the model to generate high-resolution RA maps that approximate ground truth radar data. We evaluate the proposed method on the RADIal and RaDICaL datasets, which cover diverse driving environments and radar configurations. Quantitative and qualitative results demonstrate that our approach outperforms a baseline model and prior state-of-the-art methods, particularly in resolving fine spatial details in scenarios with closely spaced vehicles and pedestrians.
"Automotive Radar Super-Resolution Sensing With Deep Camera Fusion," IEEE Sensors Journal, vol. 26, no. 5, pp. 7838-7846.
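Super-resolved maps like these are typically scored against ground-truth high-resolution maps with pixelwise metrics. The abstract does not name the paper's metrics, so as an illustrative stand-in, PSNR on a synthetic RA map is shown (map size and noise level are assumptions):

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio between a reference map and an estimate."""
    mse = np.mean((ref - est) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

rng = np.random.default_rng(3)
hi_res = rng.random((64, 64))                         # stand-in ground-truth RA map
estimate = hi_res + rng.normal(0.0, 0.01, hi_res.shape)  # near-perfect reconstruction
score = psnr(hi_res, estimate)                        # around 40 dB for 1% noise
```

Higher PSNR indicates the generated map is closer to the ground-truth radar data; structural metrics such as SSIM are a common complement when fine spatial detail matters.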
Pub Date: 2026-01-19 | DOI: 10.1109/JSEN.2026.3653957
Po-Yen Lin;Shih-Wei Lo;Ronald Y. Chang;Wei-Ho Chung
Stepped-frequency continuous-wave (SFCW) radar has emerged as a promising technology for noncontact vital sign monitoring, particularly in multisubject scenarios. Compared to other radar modalities, SFCW provides fine range resolution and stable phase response, making it well-suited for capturing subtle physiological movements in complex environments. In this work, we propose a respiration rate detection framework using a multiple-input multiple-output (MIMO) SFCW radar system. The core of our method is a jointly optimized spatial filter, derived from a constrained optimization problem, which suppresses interference in both angular and distance domains. The solution is obtained using the Lagrange multiplier method, enabling efficient and robust spatial filtering. To enhance signal robustness, we further introduce a 3-channel spatial diversity strategy that leverages not only the target’s direct path but also two neighboring spatial channels selected based on the physical chest width. This design helps mitigate spatial ambiguity and improve signal quality in the presence of multiple subjects. The filtered signals are then transformed using the Fourier transform to estimate the respiratory frequencies. Experimental results on a public radar dataset validate the effectiveness of the proposed approach, demonstrating lower estimation errors and improved multisubject separation performance compared to existing methods.
"Multisubject Respiration Rate Detection via Angle-Distance Domain Interference Suppression in MIMO-SFCW Radar," IEEE Sensors Journal, vol. 26, no. 5, pp. 7743-7754. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11358841
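The paper's joint angle-distance filter is not given in the abstract. Its closest classical analogue is the Capon (MVDR) beamformer, which solves the same type of constrained problem with a Lagrange multiplier: minimize output power $w^H R w$ subject to the distortionless constraint $w^H a = 1$, giving $w = R^{-1}a / (a^H R^{-1} a)$. A sketch with an assumed 4-element half-wavelength uniform linear array and a single interfering subject:

```python
import numpy as np

def mvdr_weights(R, a):
    """Capon/MVDR solution of: min w^H R w  subject to  w^H a = 1."""
    Ri_a = np.linalg.solve(R, a)        # R^{-1} a without forming the inverse
    return Ri_a / (a.conj() @ Ri_a)

n = 4                                   # half-wavelength-spaced ULA elements
steer = lambda th: np.exp(1j * np.pi * np.arange(n) * np.sin(th))
a_t = steer(np.deg2rad(0.0))            # target (monitored chest) direction
a_i = steer(np.deg2rad(20.0))           # interfering subject direction
# covariance: strong interferer plus unit-power noise
R = 10.0 * np.outer(a_i, a_i.conj()) + np.eye(n)

w = mvdr_weights(R, a_t)
gain_target = abs(w.conj() @ a_t)       # distortionless: exactly 1
gain_interf = abs(w.conj() @ a_i)       # interferer strongly suppressed
```

The paper extends this idea by constraining jointly over angle and distance and by combining three neighboring spatial channels, which this single-constraint sketch does not capture.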
Exploring brain regions associated with cognitive impairment has been an important area of neuroimaging research. Brain networks extracted from functional magnetic resonance imaging (fMRI) have shown promising performance in cognitive disorder diagnosis. Graph convolutional networks exhibit robust feature extraction and show good performance in brain disorder diagnosis. Traditional brain networks model pairs of brain region relationships to encode the whole brain via graph neural networks (GNNs). However, the local heterogeneity of brain networks is ignored, and the performance is not satisfactory. To explore reliable patterns of brain networks, we propose Neurofield-attentive graph learning (Neurofield-AGL), an advanced brain network analysis framework for discovering neurobiomarkers of brain cognitive disorders. First, we construct the brain network for each subject from fMRI. Considering the heterogeneity of localized regions of the brain network, we mine a Neurofield for each brain region through the local topology of the brain network, emphasizing the relevant brain regions that are important to the current brain region. The Neurofield topology representation is encoded into node features through Neurofield encoding. We further propose the Neurofield-aware graph network to obtain discriminative representations of brain regions from intra- and inter-Neurofield. Finally, the context-driven feature synergy fuses cross-layer contextual embeddings to get the final graph embedding for prediction. We apply Neurofield-AGL to ASD diagnostics on the autism brain imaging data exchange (ABIDE) dataset and MDD diagnostics on the Zhongdaxinxiang dataset. Comprehensive experiments show that Neurofield-AGL significantly outperforms the state-of-the-art methods, demonstrating its potential to understand and diagnose brain cognitive disorders.
"NeuroField-AGL: NeuroField-Attentive Graph Learning on Functional Connectivity for Mental Disorder Diagnosis," Yueying Li;Jiaxing Li;Yue Zhou;Youyong Kong;Yonggui Yuan, IEEE Sensors Journal, vol. 26, no. 5, pp. 7730-7742.
Pub Date: 2026-01-19 | DOI: 10.1109/JSEN.2026.3653546
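The first step, constructing a brain network per subject from fMRI, is commonly done with Pearson correlation between ROI time series followed by thresholding. A minimal sketch of that standard construction (ROI count, series length, and the 0.3 threshold are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def functional_connectivity(ts, threshold=0.3):
    """Pearson-correlation brain network from ROI time series.

    ts : (n_rois, n_timepoints) BOLD signals, one row per brain region
    Returns the full correlation matrix and a thresholded binary
    adjacency with self-loops removed.
    """
    corr = np.corrcoef(ts)
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)            # no self-loops
    return corr, adj

rng = np.random.default_rng(4)
base = rng.normal(size=(1, 100))        # shared signal for two correlated ROIs
ts = np.vstack([base + 0.3 * rng.normal(size=(2, 100)),  # ROIs 0, 1: coupled
                rng.normal(size=(2, 100))])              # ROIs 2, 3: independent
corr, adj = functional_connectivity(ts)
```

The Neurofield mining described in the abstract would then operate on the local topology of `adj` around each region, rather than on the global graph alone.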