Eddy-current probes are widely used to enable noncontact, high-speed detection in the operation and maintenance of metal pipelines and sheets. This study proposes a spatially orthogonal differential eddy-current probe based on magnetic-flux directional extraction to address the issue of weak detection signals during low-frequency eddy-current testing of millimeter-scale surface defects on metal sheets. A simulation model of the probe is developed using COMSOL Multiphysics to analyze the magnetic-flux distribution and induced electromotive force (EMF) characteristics of both the traditional runway-shaped probe and the refined spatially orthogonal differential probe during defect detection. An experimental platform is constructed to compare defect signals of varying sizes at different detection speeds. Simulation results indicate that the induced EMF amplitude in the detection coil of the refined probe is approximately 3.3 times greater than that of the traditional runway-shaped differential eddy-current probe. Experimental findings confirm that the refined probe, operating at 2 m/s, can reliably detect defects with a width and depth of 0.5 mm.
{"title":"Flux-Directional Orthogonal Differential Probe for Low-Frequency Eddy-Current Nondestructive Testing","authors":"Junmei Tian;Jie Zhang;Wujun Kui;Xiaoguang Cao;Ziqi Liang","doi":"10.1109/JSEN.2025.3611949","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3611949","url":null,"abstract":"Eddy-current probes are widely used to enable noncontact, high-speed detection in the operation and maintenance of metal pipelines and sheets. This study proposes a spatially orthogonal differential eddy-current probe based on the magnetic-flux directional extraction to address the issue of weak detection signals during low-frequency eddy-current testing of millimeter-scale surface defects on metal sheets. A simulation model of the probe is developed using COMSOL Multiphysics to analyze the magnetic-flux distribution and induced electromotive force (EMF) characteristics of both traditional runway-shaped and refined spatially orthogonal differential probes during defect detection. An experimental platform is constructed to compare defect signals of varying sizes at different detection speeds. Simulation results indicate that the induced EMF amplitude in the detection coil of the refined probe is approximately 3.3 times greater than that of the traditional runway-shaped differential eddy-current probe. Experimental findings confirm that the refined probe, operating at 2 m/s, can reliably detect defects with a width and depth of 0.5 mm.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40651-40659"},"PeriodicalIF":4.3,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-25 | DOI: 10.1109/JSEN.2025.3612050
Jingqi Dong;Le Sun;Longmiao Chen;Kuan Wang
Motion capture (MoCap) technology can be implemented using computer vision (CV)-based target position sensing methods. In modern industrial applications, CV-based position measurement techniques are increasingly emerging as a promising alternative to traditional encoders in servo drives, offering the potential to reduce system costs while maintaining performance. Although MoCap technologies have made significant progress in the past decades, CV-based systems still face challenges related to limited measurement accuracy and delayed real-time responsiveness, especially in cost-sensitive applications where both precise position recognition and real-time response are critical. To resolve these limitations, this article presents a visual-electromechanical (EM) sensing fusion control framework. A color visual wavelet-Transformer (CVWT) network is designed that utilizes color features as input, effectively preserving critical information while reducing training complexity and computational cost. The CVWT network integrates a wavelet transform module with a Transformer module to perform multiscale and multilevel feature extraction and modeling on visual data acquired from dual cameras. In addition, electrical and mechanical models are incorporated into the state estimation framework, and an extended Kalman filter (EKF) is employed to fuse multisource perceptual data. The experimental results demonstrate that under a maximum rotational speed of 25 r/min, the system achieves a position control accuracy of up to 0.47°, validating the effectiveness and feasibility of the proposed method within a low-cost vision-based framework.
{"title":"Multieye Visual Fusion Encoderless Control With Permanent Magnet Synchronous Machines","authors":"Jingqi Dong;Le Sun;Longmiao Chen;Kuan Wang","doi":"10.1109/JSEN.2025.3612050","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612050","url":null,"abstract":"Motion capture (MoCap) technology can be implemented using computer vision (CV)-based target position sensing methods. In modern industrial applications, CV-based position measurement techniques are increasingly emerging as a promising alternative to traditional encoders in servo drives, offering the potential to reduce system costs while maintaining performance. Although MoCap technologies have made significant progress in the past decades, CV-based systems still face challenges related to limited measurement accuracy and delayed real-time responsiveness, especially in cost-sensitive applications where both precise position recognition and real-time response are critical. To resolve these limitations, this article presents a visual-electromechanical (EM) sensing fusion control framework. A color visual wavelet-Transformer (CVWT) network is designed that utilizes color features as input, effectively preserving critical information while reducing training complexity and computational cost. The CVWT network integrates a wavelet transform module with a Transformer module to perform multiscale and multilevel feature extraction and modeling on visual data acquired from dual cameras. In addition, electrical and mechanical models are incorporated into the state estimation framework, and an extended Kalman filter (EKF) is employed to fuse multisource perceptual data. The experimental results demonstrate that under a maximum rotational speed of 25 r/min, the system achieves a position control accuracy of up to 0.47°, validating the effectiveness and feasibility of the proposed method within a low-cost vision-based framework.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40901-40912"},"PeriodicalIF":4.3,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-25 | DOI: 10.1109/JSEN.2025.3612094
Ayesha Tooba Khan;Deepak Joshi;Biswarup Mukherjee
Understanding the force dynamics during object slippage is crucial for effectively improving manipulation dexterity. These force dynamics vary with the characteristics of the mechanical stimuli. This work is the first to explore force dynamics while considering the simultaneous effects of slip direction, distance, and speed variations. We performed an experiment with healthy individuals to explore how hand kinetics are modulated during the reflex and voluntary phases depending on the slip direction, slip distance, and slip speed. Our results reveal that the force dynamics depend significantly on the slip direction; however, the variation pattern differed between the reflex and voluntary phases of hand kinetics. We also observe that the force dynamics were modulated by significant interactions between slip distance and slip speed within a particular slip direction. The experiment was designed to closely mimic real-life object slippage, so the findings can contribute substantially to advanced sensorimotor rehabilitation strategies, haptic feedback systems, and mechatronic devices.
{"title":"Sensing Force Dynamics of Prehensile Grip During Object Slippage Using a Slip Inducing Device","authors":"Ayesha Tooba Khan;Deepak Joshi;Biswarup Mukherjee","doi":"10.1109/JSEN.2025.3612094","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3612094","url":null,"abstract":"Understanding the force dynamics during object slippage is crucial in effectively improving the manipulation dexterity. Force dynamics during object slippage will be varied based on the characteristics of the mechanical stimuli. This work is the first to explore force dynamics while considering the simultaneous effects of slip direction, distance, and speed variations. We performed the experiment with healthy individuals to explore how the hand kinetics will be modulated during the reflex and the voluntary phases based on the choice of slip direction, slip distance, and slip speed. Our results reveal that the force dynamics significantly depend on the slip direction. However, we observed that the variation pattern differed depending on the reflex and voluntary phases of the hand kinetics. We also observe that the force dynamics were modulated depending on the significant interactions of slip distance and slip speed in a particular slip direction. The experiment was designed to closely mimic the real-life scenario of object slippage. Thus, the findings can significantly contribute to advanced sensorimotor rehabilitation strategies, haptic feedback systems, and mechatronic devices.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40660-40667"},"PeriodicalIF":4.3,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-23 | DOI: 10.1109/JSEN.2025.3610989
Cong Cao;Guoli Bai;Ziyue Zhao;Yuxiang Yan;Huqiang Wang;Liang Sun
With the advancement of health monitoring technologies in the era of smart industry, vast amounts of sensor data are continuously collected from rotating machinery. However, labeling this data remains a major bottleneck in industrial applications. This article proposes a novel unsupervised learning framework for fault diagnosis, based on the assumption that sensor signals within adjacent time intervals exhibit high similarity in health states. By maximizing the proximity between non-overlapping, temporally adjacent sample segments, the proposed method effectively extracts discriminative features without requiring knowledge of the number of fault types. The approach is evaluated on three public benchmark datasets through unsupervised clustering and label matching. Experimental results show that the method significantly outperforms existing unsupervised techniques and achieves accurate label alignment without expert intervention.
{"title":"A Proximity-Based Unsupervised Feature Learning Framework for Rotating Machinery Sensor Data","authors":"Cong Cao;Guoli Bai;Ziyue Zhao;Yuxiang Yan;Huqiang Wang;Liang Sun","doi":"10.1109/JSEN.2025.3610989","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3610989","url":null,"abstract":"With the advancement of health monitoring technologies in the era of smart industry, vast amounts of sensor data are continuously collected from rotating machinery. However, labeling this data remains a major bottleneck in industrial applications. This article proposes a novel unsupervised learning framework for fault diagnosis, based on the assumption that sensor signals within adjacent time intervals exhibit high similarity in health states. By maximizing the proximity between non-overlapping, temporally adjacent sample segments, the proposed method effectively extracts discriminative features without requiring knowledge of the number of fault types. The approach is evaluated on three public benchmark datasets through unsupervised clustering and label matching. Experimental results show that the method significantly outperforms existing unsupervised techniques and achieves accurate label alignment without expert intervention.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40892-40900"},"PeriodicalIF":4.3,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncrewed aerial vehicles (UAVs) possess high maneuverability and wide viewing angles, rendering them ideal as flying base stations (BSs) for resource-constrained Internet of Things (IoT) sensors. For real-time information acquisition and sustainable energy support for numerous IoT devices, an appropriate number of UAVs must be deployed efficiently for data and energy transfer tasks. However, existing methods face challenges in minimizing the average age of information (AoI) due to the complex coupling between trajectory planning and transmission scheduling decisions and the need for efficient coordination in resource-constrained UAV networks. These domain-specific challenges require specialized solutions that effectively balance information freshness and energy efficiency. To address these challenges, we first decompose the scheduling problem into two subproblems: trajectory optimization and transmission optimization. Based on this decomposition, we propose a hierarchical trajectory optimization and transmission scheduling (HTOTS) algorithm based on hierarchical reinforcement learning. The HTOTS algorithm employs deep reinforcement learning (DRL) to sense environmental states in real time and dynamically adjust UAV flight trajectories and information acquisition, ensuring an effective balance between data and energy transfer. The two subproblems are solved alternately through hierarchical reinforcement learning, which significantly reduces the complexity of each subproblem and improves convergence efficiency. Simulation results show that the proposed HTOTS algorithm outperforms existing methods in terms of average AoI and energy efficiency for various network scales and energy constraints.
{"title":"Trajectory Optimization for UAV-Assisted Communications Based on Hierarchical Reinforcement Learning","authors":"Huaguang Shi;Zichao Yu;Wenhao Yan;Wei Li;Lei Shi;Tianyong Ao;Yi Zhou","doi":"10.1109/JSEN.2025.3610107","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3610107","url":null,"abstract":"Uncrewed aerial vehicles (UAVs) possess high maneuverability and wide viewing angles, rendering them ideal as flying base stations (BSs) for resource-constrained Internet of Things (IoT) sensors. For real-time information acquisition and sustainable energy support for numerous IoT devices, an appropriate number of UAVs is required to be efficiently deployed for data and energy transfer tasks. However, existing methods face challenges in minimizing the average age of information (AoI) due to the complex coupling between trajectory planning and transmission scheduling decisions and the need for efficient coordination in resource-constrained UAV networks. These domain-specific challenges require specialized solutions that effectively balance information freshness and energy efficiency. To address these challenges, we first decompose the scheduling problem into two subproblems: trajectory optimization and transmission optimization. Based on this decomposition, we propose a hierarchical trajectory optimization and transmission scheduling (HTOTS) algorithm based on hierarchical reinforcement learning. The HTOTS algorithm employs deep reinforcement learning (DRL) to sense environmental states in real-time and dynamically adjust UAV flight trajectories and information acquisition, ensuring an effective balance between data and energy transfer. These subproblems are solved alternately through hierarchical reinforcement learning, which significantly reduces the complexity of each subproblem and improves convergence efficiency. Simulation results show that the proposed HTOTS algorithm outperforms existing methods in terms of average AoI and energy efficiency for various network scales and energy constraints.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40820-40833"},"PeriodicalIF":4.3,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Received signal strength (RSS)-based device-free localization (DFL) is commonly used in the Internet-of-Things (IoT) field. However, current DFL algorithms have limitations in terms of stability and accuracy, which hinder the widespread application of DFL. Current research on DFL predominantly revolves around sparse representation and deep learning. The sparse representation method requires building a suitable dictionary to achieve higher accuracy, while the deep learning method is affected by data volume and computational complexity. In contrast to traditional localization methods that rely on raw data features, this article proposes using the deep dictionary learning (DDL) framework to extract deep features. The extracted low-level and high-level features are not only used to construct a dictionary but also to reconstruct the testing data for DFL using sparse representation classification. This approach leverages the advantages of sparse representation and deep learning to achieve highly accurate localization. The proposed DDL model involves learning multiple dictionaries with varying descriptive capabilities to extract deep features from the observed signal through a layer-by-layer DDL process. For better dictionary learning, we introduce the minimax-concave penalty (MCP) for each layer of dictionary learning. Utilizing difference-of-convex (DC) programming, the formulated nonconvex problems are efficiently optimized. Furthermore, to enhance localization accuracy, the data are expanded to reinforce the essential features of DDL. The performance of the DCDDL algorithm was assessed using collected laboratory datasets and public datasets, demonstrating its superiority over existing localization algorithms.
{"title":"A Deep Dictionary Learning Framework for Device-Free Localization Based on Nonconvex Sparse Regularization and DC Programming","authors":"Benying Tan;Manman Wang;Yujie Li;Yongyun Lu;Shuxue Ding","doi":"10.1109/JSEN.2025.3605646","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3605646","url":null,"abstract":"Received signal strength (RSS)-based device-free localization (DFL) is commonly used in the Internet-of-Things (IoT) field. However, the current DFL algorithms have limitations in terms of stability and accuracy, which hinders the widespread application of DFL. Current research on DFL predominantly revolves around sparse representation and deep learning. The sparse representation method requires building a suitable dictionary to achieve higher accuracy, while the deep learning method is affected by data volume and computational complexity. In contrast to traditional localization methods that rely on raw data features, this article suggests using the deep dictionary learning (DDL) framework to extract depth features. Then, the extracted low-level and high-level features are not only used to construct a dictionary but also to reconstruct the testing data for DFL using the sparse representation classification. This approach leverages the advantages of sparse representation and deep learning to achieve highly accurate localization. The proposed DDL model involves learning multiple dictionaries with varying descriptive capabilities to extract deep features from the observed signal through a layer-by-layer DDL process. For better dictionary learning, we introduce the minimax-concave penalty (MCP) for each layer of dictionary learning. Utilizing the difference-of-convex (DC) programming, the formulated nonconvex problems are efficiently optimized. Furthermore, to enhance localization accuracy, the data are expanded to reinforce the essential features of DDL. The performance of the DCDDL algorithm was assessed using collected laboratory datasets and public datasets, demonstrating its superiority over existing localization algorithms.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40877-40891"},"PeriodicalIF":4.3,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145405402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-18 | DOI: 10.1109/JSEN.2025.3608900
Hang Zhang;Qi Li;Wenli Zhao;Yan Wu
Sleep stage classification is essential for diagnosing sleep disorders and assessing sleep quality. However, achieving accurate classification remains a challenge because of the non-Euclidean spatial distribution of electroencephalography (EEG) electrodes. Existing methods often focus on a single dimension (e.g., the temporal, spectral, or spatial domain) or at most two dimensions of EEG signals, failing to fully capture their temporal–spectral–spatial multidimensional features and dynamic correlations. In addition, the heterogeneity of and complex interactions between brain regions, as well as individual variability, further hinder robust classification. To tackle this challenge, this article presents a new heterogeneous graph adaptive neural network (THSSleepNet), which employs temporal–spectral–spatial multidimensional feature fusion. The network represents EEG signals as a heterogeneous graph sequence structure, incorporating the characteristics of their temporal–spatial and spectral–spatial correlations. THSSleepNet comprises two branches, temporal–spatial streams and spectral–spatial streams, enabling comprehensive feature extraction across the temporal, spectral, and spatial dimensions. The model incorporates local and global temporal–spatial and spectral–spatial heterogeneity of brain regions using the dynamic multiscale path generation module (DMPGM). In addition, the graph attention module captures intricate interactions among brain regions, while the temporal/spectral adaptive module adaptively accounts for cross-scale dynamic context dependence across temporal–spectral dimensions. Subsequently, a hierarchical feature pyramid fusion (HFPF) module is employed to fuse temporal–spectral–spatial features of EEG signals. In addition, a domain adversarial learning mechanism mitigates the effect of individual variability on classification performance. The experimental results indicate that THSSleepNet surpasses current methods on the publicly available datasets, demonstrating its great potential for use in sensor-based EEG signal analysis and sleep monitoring.
{"title":"Temporal–Spectral–Spatial Multidimensional Feature Fusion-Based Heterogeneous Graph Adaptive Neural Network for Sleep Stage Classification","authors":"Hang Zhang;Qi Li;Wenli Zhao;Yan Wu","doi":"10.1109/JSEN.2025.3608900","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3608900","url":null,"abstract":"Sleep stage classification is essential for diagnosing sleep disorders and assessing sleep quality. However, achieving accurate classification remains a challenge because of the non-Euclidean spatial distribution of electroencephalography (EEG) electrodes. Existing methods often focus on a single dimension (e.g., the temporal, spectral, or spatial domain) or at most two dimensions of EEG signals, failing to fully capture their temporal–spectral–spatial multidimensional features and dynamic correlations. In addition, the heterogeneity of and complex interactions between brain regions, as well as individual variability, further hinder robust classification. To tackle this challenge, this article presents a new heterogeneous graph adaptive neural network (THSSleepNet), which employs temporal–spectral–spatial multidimensional feature fusion. The network represents EEG signals as a heterogeneous graph sequence structure, incorporating the characteristics of their temporal–spatial and spectral–spatial correlations. THSSleepNet comprises two branches, temporal–spatial streams and spectral–spatial streams, enabling comprehensive feature extraction across the temporal, spectral, and spatial dimensions. The model incorporates local and global temporal–spatial and spectral–spatial heterogeneity of brain regions using the dynamic multiscale path generation module (DMPGM). In addition, the graph attention module captures intricate interactions among brain regions, while the temporal/spectral adaptive module adaptively accounts for cross-scale dynamic context dependence across temporal–spectral dimensions. Subsequently, a hierarchical feature pyramid fusion (HFPF) module is employed to fuse temporal–spectral–spatial features of EEG signals. In addition, a domain adversarial learning mechanism mitigates the effect of individual variability on classification performance. The experimental results indicate that THSSleepNet surpasses current methods on the publicly available datasets, demonstrating its great potential for use in sensor-based EEG signal analysis and sleep monitoring.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40793-40805"},"PeriodicalIF":4.3,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-18 | DOI: 10.1109/JSEN.2025.3605144
Kai Huang;Guozhu Jia;Zeyu Jiao;Qun Wang;Feiyu Huang;Yingjie Cai
The evolution of the aviation industry has led to increased demands for reliability in aviation equipment, underscoring the need for dependable system operation, reduced production costs, and the prevention of unscheduled downtime. Traditional maintenance methods, such as fault correction and time-based preventive maintenance, are becoming increasingly inadequate due to the heightened complexity and precision requirements of modern aviation equipment. To this end, an adaptive spatiotemporal graph attention network (GAT) based on relation mining is proposed for predicting the remaining useful life (RUL) of complex aviation equipment. This method starts by processing raw equipment data through a temporal extractor, capturing time-dependent patterns and inherent features. It then applies a relation mining algorithm, inspired by the Decision Making Trial and Evaluation Laboratory (DEMATEL) method, to identify multiorder coupling relationships among sensor data, creating a dynamic relationship matrix that encapsulates these temporal features. This matrix, along with the temporal data, is integrated into a spatiotemporal graph neural network (GNN) for effective information fusion, emphasizing key features from both the spatial and temporal domains. Compared with state-of-the-art methods, the experimental results on the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset demonstrate superior performance, with root mean square error (RMSE) improvements of 1.32%, 0.77%, 6.06%, and 14.98% across the four subsets, respectively. By merging traditional DEMATEL relationship mining with GNN technology and embedding artificial intelligence within domain knowledge to model complex systems, this method accurately predicts RUL within complex aviation systems, demonstrating superior efficacy and performance. The proposed method offers significant potential for enhancing system reliability and safety in the aviation industry.
{"title":"Remaining Useful Life Prediction Through Adaptive Spatiotemporal Graph Neural Network Based on Relationship Mining for Complex Aviation Equipment","authors":"Kai Huang;Guozhu Jia;Zeyu Jiao;Qun Wang;Feiyu Huang;Yingjie Cai","doi":"10.1109/JSEN.2025.3605144","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3605144","url":null,"abstract":"The evolution of the aviation industry has led to increased demands for enhanced reliability in aviation equipment, underscoring the need for reliable system operations, reduced production costs, and the prevention of unscheduled downtimes. Traditional maintenance methods, such as fault correction and time-based preventive maintenance, are becoming increasingly inadequate due to the heightened complexity and precision requirements of modern aviation equipment. To this end, an adaptive spatiotemporal graph attention network (GAT) based on relation mining is proposed for predicting the remaining useful life (RUL) of complex aviation equipment. This method starts by processing raw equipment data through a temporal extractor, capturing time-dependent patterns and inherent features. It then applies a relation mining algorithm, inspired by the Decision Making Trial and Evaluation Laboratory (DEMATEL) method, to identify multiorder coupling relationships among sensor data, creating a dynamic relationship matrix that encapsulates these temporal features. This matrix, along with temporal data, is integrated into a spatiotemporal graph neural network (GNN) for effective information fusion, emphasizing key features from both spatial and temporal domains. Compared with the state-of-the-art methods, the experimental results on the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset demonstrate the superior performance, with the root mean square error (RMSE) value improvements of 1.32%, 0.77%, 6.06%, and 14.98% across four subsets, respectively. By merging traditional DEMATEL relationship mining with GNN technology and embedding artificial intelligence within domain knowledge to model complex systems, this method accurately predicts RUL within complex aviation systems, demonstrating superior efficacy and performance. The proposed method offers significant potential for enhancing system reliability and safety in the aviation industry.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40775-40792"},"PeriodicalIF":4.3,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This work considers optimal node pairing and channel allocation in downlink (DL) wireless sensor networks (WSNs) with multirate (MR)-nonorthogonal multiple access (NOMA). The objective is to maximize the network sum-rate and improve the connectivity of IoT devices (IoDs) while satisfying the quality-of-service (QoS) and bit error rate (BER) requirements. The IoD channel allocation and pairing processes are formulated as a mixed integer linear programming problem, where the BER expressions are derived in closed form for the two-IoD scenario over a Nakagami-m fading channel. To solve the optimization problem, an efficient band elimination algorithm (BEA) is proposed to reduce the complexity of the branch and bound (BB) algorithm. The obtained results show that pairing IoDs with different transmission rates can improve the network sum-rate and connectivity by 26% and 39%, respectively, compared to single-symbol rate (SR)-NOMA. Moreover, in another scenario, MR-NOMA demonstrated its efficacy by achieving connectivity for all IoDs, distinctly outperforming conventional SR-NOMA, which managed to connect only 66% of the IoDs, even at high signal-to-noise ratios (SNRs). The proposed BEA technique is shown to significantly reduce the BB complexity, particularly at low SNRs, where the complexity reduction exceeds 90%.
{"title":"Sum-Rate Maximization of Multirate NOMA-Based WSNs","authors":"Zainab Khader;Arafat Al-Dweik;Emad Alsusa;Mohamed Abou-Khousa","doi":"10.1109/JSEN.2025.3609565","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3609565","url":null,"abstract":"This work considers optimal node pairing and channel allocation in downlink (DL) wireless sensor networks (WSNs) with multirate (MR)-nonorthogonal multiple access (NOMA). The objective is to maximize the network sum-rate and improve the IoT devices (IoDs) connectivity while satisfying the quality-of-service (QoS), bit error rate (BER) requirements. The IoD channel allocation and pairing processes are formulated as a mixed integer linear programming problem where the BER expressions are derived in closed form for the two-IoD scenario over a Nakagami-m fading channel. To solve the optimization problem, an efficient band elimination algorithm (BEA) is proposed to reduce the complexity of the branch and bound (BB) algorithm. The obtained results show that pairing IoDs with different transmission rates can improve the network sum-rate and connectivity by 26% and 39%, respectively, compared to single-symbol rate (SR)-NOMA. Moreover, in another scenario, MR-NOMA demonstrated its efficacy by achieving connectivity for all IoDs, distinctly outperforming conventional SR-NOMA, which managed to connect only 66% of the IoDs, even at high signal-to-noise ratios (SNRs). The proposed BEA technique is shown to significantly reduce the BB complexity, particularly at low SNRs where complexity reduction exceeds 90%.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 21","pages":"40806-40819"},"PeriodicalIF":4.3,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-16 | DOI: 10.1109/JSEN.2025.3608298
Haozhu Wang;Du Jiang;Juntong Yun;Li Huang;Yuanmin Xie;Baojia Chen;Meng Jia;Ying Sun
Surface electromyography (sEMG) is a promising approach for noninvasive gesture recognition in human–computer interaction and rehabilitation. However, existing high-accuracy models often incur high-computational costs, thereby limiting real-time deployment. To address this, we propose FSGR-Net, a lightweight residual network that reconstructs ResNet50 using a small-convolution stacking strategy and a Lite-Fusion Block. The Lite-Fusion Block integrates depthwise separable convolution (DSC), ghost convolution (GC), and a channel compression–expansion mechanism to reduce redundancy. In particular, a frequency-enhanced channel attention mechanism (FECAM) is introduced after DSC layers to enhance discriminative features while mitigating the Gibbs phenomenon. Furthermore, a joint data augmentation strategy—time-shifting and masking—is applied to improve generalization. Evaluations on NinaPro DB1, DB5, and our SC-Myo datasets show that FSGR-Net achieves 93.17%, 87.83%, and 93.35% accuracy, respectively, with only 0.85 M parameters and 0.22 G FLOPs, demonstrating strong potential for deployment in mobile and low-power wearable systems.
{"title":"Lightweight Gesture Recognition Based on Depthwise Separable Convolution and FECAM Attention Mechanism for sEMG","authors":"Haozhu Wang;Du Jiang;Juntong Yun;Li Huang;Yuanmin Xie;Baojia Chen;Meng Jia;Ying Sun","doi":"10.1109/JSEN.2025.3608298","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3608298","url":null,"abstract":"Surface electromyography (sEMG) is a promising approach for noninvasive gesture recognition in human–computer interaction and rehabilitation. However, existing high-accuracy models often incur high-computational costs, thereby limiting real-time deployment. To address this, we propose FSGR-Net, a lightweight residual network that reconstructs ResNet50 using a small-convolution stacking strategy and a Lite-Fusion Block. The Lite-Fusion Block integrates depthwise separable convolution (DSC), ghost convolution (GC), and a channel compression–expansion mechanism to reduce redundancy. In particular, a frequency-enhanced channel attention mechanism (FECAM) is introduced after DSC layers to enhance discriminative features while mitigating the Gibbs phenomenon. Furthermore, a joint data augmentation strategy—time-shifting and masking—is applied to improve generalization. Evaluations on NinaPro DB1, DB5, and our SC-Myo datasets show that FSGR-Net achieves 93.17%, 87.83%, and 93.35% accuracy, respectively, with only 0.85 M parameters and 0.22 G FLOPs, demonstrating strong potential for deployment in mobile and low-power wearable systems.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"25 20","pages":"39273-39281"},"PeriodicalIF":4.3,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145289547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}