IEEE Transactions on Cybernetics

Decentralized Impulsive Control for Nonlinear Interconnected Systems Based on Dynamic Event-Triggered Mechanism
Weihao Pan, Xianfu Zhang, Lu Liu, Zhiyu Duan
Pub Date: 2026-01-19 | DOI: 10.1109/tcyb.2026.3651462
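No abstract is available for this entry, but the dynamic event-triggered idea named in the title can be illustrated in miniature. The sketch below, which is not the paper's controller, uses a standard scalar setup: a stable linear plant, a sampled-state feedback law, and an internal trigger variable eta whose dynamics relax the static threshold. All gains and the specific trigger rule are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a dynamic event-triggered update (illustrative, not the paper's design).
# Plant: x' = -a*x + u, with u = -k*x_hat held constant between triggering instants.
# Dynamic trigger variable: eta' = -lam*eta + (sigma*x**2 - e**2), where e = x_hat - x.
# An event fires (the state is resampled) when eta + theta*(sigma*x**2 - e**2) < 0.
a, k, lam, sigma, theta = 1.0, 2.0, 1.0, 0.1, 1.0
dt, steps = 1e-3, 5000
x, x_hat, eta = 1.0, 1.0, 1.0
events = 0
for _ in range(steps):
    e = x_hat - x
    if eta + theta * (sigma * x**2 - e**2) < 0:  # trigger condition violated -> sample
        x_hat = x
        events += 1
        e = 0.0
    u = -k * x_hat                               # control held at last sampled state
    x += dt * (-a * x + u)
    eta += dt * (-lam * eta + (sigma * x**2 - e**2))
print(events, abs(x))
```

The point of the dynamic variable eta is that it accumulates slack while the measurement error is small, so events fire far less often than a periodic sampler would while the state still converges.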
LASFNet: A Lightweight Attention-Guided Self-Modulation Feature Fusion Network for Multimodal Object Detection
Lei Hao, Lina Xu, Chang Liu, Yanni Dong
Pub Date: 2026-01-16 | DOI: 10.1109/TCYB.2025.3650459
Effective deep feature extraction via feature-level fusion is crucial for multimodal object detection. However, previous studies often involve complex training processes that integrate modality-specific features by stacking multiple feature-level fusion units, leading to significant computational overhead. To address this issue, we propose a lightweight attention-guided self-modulation feature fusion network (LASFNet), which adopts a single feature-level fusion unit to enable high-performance detection, thereby simplifying the training process. The attention-guided self-modulation feature fusion (ASFF) module adaptively adjusts the responses of fused features at both global and local levels, promoting comprehensive and enriched feature generation. Additionally, a lightweight feature attention transformation module (FATM) is designed at the neck of LASFNet to enhance the focus on fused features and minimize information loss. Extensive experiments on three representative datasets demonstrate that our approach achieves a favorable efficiency-accuracy tradeoff. Compared to state-of-the-art methods, LASFNet reduces the number of parameters and computational cost by as much as 90% and 85%, respectively, while improving detection accuracy (mean average precision, mAP) by 1%-3%. The code will be open-sourced at https://github.com/leileilei2000/LASFNet.
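The global-plus-local modulation described for the ASFF module can be sketched at the shape level. The fusion below is a naive additive stand-in and the gating layers are assumptions (the module's actual projections and weights are in the authors' repository, not here); it only shows how a channel-wise global response and an element-wise local response can jointly rescale fused features.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def asff_sketch(feat_rgb, feat_ir):
    """Illustrative attention-guided self-modulation fusion (shapes only;
    the real ASFF module's layers and learned weights are assumptions here).
    feat_rgb, feat_ir: (C, H, W) feature maps from the two modalities."""
    fused = feat_rgb + feat_ir                  # naive feature-level fusion
    g = sigmoid(fused.mean(axis=(1, 2)))        # global (channel-wise) response
    l = sigmoid(fused)                          # local (element-wise) response
    return fused * g[:, None, None] * l         # self-modulated fused features

rng = np.random.default_rng(0)
out = asff_sketch(rng.standard_normal((8, 4, 4)), rng.standard_normal((8, 4, 4)))
print(out.shape)
```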
Distributed Robust Optimization for Disturbed Multiagent Systems With Fixed-Time Synchronized Convergence
Tao Jiang, Yan Yan, Shuanghe Yu, Ge Guo
Pub Date: 2026-01-15 | DOI: 10.1109/tcyb.2026.3651567
This article investigates fixed-time synchronized convergence for disturbed second-order multiagent systems (MASs) in distributed optimization under the zero-gradient-sum (ZGS) scheme. A fixed-time ZGS distributed optimization method via sliding mode is first proposed for the second-order MASs, which avoids local minimization and rejects disturbances. To further achieve time-synchronized convergence, a hierarchical robust optimization method is then introduced. It employs a time-varying function-based local-minimization-free ZGS scheme within a virtual MAS to generate a reference signal that reaches the global cost function's minimizer and a fixed-time synchronized sliding mode tracking controller to drive the original second-order MAS to track this signal. Beyond the capabilities of the first protocol, this method also ensures the time-synchronized convergence of each agent's state components, low conservatism in terms of convergence time bounds, and privacy preservation. Numerical simulations demonstrate the effectiveness of the proposed methods.
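The core ZGS mechanism the abstract builds on can be demonstrated with quadratic local costs on a first-order integrator network (the paper treats disturbed second-order dynamics with sliding mode; this sketch shows only the underlying invariant). Each agent starts at its own local minimizer, so the gradients sum to zero initially, and the symmetric consensus flow preserves that sum, forcing agreement at the global minimizer. The costs, weights, and step size below are illustrative.

```python
import numpy as np

# Zero-gradient-sum (ZGS) sketch with quadratic local costs f_i(x) = a_i*(x - b_i)**2.
# Each agent starts at its local minimizer b_i, so sum_i grad f_i(x_i) = 0 at t = 0,
# and the flow x_i' = (f_i'')^{-1} * sum_j w_ij (x_j - x_i) preserves that sum
# (symmetric weights), so consensus lands on the global minimizer of sum_i f_i.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([4.0, 3.0, 2.0, 1.0])
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)       # ring graph, symmetric weights
x = b.copy()                                    # local-minimizer initialization
dt = 0.01
for _ in range(20000):
    x += dt * (W @ x - W.sum(axis=1) * x) / (2 * a)  # Hessian of f_i is 2*a_i
x_star = (a * b).sum() / a.sum()                # global minimizer of sum_i f_i
print(x, x_star)
```

Because the weighted gradient sum stays at zero, no agent ever needs to share its cost function, which hints at the privacy-preservation property the abstract mentions.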
Event-Triggered Predefined-Time Sensorless Prescribed and Personalized Compliant Performance Control for Teleoperation Systems
Longnan Li, Shaofan Guo, Lanyong Zhang, Chenguang Yang
Pub Date: 2026-01-14 | DOI: 10.1109/tcyb.2026.3650844
In this study, we develop an event-triggered predefined-time sensorless prescribed and personalized compliant performance control scheme for teleoperation systems. In the absence of force/torque sensors, a predefined-time torque behavior estimator (PTTBE) is designed, and its estimated values are applied to both the admittance structure and the control law. Then, a variable stiffness parameter related to the operator's surface electromyography (sEMG) signal is incorporated into the admittance structure. By integrating the PTTBE, a predefined-time sliding manifold, a predefined-time performance function, and an event-triggered mechanism involving time-scaling, error-scaling, and muscle activation-scaling functions, the PTTBE-based event-triggered predefined-time control (PTTBE-ETPTC) scheme is proposed. This scheme ensures not only that the tracking error converges to a residual set within a predefined time regardless of the system's initial state, but also that the error constraints are never violated. Compared with existing tracking control methods, the variable stiffness admittance structure, together with an event-triggered mechanism tied to predefined-time parameters and a variable reflecting the operator's intention, greatly enhances the system's flexibility. This enables a favorable balance between tracking performance in free motion and compliant performance in interaction/contact situations while reducing the control frequency. Simulations and experiments are carried out to demonstrate the effectiveness and practicality of the developed PTTBE-ETPTC scheme.
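A predefined-time performance function of the kind named in the abstract can be written down directly: an envelope rho(t) that shrinks from its initial value to its final value exactly at a user-chosen time Tp, independent of initial conditions, with the tracking error required to stay inside it. The particular polynomial form and exponent below are an illustrative choice, not the paper's function.

```python
# Sketch of a predefined-time performance function rho(t): it shrinks from rho0
# to rho_inf exactly at the user-chosen time Tp and stays there afterwards,
# independent of initial conditions. The constraint is then |error(t)| < rho(t).
# (The polynomial form and exponent h are illustrative assumptions.)
def rho(t, rho0=2.0, rho_inf=0.05, Tp=3.0, h=3):
    if t < Tp:
        return (rho0 - rho_inf) * ((Tp - t) / Tp) ** h + rho_inf
    return rho_inf

print(rho(0.0), rho(3.0), rho(10.0))
```

Unlike an exponential performance function, this envelope reaches its steady-state width at t = Tp exactly, which is what makes the convergence time "predefined" rather than merely bounded.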
Ensemble Encoder-Enabled Proactive Human Assembly Intention Recognition With Multimodal and Flexible Scale Data
Dongxu Ma, Chao Zhang, Guanghui Zhou, Chenchu Ma
Pub Date: 2026-01-14 | DOI: 10.1109/tcyb.2026.3650942
Human-robot collaboration (HRC) assembly necessitates precise mutual cognition to guarantee safe and efficient execution. In this context, human assembly intention recognition (HAIR) serves as a critical approach to achieving this mutual understanding. However, most current HAIR approaches struggle to extract sufficient spatiotemporal information from limited industrial data, particularly under complex conditions like varying scales and visual occlusions. To this end, this article proposes an ensemble encoder approach to extract and fuse spatial and temporal features from visual and skeleton streams of the HRC assembly process, thus significantly improving HAIR accuracy and efficiency. First, an RGB feature extraction encoder is designed to model spatiotemporal dependencies of the assembly process with different scales of features from flexible input RGB encoders (RGBEs). Distinctively, a cross-attention module is utilized to fuse information from different-scale RGBEs, ensuring comprehensive assembly action representation at different granularities. Second, to address the occlusion challenge, a mask-aware skeleton feature extraction encoder is devised. By utilizing frame and joint masking strategies, it robustly models the relationship between operator pose evolution and assembly actions, maintaining high performance even under occlusion. Third, a global feature fusion encoder integrates and aligns features from the RGB and skeleton feature extraction encoders. Experimental results demonstrate the state-of-the-art performance of the proposed approach, which achieves the highest accuracy of 99.12%, 99.23%, and 84.59% on the MCV-Intention, HA4M, and HA-VID datasets, respectively. Six ablation studies examine the effects of fusion positions, the number of depth channels, the cross-attention fusion module, occlusions, illumination, and computational efficiency.
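The cross-attention fusion of different-scale features mentioned above follows a standard pattern: tokens from one scale act as queries and attend over tokens from another scale. The minimal single-head version below uses random projections and made-up token counts; the paper's actual dimensions, heads, and learned weights are not reproduced here.

```python
import numpy as np

# Minimal single-head cross-attention between features of two scales
# (illustrative; projections are random stand-ins for learned weights).
def cross_attention(q_feat, kv_feat, d):
    rng = np.random.default_rng(0)
    Wq = rng.standard_normal((q_feat.shape[-1], d))
    Wk = rng.standard_normal((kv_feat.shape[-1], d))
    Wv = rng.standard_normal((kv_feat.shape[-1], d))
    Q, K, V = q_feat @ Wq, kv_feat @ Wk, kv_feat @ Wv
    scores = Q @ K.T / np.sqrt(d)                     # (Nq, Nkv) similarity
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # softmax over key positions
    return w @ V                                      # queries enriched by the kv stream

coarse = np.random.default_rng(1).standard_normal((16, 32))  # coarse-scale tokens
fine = np.random.default_rng(2).standard_normal((64, 32))    # fine-scale tokens
out = cross_attention(coarse, fine, 32)
print(out.shape)
```

The output keeps the query stream's token count, so each coarse-scale token is augmented with information gathered from all fine-scale tokens.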
Distributed Capturing Strategy in Heterogeneous Multiagent Pursuit-Evasion Games
Ran Shi, Hai-Tao Zhang, Jun Wang
Pub Date: 2026-01-14 | DOI: 10.1109/tcyb.2025.3650263
This article addresses a collective heterogeneous multiagent pursuit-evasion (MPE) game problem where pursuers cooperatively capture escaping evaders. The analytical challenge of the present design lies in solving the associated coupled Hamilton-Jacobi-Isaacs (HJI) equations induced by the additional interacting roles in the MPE game while ensuring the achievement of the Nash equilibrium. To tackle this issue, a gaming framework is accordingly proposed to solve the coupled HJI equations. Sufficient conditions are derived to guarantee both the capturability and Nash equilibrium of the proposed collective MPE gaming scheme. Finally, numerical simulations are conducted to verify the effectiveness of the present MPE gaming strategy.
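For intuition about capturability in such games, a toy single-pursuer chase is enough: with a speed advantage, even the simplest pursue/evade pair of strategies guarantees capture in finite time. This sketch is deliberately elementary and is not the paper's HJI-based multiagent strategy; positions, speeds, and the capture radius are arbitrary.

```python
import numpy as np

# Toy single-pursuer/single-evader chase (illustrative only; the paper's strategy
# solves coupled HJI equations for the multiagent game, which is not reproduced here).
# The pursuer heads straight at the evader; the evader flees along the same line.
# Since vp > ve, the separation shrinks at rate vp - ve until capture.
p = np.array([0.0, 0.0])        # pursuer position
e = np.array([5.0, 5.0])        # evader position
vp, ve, dt, capture_r = 2.0, 1.0, 0.01, 0.1
t = 0.0
while np.linalg.norm(e - p) > capture_r and t < 100.0:
    d = e - p
    u = d / np.linalg.norm(d)   # pursue: unit vector toward the evader
    p += dt * vp * u
    e += dt * ve * u            # evade: run directly away
    t += dt
print(round(t, 2), np.linalg.norm(e - p))
```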