Pub Date: 2024-11-13 | DOI: 10.1109/TCYB.2024.3485230
Tianyu Liu;Lu Liu
This article investigates periodic event-triggered optimal output consensus of heterogeneous linear multiagent systems in which each agent has knowledge of only its own cost function. In contrast to existing results, we consider communication delays and general strongly connected digraphs. A novel periodic event-triggered distributed control scheme is proposed that allows asynchronous event detection and time-varying communication delays. Sufficient conditions on the maximum allowable communication delay and event detection period for achieving asymptotic optimal output consensus are established. Moreover, it is proved that the proposed periodic event-triggering mechanism guarantees a positive lower bound on interevent times that is independent of the event detection period. A simulation example illustrates the effectiveness of the proposed control scheme.
Title: Periodic Event-Triggered Optimal Output Consensus of Heterogeneous Multiagent Systems Subject to Communication Delays. IEEE Transactions on Cybernetics, vol. 55, no. 1, pp. 355-368.
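A minimal, hypothetical sketch of the periodic event-triggering idea (the function name, threshold rule, and directed-ring topology are illustrative assumptions, not the paper's control law): agents on a strongly connected digraph run their event detector only at sampling instants t = k*h, and rebroadcast their state when it drifts more than a threshold sigma from the last broadcast value.

```python
def pet_consensus(x0, h=0.05, sigma=0.05, steps=600):
    """Periodic event-triggered (PET) consensus on a directed ring.

    Each agent checks its trigger only every detection period h; between
    events, neighbours keep using the last-broadcast (held) state.
    """
    x = list(map(float, x0))
    n = len(x)
    x_hat = x[:]              # last-broadcast states seen by neighbours
    events = 0
    for _ in range(steps):
        # periodic event detection at the sampling instant
        for i in range(n):
            if abs(x[i] - x_hat[i]) > sigma:
                x_hat[i] = x[i]          # broadcast the fresh state
                events += 1
        # directed-ring consensus dynamics driven by broadcast information:
        # agent i listens only to agent (i+1) mod n
        x = [x[i] + h * (x_hat[(i + 1) % n] - x_hat[i]) for i in range(n)]
    return x, events
```

With a fixed threshold this sketch reaches only practical (bounded-error) consensus; the paper's scheme additionally handles time-varying delays and proves a delay-independent positive lower bound on interevent times.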
Pub Date: 2024-11-13 | DOI: 10.1109/TCYB.2024.3489967
Adolfo Perrusquía;Weisi Guo
The cooperative nature of drone swarms poses risks to the smooth operation of services and the security of national facilities. The control objective of the swarm is, in most cases, obscured by the complex behaviors observed in each drone. It is paramount to understand what the control objective of the swarm is, whilst better understanding how the drones communicate with each other to achieve the desired task. To solve these issues, this article proposes a physics-informed multiagent inverse reinforcement learning (PI-MAIRL) approach that: 1) infers the control objective function, or reward function, from observational data and 2) uncovers the network topology by exploiting a physics-informed model of the dynamics of each drone. Together, these contributions enable a better understanding of the swarm's behavior, whilst allowing its objective to be inferred for experience inference and imitation learning. A physically uncoupled swarm scenario is considered in this study. Incorporating the physics-informed element yields an algorithm that is computationally more efficient than model-free IRL algorithms. Convergence of the proposed approach is verified using Lyapunov recursions on a global Riccati equation. Simulation studies are carried out to show the benefits and challenges of the approach.
Title: Uncovering Reward Goals in Distributed Drone Swarms Using Physics-Informed Multiagent Inverse Reinforcement Learning. IEEE Transactions on Cybernetics, vol. 55, no. 1, pp. 14-23.
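The physics-informed idea, using a known dynamics model to recover the reward instead of fitting it model-free, can be illustrated with a scalar toy problem (an assumed stand-in, not the paper's PI-MAIRL algorithm): given dynamics x' = a*x + b*u and an observed linear policy u = -k*x, the quadratic reward weight q follows in closed form from the algebraic Riccati equation. Both function names and the scalar setting are illustrative.

```python
import math

def recover_reward_weight(a, b, r, k_observed):
    """Inverse step: recover q from an observed LQR gain k = b*p/r,
    using the scalar ARE  2*a*p - (b*p)**2/r + q = 0  rearranged for q."""
    p = r * k_observed / b
    return (b * p) ** 2 / r - 2.0 * a * p

def lqr_gain(a, b, r, q):
    """Forward step: stabilizing solution of the scalar ARE, then k = b*p/r."""
    p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    return b * p / r
```

Round-tripping through `lqr_gain` and `recover_reward_weight` returns the original q, which is the consistency property a model-based IRL scheme exploits; the matrix-valued, multiagent version naturally leads to the global Riccati recursions mentioned in the abstract.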
Pub Date: 2024-11-13 | DOI: 10.1109/TCYB.2024.3487845
Fan Yang;Wenrui Chen;Haoran Lin;Sijie Wu;Xin Li;Zhiyong Li;Yaonan Wang
A primary challenge in robotic tool use is achieving precise manipulation with dexterous robotic hands that mimics human actions. This requires understanding human tool use and allocating specific functions to each robotic finger for fine control. Existing work has primarily focused on the overall grasping capabilities of robotic hands, often neglecting the functional allocation among individual fingers during object interaction. In response, we introduce a semantic knowledge-driven approach that distributes functions among fingers for tool manipulation. Central to this approach is the finger-to-function (F2F) knowledge graph, which captures human expertise in tool use and establishes relationships between tool attributes, tasks, and manipulation elements, including functional fingers, components, required force, and gestures. We also develop a manipulation element-oriented prediction algorithm based on knowledge graph semantic embedding, which improves the speed and accuracy of manipulation element prediction. Additionally, we propose the functionality-integrated adaptive force feedback manipulation (FAFM) module, which combines manipulation elements with adaptive force feedback to achieve precise finger-level control. Our framework does not rely on extensive annotated data for supervision but instead uses semantic constraints from F2F to guide tool manipulation. The proposed method demonstrates superior performance and generalizability in real-world scenarios, achieving an 8% higher success rate in grasping and manipulating representative tool instances than existing state-of-the-art methods. The dataset and code are available at https://github.com/yangfan293/F2F
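A minimal sketch of how a finger-to-function query might behave (the triples and field names below are invented for illustration; the actual F2F graph and its embedding-based prediction model live in the linked repository): a (tool, task) pair maps to the manipulation elements named in the abstract, i.e. functional fingers, component, required force, and gesture.

```python
# Toy stand-in for the F2F knowledge graph: hand-written (tool, task)
# triples instead of a learned graph with semantic embeddings.
F2F_TRIPLES = {
    ("screwdriver", "tighten"): {
        "functional_fingers": ["thumb", "index", "middle"],
        "component": "handle",
        "force": "medium",
        "gesture": "tripod",
    },
    ("hammer", "strike"): {
        "functional_fingers": ["thumb", "index", "middle", "ring", "pinky"],
        "component": "handle",
        "force": "high",
        "gesture": "power",
    },
}

def predict_manipulation_elements(tool, task):
    """Return the manipulation elements for a (tool, task) query,
    or None when the pair is not covered by the graph."""
    return F2F_TRIPLES.get((tool, task))
```

The embedding-based predictor described in the abstract generalizes beyond such exact lookups to unseen tool-task pairs; this table only shows the shape of a query and its result.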