Ran Wang;Cheng Xu;Jing Sun;Shihong Duan;Xiaotong Zhang
{"title":"基于强化学习补偿滤波器的多代理合作定位系统","authors":"Ran Wang;Cheng Xu;Jing Sun;Shihong Duan;Xiaotong Zhang","doi":"10.1109/JSAC.2024.3414599","DOIUrl":null,"url":null,"abstract":"In modern navigation and positioning systems, accurate location information is crucial for ensuring system performance and user experience. Particularly, in scenarios involving the use of multiple agents such as robots and drones for rescue operations in unknown complex environments, accurate localization is fundamental for subsequent actions. However, traditional filtering-based localization algorithms may exhibit suboptimal performance and are sensitive to initial estimates and system noise. To address these issues, this paper proposes a multi-agent collaborative localization algorithm based on reinforcement learning compensation filtering to tackle localization problems in complex environments and improve the robustness and accuracy. Specifically, this paper introduces a value decomposition-based reinforcement learning network for filtering compensation to reduce overall localization error and address the credit allocation problem in multi-agent reinforcement learning. The main contributions of this paper are as follows: Firstly, a local localization estimation method based on reinforcement learning compensation Extended Kalman Filter (EKF) is proposed, which further corrects the results of the EKF algorithm and eliminates initial estimation errors. Secondly, a global collaborative localization estimation algorithm (MARL_CF) based on credit allocation in multi-agent reinforcement learning is proposed, which maximizes the reduction of overall localization error through information sharing and global optimization. Finally, the effectiveness of the proposed algorithms is validated through both numerical simulation and physical experiments. 
The results demonstrate that the proposed MARL_CF significantly improve the accuracy and robustness of localization in complex environments.","PeriodicalId":73294,"journal":{"name":"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cooperative Localization for Multi-Agents Based on Reinforcement Learning Compensated Filter\",\"authors\":\"Ran Wang;Cheng Xu;Jing Sun;Shihong Duan;Xiaotong Zhang\",\"doi\":\"10.1109/JSAC.2024.3414599\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In modern navigation and positioning systems, accurate location information is crucial for ensuring system performance and user experience. Particularly, in scenarios involving the use of multiple agents such as robots and drones for rescue operations in unknown complex environments, accurate localization is fundamental for subsequent actions. However, traditional filtering-based localization algorithms may exhibit suboptimal performance and are sensitive to initial estimates and system noise. To address these issues, this paper proposes a multi-agent collaborative localization algorithm based on reinforcement learning compensation filtering to tackle localization problems in complex environments and improve the robustness and accuracy. Specifically, this paper introduces a value decomposition-based reinforcement learning network for filtering compensation to reduce overall localization error and address the credit allocation problem in multi-agent reinforcement learning. 
The main contributions of this paper are as follows: Firstly, a local localization estimation method based on reinforcement learning compensation Extended Kalman Filter (EKF) is proposed, which further corrects the results of the EKF algorithm and eliminates initial estimation errors. Secondly, a global collaborative localization estimation algorithm (MARL_CF) based on credit allocation in multi-agent reinforcement learning is proposed, which maximizes the reduction of overall localization error through information sharing and global optimization. Finally, the effectiveness of the proposed algorithms is validated through both numerical simulation and physical experiments. The results demonstrate that the proposed MARL_CF significantly improve the accuracy and robustness of localization in complex environments.\",\"PeriodicalId\":73294,\"journal\":{\"name\":\"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE journal on selected areas in communications : a publication of the IEEE Communications Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10557677/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal on selected areas in communications : a publication of the IEEE Communications 
Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10557677/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cooperative Localization for Multi-Agents Based on Reinforcement Learning Compensated Filter
In modern navigation and positioning systems, accurate location information is crucial to system performance and user experience. In particular, when multiple agents such as robots and drones carry out rescue operations in unknown, complex environments, accurate localization is fundamental to all subsequent actions. However, traditional filtering-based localization algorithms may perform suboptimally and are sensitive to initial estimates and system noise. To address these issues, this paper proposes a multi-agent collaborative localization algorithm based on reinforcement learning compensated filtering, which tackles localization in complex environments and improves robustness and accuracy. Specifically, the paper introduces a value-decomposition-based reinforcement learning network for filter compensation that reduces overall localization error and addresses the credit assignment problem in multi-agent reinforcement learning. The main contributions are as follows. Firstly, a local localization estimation method based on a reinforcement-learning-compensated Extended Kalman Filter (EKF) is proposed, which further corrects the EKF output and eliminates initial estimation errors. Secondly, a global collaborative localization estimation algorithm (MARL_CF), based on credit assignment in multi-agent reinforcement learning, is proposed; it minimizes the overall localization error through information sharing and global optimization. Finally, the effectiveness of the proposed algorithms is validated through both numerical simulation and physical experiments. The results demonstrate that MARL_CF significantly improves the accuracy and robustness of localization in complex environments.
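The core idea of the first contribution — a standard EKF measurement update whose posterior estimate is then corrected by a learned compensation term — can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the measurement model is taken as linear for simplicity, and the `toy_compensation` function is a hypothetical stand-in for the trained reinforcement-learning policy, which is not reproduced here.

```python
import numpy as np

def ekf_update(x, P, z, H, R, compensate):
    """One (linearized) Kalman measurement update, followed by an
    additive correction from a learned compensation function."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_upd = x + K @ y                    # standard filter posterior mean
    P_upd = (np.eye(len(x)) - K @ H) @ P
    # Learned compensation: corrects residual bias the filter model misses
    x_comp = x_upd + compensate(x_upd, y)
    return x_comp, P_upd

def toy_compensation(x, innovation):
    # Hypothetical stand-in for the RL policy: a small fixed correction
    # proportional to the innovation, padded to the state dimension.
    return 0.1 * np.concatenate([innovation,
                                 np.zeros(len(x) - len(innovation))])

# Example: 4-D state [px, py, vx, vy] with a position-only measurement.
x0 = np.zeros(4)
P0 = np.eye(4)
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])
R = 0.01 * np.eye(2)
z = np.array([1.0, 2.0])

x1, P1 = ekf_update(x0, P0, z, H, R, toy_compensation)
```

In the paper the compensation is produced by a value-decomposition-based multi-agent network and applied per agent, with a shared global objective; the sketch above shows only where such a correction plugs into the filter loop.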