Learning to Optimize State Estimation in Multi-Agent Reinforcement Learning-Based Collaborative Detection

IF 7.7 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Information Systems) · IEEE Transactions on Mobile Computing · Pub Date: 2024-08-19 · DOI: 10.1109/TMC.2024.3445583
Tianlong Zhou;Tianyi Shi;Hongye Gao;Weixiong Rao
Citations: 0

Abstract

In this paper, we study the collaborative detection problem in a multi-agent environment. By exploiting onboard range-bearing sensors, mobile agents make sequential control decisions, such as moving directions, to gather information about movable targets. To estimate target states, i.e., target location and velocity, classic approaches such as the Kalman Filter (KF) and Extended Kalman Filter (EKF) impractically assume that the underlying state-space model is fully known, while recent learning-based approaches, e.g., KalmanNet, estimate target states alone without quantifying estimation uncertainty and thus cannot make robust control decisions. To tackle these issues, we first propose a neural network-based state estimator, namely the TWo-phase KALman Filter with Uncertainty quanTification (WALNUT), to explicitly give both target states and estimation uncertainty. The developed multi-agent reinforcement learning (MARL) model then takes the learned target states and uncertainty as input and makes robust actions to track movable targets. Our extensive experiments demonstrate that our work outperforms the state of the art with higher tracking ability and lower localization error.
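For context on the classic baseline the abstract contrasts with WALNUT, one EKF predict/update cycle for a constant-velocity target observed by a single range-bearing sensor can be sketched as below. This is an illustrative sketch, not the paper's method: the constant-velocity motion model, the noise parameters, and the `ekf_step` signature are all assumptions made for the example.

```python
import numpy as np

def ekf_step(x, P, z, sensor_pos, dt=1.0, q=0.01, r_std=0.5, b_std=0.05):
    """One EKF predict/update cycle for a constant-velocity target.

    x: state [px, py, vx, vy]; P: state covariance;
    z: measurement [range, bearing] from a sensor at sensor_pos.
    All noise parameters are illustrative assumptions.
    """
    # --- predict: constant-velocity motion model ---
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    Q = q * np.eye(4)                       # assumed process noise
    x = F @ x
    P = F @ P @ F.T + Q

    # --- update: linearize the range-bearing measurement ---
    dx, dy = x[0] - sensor_pos[0], x[1] - sensor_pos[1]
    rng = np.hypot(dx, dy)
    h = np.array([rng, np.arctan2(dy, dx)])          # predicted measurement
    H = np.array([[dx / rng,       dy / rng,      0, 0],
                  [-dy / rng**2,   dx / rng**2,   0, 0]])  # Jacobian of h
    R = np.diag([r_std**2, b_std**2])                 # assumed sensor noise
    y = z - h
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi       # wrap bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

The returned covariance `P` is exactly the estimation uncertainty that, per the abstract, point-estimate learners such as KalmanNet do not provide and that WALNUT's downstream MARL policy consumes alongside the state estimate.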
Source Journal
IEEE Transactions on Mobile Computing (Engineering & Technology — Telecommunications)
CiteScore: 12.90 · Self-citation rate: 2.50% · Articles per year: 403 · Review time: 6.6 months
Journal profile: IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing. This includes (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering aspects like mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.