A collaborative distributed multi-agent reinforcement learning technique for dynamic agent shortest path planning via selected sub-goals in complex cluttered environments

D. Megherbi, Minsuk Kim
{"title":"A collaborative distributed multi-agent reinforcement learning technique for dynamic agent shortest path planning via selected sub-goals in complex cluttered environments","authors":"D. Megherbi, Minsuk Kim","doi":"10.1109/COGSIMA.2015.7108185","DOIUrl":null,"url":null,"abstract":"Collaborative monitoring of large infrastructures, such as military, transportation and maritime systems are decisive issues in many surveillance, protection, and security applications. In many of these applications, dynamic multi-agent systems using reinforcement learning for agents' autonomous path planning, where agents could be moving randomly to reach their respective goals and avoiding topographical obstacles intelligently, becomes a challenging problem. This is specially so in a dynamic agent environment. In our prior work we presented an intelligent multi-agent hybrid reactive and reinforcement learning technique for collaborative autonomous agent path planning for monitoring Critical Key Infrastructures and Resources (CKIR) in a geographically and a computationally distributed systems. Here agent monitoring of large environments is reduced to monitoring of relatively smaller track-able geographically distributed agent environment regions. In this paper we tackle this problem in the challenging case of complex and cluttered environments, where agents' initial random-walk paths become challenging and relatively nonconverging. Here we propose a multi-agent distributed hybrid reactive re-enforcement learning technique based on selected agent intermediary sub-goals using a learning reward scheme in a distributed-computing memory setting. Various case study scenarios are presented for convergence study to the shortest minimum-amount-of-time exploratory steps for faster and efficient agent learning. 
In this work the distributed dynamic agent communication is done via a Message Passing Interface (MPI).","PeriodicalId":373467,"journal":{"name":"2015 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision","volume":"2018 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2015-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2015 IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COGSIMA.2015.7108185","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

Collaborative monitoring of large infrastructures, such as military, transportation, and maritime systems, is a decisive issue in many surveillance, protection, and security applications. In many of these applications, autonomous path planning for dynamic multi-agent systems using reinforcement learning, where agents may move randomly to reach their respective goals while intelligently avoiding topographical obstacles, becomes a challenging problem. This is especially so in a dynamic agent environment. In our prior work we presented an intelligent multi-agent hybrid reactive and reinforcement-learning technique for collaborative autonomous agent path planning to monitor Critical Key Infrastructures and Resources (CKIR) in geographically and computationally distributed systems. There, agent monitoring of large environments is reduced to monitoring of relatively smaller, trackable, geographically distributed agent environment regions. In this paper we tackle this problem in the challenging case of complex and cluttered environments, where agents' initial random-walk paths become difficult and relatively nonconverging. We propose a multi-agent distributed hybrid reactive reinforcement-learning technique based on selected agent intermediary sub-goals, using a learning reward scheme in a distributed-computing memory setting. Various case-study scenarios are presented to study convergence toward the minimum number of exploratory steps, yielding faster and more efficient agent learning. In this work the distributed dynamic agent communication is done via the Message Passing Interface (MPI).
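The sub-goal idea described in the abstract can be sketched with a minimal tabular Q-learning example: rather than learning one long path through a cluttered grid, the agent learns two shorter legs, start to an intermediary sub-goal, then sub-goal to the final goal, and the greedy paths are stitched together. The grid layout, reward values, and hyperparameters below are illustrative assumptions for a single agent, not the paper's actual distributed multi-agent setup.

```python
import random

random.seed(0)  # deterministic sketch

# Hypothetical cluttered grid: S = start, M = intermediary sub-goal,
# G = final goal, # = obstacle. Layout is an assumption for illustration.
GRID = [
    "S....",
    ".##..",
    "..M..",
    ".##..",
    "....G",
]
ROWS, COLS = len(GRID), len(GRID[0])
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def find(ch):
    for r, row in enumerate(GRID):
        if ch in row:
            return (r, row.index(ch))

def step(pos, a):
    r, c = pos[0] + ACTIONS[a][0], pos[1] + ACTIONS[a][1]
    if 0 <= r < ROWS and 0 <= c < COLS and GRID[r][c] != "#":
        return (r, c)
    return pos  # blocked moves leave the agent in place

def train(start, goal, episodes=400, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning for one leg of the path (illustrative rewards)."""
    Q = {}
    for _ in range(episodes):
        pos = start
        for _ in range(100):
            q = Q.setdefault(pos, [0.0] * 4)
            a = random.randrange(4) if random.random() < eps else q.index(max(q))
            nxt = step(pos, a)
            reward = 10.0 if nxt == goal else -0.1  # small per-step cost
            nq = Q.setdefault(nxt, [0.0] * 4)
            q[a] += alpha * (reward + gamma * max(nq) - q[a])
            pos = nxt
            if pos == goal:
                break
    return Q

def greedy_path(Q, start, goal, limit=30):
    path, pos = [start], start
    while pos != goal and len(path) <= limit:
        pos = step(pos, Q[pos].index(max(Q[pos])))
        path.append(pos)
    return path

start, sub, goal = find("S"), find("M"), find("G")
# Two easier legs via the sub-goal, stitched into one full path.
leg1 = greedy_path(train(start, sub), start, sub)
leg2 = greedy_path(train(sub, goal), sub, goal)
path = leg1 + leg2[1:]
print("steps:", len(path) - 1)
```

Each leg converges quickly because the reward signal is closer to the agent at every stage; in the paper's setting this per-leg learning would additionally be distributed across agents exchanging state via MPI.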