Adaptation and Communication in Human-Robot Teaming to Handle Discrepancies in Agents' Beliefs about Plans

Yuening Zhang, B. Williams
{"title":"Adaptation and Communication in Human-Robot Teaming to Handle Discrepancies in Agents' Beliefs about Plans","authors":"Yuening Zhang, B. Williams","doi":"10.1609/icaps.v33i1.27226","DOIUrl":null,"url":null,"abstract":"When agents collaborate on a task, it is important that they have some shared mental model of the task routines -- the set of feasible plans towards achieving the goals. However, in reality, situations often arise that such a shared mental model cannot be guaranteed, such as in ad-hoc teams where agents may follow different conventions or when contingent constraints arise that only some agents are aware of. Previous work on human-robot teaming has assumed that the team has a set of shared routines, which breaks down in these situations. In this work, we leverage epistemic logic to enable agents to understand the discrepancy in each other's beliefs about feasible plans and dynamically plan their actions to adapt or communicate to resolve the discrepancy. We propose a formalism that extends conditional doxastic logic to describe knowledge bases in order to explicitly represent agents' nested beliefs on the feasible plans and state of execution. We provide an online execution algorithm based on Monte Carlo Tree Search for the agent to plan its action, including communication actions to explain the feasibility of plans, announce intent, and ask questions. Finally, we evaluate the success rate and scalability of the algorithm and show that our agent is better equipped to work in teams without the guarantee of a shared mental model.","PeriodicalId":239898,"journal":{"name":"International Conference on Automated Planning and Scheduling","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Automated Planning and Scheduling","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/icaps.v33i1.27226","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

When agents collaborate on a task, it is important that they have some shared mental model of the task routines -- the set of feasible plans for achieving the goals. In reality, however, situations often arise in which such a shared mental model cannot be guaranteed, such as in ad-hoc teams where agents may follow different conventions, or when contingent constraints arise of which only some agents are aware. Previous work on human-robot teaming has assumed that the team has a set of shared routines, an assumption that breaks down in these situations. In this work, we leverage epistemic logic to enable agents to understand the discrepancies in each other's beliefs about feasible plans and to dynamically plan their actions, adapting or communicating to resolve those discrepancies. We propose a formalism that extends conditional doxastic logic to describe knowledge bases in order to explicitly represent agents' nested beliefs about the feasible plans and the state of execution. We provide an online execution algorithm based on Monte Carlo Tree Search for the agent to plan its actions, including communication actions that explain the feasibility of plans, announce intent, and ask questions. Finally, we evaluate the success rate and scalability of the algorithm and show that our agent is better equipped to work in teams without the guarantee of a shared mental model.
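The abstract describes an online execution algorithm based on Monte Carlo Tree Search that chooses among physical and communication actions. The sketch below is only a rough illustration of that idea, not the paper's algorithm or formalism: it runs single-depth UCT over a toy two-agent model in which the robot can either execute its intended plan or first communicate to align the human teammate's belief. All action names, reward values, and the belief representation are assumptions made for this sketch.

```python
import math

# Toy model (illustrative assumptions, not the paper's formalism): the robot
# intends plan "A", but the human teammate may believe a different plan "B"
# should be followed. Actions mix plan execution with communication actions
# (announcing intent, explaining infeasibility) that align the human's belief.
ACTIONS = ["execute_A", "announce_intent_A", "explain_infeasible_B"]

def step(state, action):
    """One step of the toy team model: returns (next_state, reward)."""
    robot_plan, human_belief = state
    if action == "announce_intent_A":
        return (robot_plan, "A"), -0.1        # small communication cost
    if action == "explain_infeasible_B":
        return (robot_plan, "A"), -0.2
    # Executing succeeds only if the human shares the robot's plan belief.
    if human_belief == robot_plan:
        return (robot_plan, human_belief), 1.0
    return (robot_plan, human_belief), -1.0   # agents act on conflicting plans

def mcts(state, iterations=2000, c=1.4):
    """Single-depth UCT: estimate each root action's return via short rollouts."""
    visits = {a: 0 for a in ACTIONS}
    totals = {a: 0.0 for a in ACTIONS}
    for t in range(1, iterations + 1):
        # UCB1 selection over root actions (unvisited actions go first).
        action = max(
            ACTIONS,
            key=lambda a: float("inf") if visits[a] == 0
            else totals[a] / visits[a] + c * math.sqrt(math.log(t) / visits[a]),
        )
        next_state, reward = step(state, action)
        _, future = step(next_state, "execute_A")   # one-step rollout
        visits[action] += 1
        totals[action] += reward + future
    return max(ACTIONS, key=lambda a: totals[a] / max(visits[a], 1))

# With the human initially believing plan "B", communicating first wins.
print(mcts(("A", "B")))   # -> "announce_intent_A"
```

In this toy model the communication actions carry a small cost but remove the risk of the teammates acting on conflicting plans, so the search prefers announcing intent before executing -- a simplified stand-in for the trade-off the paper's planner reasons about using nested beliefs.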