Embodied Multi-Agent Task Planning from Ambiguous Instruction

Xinzhu Liu, Xinghang Li, Di Guo, Sinan Tan, Huaping Liu, F. Sun
{"title":"Embodied Multi-Agent Task Planning from Ambiguous Instruction","authors":"Xinzhu Liu, Xinghang Li, Di Guo, Sinan Tan, Huaping Liu, F. Sun","doi":"10.15607/rss.2022.xviii.032","DOIUrl":null,"url":null,"abstract":"—In human-robots collaboration scenarios, a human would give robots an instruction that is intuitive for the human himself to accomplish. However, the instruction given to robots is likely ambiguous for them to understand as some information is implicit in the instruction. Therefore, it is necessary for the robots to jointly reason the operation details and perform the embodied multi-agent task planning given the ambiguous instruction. This problem exhibits significant challenges in both language understanding and dynamic task planning with the perception information. In this work, an embodied multi-agent task planning framework is proposed to utilize external knowledge sources and dynamically perceived visual information to resolve the high-level instructions, and dynamically allocate the decomposed tasks to multiple agents. Furthermore, we utilize the semantic information to perform environment perception and generate sub-goals to achieve the navigation motion. This model effectively bridges the difference between the simulation environment and the physical environment, thus it can be simultaneously applied in both simulation and physical scenarios and avoid the notori- ous sim2real problem. Finally, we build a benchmark dataset to validate the embodied multi-agent task planning problem, which includes three types of high-level instructions in which some target objects are implicit in instructions. We perform the evaluation experiments on the simulation platform and in physical scenarios, demonstrating that the proposed model can achieve promising results for multi-agent collaborative tasks.","PeriodicalId":340265,"journal":{"name":"Robotics: Science and Systems XVIII","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Robotics: Science and Systems XVIII","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.15607/rss.2022.xviii.032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

In human-robot collaboration scenarios, a human gives robots an instruction that is intuitive for the human to accomplish. However, the instruction is likely to be ambiguous to the robots because some of its information is implicit. The robots therefore need to jointly reason about the operation details and perform embodied multi-agent task planning given the ambiguous instruction. This problem poses significant challenges in both language understanding and dynamic task planning with perceptual information. In this work, an embodied multi-agent task planning framework is proposed that uses external knowledge sources and dynamically perceived visual information to resolve high-level instructions, and dynamically allocates the decomposed tasks to multiple agents. Furthermore, we use semantic information to perceive the environment and to generate the sub-goals that drive the navigation motion. The model effectively bridges the gap between the simulation environment and the physical environment, so it can be applied in both simulated and physical scenarios and avoids the notorious sim2real problem. Finally, we build a benchmark dataset for the embodied multi-agent task planning problem, covering three types of high-level instructions in which some target objects are left implicit. Evaluation experiments on the simulation platform and in physical scenarios demonstrate that the proposed model achieves promising results on multi-agent collaborative tasks.
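The abstract describes a pipeline of instruction resolution, task decomposition, and dynamic allocation to multiple agents. As a rough illustration only, the Python sketch below mocks that flow with a toy knowledge base and a greedy nearest-agent allocator; the knowledge entries, object positions, and allocation rule are all invented for this example and are not the paper's actual method.

```python
import math
from dataclasses import dataclass, field

# Toy knowledge base standing in for the paper's external knowledge sources:
# it maps an ambiguous instruction to the target objects left implicit in it.
# These entries are illustrative assumptions, not the paper's dataset.
KNOWLEDGE_BASE = {
    "make coffee": ["mug", "coffee machine"],
    "set the table": ["plate", "fork"],
}

@dataclass
class Agent:
    name: str
    position: tuple[float, float]
    tasks: list = field(default_factory=list)

def resolve_instruction(instruction: str) -> list[str]:
    """Expand an ambiguous high-level instruction into explicit sub-tasks,
    one per implicit target object."""
    return KNOWLEDGE_BASE.get(instruction.lower(), [])

def allocate_tasks(subtasks, agents, object_positions):
    """Greedily assign each sub-task to the agent nearest its target object,
    treating the object's position as that agent's navigation sub-goal."""
    for obj in subtasks:
        goal = object_positions[obj]
        nearest = min(agents, key=lambda a: math.dist(a.position, goal))
        nearest.tasks.append((obj, goal))
        nearest.position = goal  # assume the agent reaches the sub-goal

if __name__ == "__main__":
    agents = [Agent("robot_a", (0.0, 0.0)), Agent("robot_b", (5.0, 5.0))]
    # Object positions would come from semantic perception; fixed here.
    positions = {"mug": (1.0, 1.0), "coffee machine": (4.0, 4.0)}
    subtasks = resolve_instruction("make coffee")
    allocate_tasks(subtasks, agents, positions)
    for agent in agents:
        print(agent.name, "->", agent.tasks)
```

In the paper's framework the sub-goals come from dynamically perceived visual and semantic information rather than a fixed position table; the greedy distance rule above is only a minimal stand-in for the dynamic allocation step.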