
Latest publications from Autonomous Robots

Dynamic task allocation approaches for coordinated exploration of Subterranean environments
IF 3.5 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-11-23 · DOI: 10.1007/s10514-023-10142-4
Matthew O’Brien, Jason Williams, Shengkang Chen, Alex Pitt, Ronald Arkin, Navinda Kottege

This paper presents the methods used by team CSIRO Data61 for multi-agent coordination and exploration in the DARPA Subterranean (SubT) Challenge. The SubT competition involved a single operator sending teams of robots to rapidly explore underground environments with severe navigation and communication challenges. Coordination was framed as a multi-robot task allocation (MRTA) problem to allow for a seamless integration of exploration with other required tasks. Methods for extending a consensus-based task allocation approach for an online and highly dynamic mission are discussed. Exploration tasks were generated from frontiers in a map of traversable space, and graph-based heuristics applied to guide the selection of exploration tasks. Results from simulation, field testing, and the final competition are presented. Team CSIRO Data61 tied for most points scored and achieved second place during the final SubT event.

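The allocation machinery the abstract refers to can be pictured with a small sketch: robots bid on frontier-derived exploration tasks by travel cost, and each task goes to the cheapest bidder. The robot names, coordinates, straight-line cost, and single-round greedy auction below are illustrative assumptions, not the consensus-based allocator used by team CSIRO Data61.

```python
# Minimal sketch of auction-style multi-robot task allocation over frontier
# tasks, in the spirit of the consensus-based approach described in the
# abstract. All names, costs, and the single-round greedy auction are
# illustrative assumptions, not the CSIRO Data61 implementation.
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class Task:
    name: str
    x: float
    y: float

@dataclass
class Robot:
    name: str
    x: float
    y: float

def travel_cost(robot: Robot, task: Task) -> float:
    # Straight-line distance as a stand-in for a traversability-aware path cost.
    return math.hypot(robot.x - task.x, robot.y - task.y)

def allocate(robots: list[Robot], tasks: list[Task]) -> dict[str, str]:
    """Greedy single-round auction: each task goes to the free robot with the
    lowest bid (travel cost), at most one task per robot."""
    assignment: dict[str, str] = {}
    free_robots = {r.name: r for r in robots}
    for task in sorted(tasks, key=lambda t: t.name):
        if not free_robots:
            break
        bids = {name: travel_cost(r, task) for name, r in free_robots.items()}
        winner = min(bids, key=bids.get)
        assignment[task.name] = winner
        del free_robots[winner]
    return assignment

if __name__ == "__main__":
    robots = [Robot("r1", 0, 0), Robot("r2", 10, 0)]
    tasks = [Task("frontier_a", 1, 1), Task("frontier_b", 9, 2)]
    print(allocate(robots, tasks))  # {'frontier_a': 'r1', 'frontier_b': 'r2'}
```

A consensus-based scheme would additionally exchange and reconcile bids over an unreliable network, which this sketch omits.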
{"title":"Dynamic task allocation approaches for coordinated exploration of Subterranean environments","authors":"Matthew O’Brien,&nbsp;Jason Williams,&nbsp;Shengkang Chen,&nbsp;Alex Pitt,&nbsp;Ronald Arkin,&nbsp;Navinda Kottege","doi":"10.1007/s10514-023-10142-4","DOIUrl":"10.1007/s10514-023-10142-4","url":null,"abstract":"<div><p>This paper presents the methods used by team CSIRO Data61 for multi-agent coordination and exploration in the DARPA Subterranean (SubT) Challenge. The SubT competition involved a single operator sending teams of robots to rapidly explore underground environments with severe navigation and communication challenges. Coordination was framed as a multi-robot task allocation (MRTA) problem to allow for a seamless integration of exploration with other required tasks. Methods for extending a consensus-based task allocation approach for an online and highly dynamic mission are discussed. Exploration tasks were generated from frontiers in a map of traversable space, and graph-based heuristics applied to guide the selection of exploration tasks. Results from simulation, field testing, and the final competition are presented. Team CSIRO Data61 tied for most points scored and achieved second place during the final SubT event.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 8","pages":"1559 - 1577"},"PeriodicalIF":3.5,"publicationDate":"2023-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138473082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
AuRo special issue on large language models in robotics guest editorial
IF 3.5 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-11-17 · DOI: 10.1007/s10514-023-10153-1
Autonomous Robots 47(8): 979–980
Citations: 0
TidyBot: personalized robot assistance with large language models
IF 3.5 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-11-16 · DOI: 10.1007/s10514-023-10139-z
Jimmy Wu, Rika Antonova, Adam Kan, Marion Lepert, Andy Zeng, Shuran Song, Jeannette Bohg, Szymon Rusinkiewicz, Thomas Funkhouser

For a robot to personalize physical assistance effectively, it must learn user preferences that can be generally reapplied to future scenarios. In this work, we investigate personalization of household cleanup with robots that can tidy up rooms by picking up objects and putting them away. A key challenge is determining the proper place to put each object, as people’s preferences can vary greatly depending on personal taste or cultural background. For instance, one person may prefer storing shirts in the drawer, while another may prefer them on the shelf. We aim to build systems that can learn such preferences from just a handful of examples via prior interactions with a particular person. We show that robots can combine language-based planning and perception with the few-shot summarization capabilities of large language models to infer generalized user preferences that are broadly applicable to future interactions. This approach enables fast adaptation and achieves 91.2% accuracy on unseen objects in our benchmark dataset. We also demonstrate our approach on a real-world mobile manipulator called TidyBot, which successfully puts away 85.0% of objects in real-world test scenarios.

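As a rough illustration of the few-shot summarization pattern described above, the sketch below constructs a prompt from a handful of observed object-to-receptacle placements and leaves a placeholder for the LLM call; the prompt wording, the `query_llm` stub, and the example preferences are assumptions, not TidyBot's actual prompts or pipeline.

```python
# Minimal sketch of the few-shot preference-summarization pattern from the
# abstract: a few observed (object -> receptacle) placements are summarized by
# an LLM into a general rule, which is then applied to an unseen object.
# `query_llm` is a placeholder for whatever LLM backend is used.

def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def build_summary_prompt(examples: list[tuple[str, str]]) -> str:
    lines = [f"{obj} -> {place}" for obj, place in examples]
    return (
        "Observed tidying preferences:\n"
        + "\n".join(lines)
        + "\nSummarize these preferences as one general rule."
    )

def build_placement_prompt(rule: str, new_object: str) -> str:
    return f"Rule: {rule}\nWhere should '{new_object}' go? Answer with a receptacle."

examples = [("white shirt", "drawer"), ("blue shirt", "drawer"), ("coat", "closet")]
summary_prompt = build_summary_prompt(examples)
# rule = query_llm(summary_prompt)
# placement = query_llm(build_placement_prompt(rule, "red sweater"))
print(summary_prompt)
```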
{"title":"TidyBot: personalized robot assistance with large language models","authors":"Jimmy Wu,&nbsp;Rika Antonova,&nbsp;Adam Kan,&nbsp;Marion Lepert,&nbsp;Andy Zeng,&nbsp;Shuran Song,&nbsp;Jeannette Bohg,&nbsp;Szymon Rusinkiewicz,&nbsp;Thomas Funkhouser","doi":"10.1007/s10514-023-10139-z","DOIUrl":"10.1007/s10514-023-10139-z","url":null,"abstract":"<div><p>For a robot to personalize physical assistance effectively, it must learn user preferences that can be generally reapplied to future scenarios. In this work, we investigate personalization of household cleanup with robots that can tidy up rooms by picking up objects and putting them away. A key challenge is determining the proper place to put each object, as people’s preferences can vary greatly depending on personal taste or cultural background. For instance, one person may prefer storing shirts in the drawer, while another may prefer them on the shelf. We aim to build systems that can learn such preferences from just a handful of examples via prior interactions with a particular person. We show that robots can combine language-based planning and perception with the few-shot summarization capabilities of large language models to infer generalized user preferences that are broadly applicable to future interactions. This approach enables fast adaptation and achieves 91.2% accuracy on unseen objects in our benchmark dataset. We also demonstrate our approach on a real-world mobile manipulator called TidyBot, which successfully puts away 85.0% of objects in real-world test scenarios.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 8","pages":"1087 - 1102"},"PeriodicalIF":3.5,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138473086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 68
Learning to summarize and answer questions about a virtual robot’s past actions
IF 3.5 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-11-16 · DOI: 10.1007/s10514-023-10134-4
Chad DeChant, Iretiayo Akinola, Daniel Bauer

When robots perform long action sequences, users will want to easily and reliably find out what they have done. We therefore demonstrate the task of learning to summarize and answer questions about a robot agent’s past actions using natural language alone. A single system with a large language model at its core is trained to both summarize and answer questions about action sequences given ego-centric video frames of a virtual robot and a question prompt. To enable training of question answering, we develop a method to automatically generate English-language questions and answers about objects, actions, and the temporal order in which actions occurred during episodes of robot action in the virtual environment. Training one model to both summarize and answer questions enables zero-shot transfer of representations of objects learned through question answering to improved action summarization.

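One way to picture the automatic question generation described above is templated QA over a logged action episode; the episode format and question templates in the sketch are illustrative assumptions, not the paper's generation procedure.

```python
# Minimal sketch of automatically generating question–answer pairs about a
# robot's action episode (objects, actions, and temporal order), loosely along
# the lines the abstract describes. The episode format and templates are
# illustrative assumptions only.

episode = [  # ordered (action, object) pairs logged during one episode
    ("picked up", "apple"),
    ("placed", "apple"),
    ("picked up", "cup"),
]

def generate_qa(episode):
    qa = []
    for step, (action, obj) in enumerate(episode, start=1):
        qa.append((f"What did the robot interact with at step {step}?", obj))
        qa.append((f"What did the robot do at step {step}?", f"{action} the {obj}"))
    first_obj, last_obj = episode[0][1], episode[-1][1]
    if first_obj != last_obj:
        qa.append((
            f"Did the robot interact with the {first_obj} before the {last_obj}?",
            "yes",
        ))
    return qa

for question, answer in generate_qa(episode):
    print(question, "->", answer)
```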
{"title":"Learning to summarize and answer questions about a virtual robot’s past actions","authors":"Chad DeChant,&nbsp;Iretiayo Akinola,&nbsp;Daniel Bauer","doi":"10.1007/s10514-023-10134-4","DOIUrl":"10.1007/s10514-023-10134-4","url":null,"abstract":"<div><p>When robots perform long action sequences, users will want to easily and reliably find out what they have done. We therefore demonstrate the task of learning to summarize and answer questions about a robot agent’s past actions using natural language alone. A single system with a large language model at its core is trained to both summarize and answer questions about action sequences given ego-centric video frames of a virtual robot and a question prompt. To enable training of question answering, we develop a method to automatically generate English-language questions and answers about objects, actions, and the temporal order in which actions occurred during episodes of robot action in the virtual environment. Training one model to both summarize and answer questions enables zero-shot transfer of representations of objects learned through question answering to improved action summarization. \u0000</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 8","pages":"1103 - 1118"},"PeriodicalIF":3.5,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10134-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138473077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Text2Motion: from natural language instructions to feasible plans
IF 3.5 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-11-14 · DOI: 10.1007/s10514-023-10131-7
Kevin Lin, Christopher Agia, Toki Migimatsu, Marco Pavone, Jeannette Bohg

We propose Text2Motion, a language-based planning framework enabling robots to solve sequential manipulation tasks that require long-horizon reasoning. Given a natural language instruction, our framework constructs both a task- and motion-level plan that is verified to reach inferred symbolic goals. Text2Motion uses feasibility heuristics encoded in Q-functions of a library of skills to guide task planning with Large Language Models. Whereas previous language-based planners only consider the feasibility of individual skills, Text2Motion actively resolves geometric dependencies spanning skill sequences by performing geometric feasibility planning during its search. We evaluate our method on a suite of problems that require long-horizon reasoning, interpretation of abstract goals, and handling of partial affordance perception. Our experiments show that Text2Motion can solve these challenging problems with a success rate of 82%, while prior state-of-the-art language-based planning methods only achieve 13%. Text2Motion thus provides promising generalization characteristics to semantically diverse sequential manipulation tasks with geometric dependencies between skills. Qualitative results are made available at https://sites.google.com/stanford.edu/text2motion.

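The role of feasibility heuristics can be pictured as scoring candidate skill sequences by the product of per-skill feasibility estimates (stand-ins for Q-function values) and keeping the best-scoring sequence; the skills, values, and scoring rule below are illustrative assumptions, not Text2Motion's planner.

```python
# Minimal sketch of feasibility-guided skill-sequence scoring, in the spirit of
# using learned Q-functions as feasibility heuristics during task planning.
# The skills and feasibility values are hypothetical stand-ins.

# Hypothetical per-skill feasibility estimates in the current scene.
feasibility = {"pick(box)": 0.9, "place(box, shelf)": 0.7, "push(box)": 0.2}

def sequence_score(skills: list[str]) -> float:
    # Treat per-skill feasibilities as independent and multiply them.
    score = 1.0
    for skill in skills:
        score *= feasibility.get(skill, 0.0)
    return score

candidates = [
    ["pick(box)", "place(box, shelf)"],
    ["push(box)", "place(box, shelf)"],
]
best = max(candidates, key=sequence_score)
print(best, sequence_score(best))  # ['pick(box)', 'place(box, shelf)'] 0.63
```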
{"title":"Text2Motion: from natural language instructions to feasible plans","authors":"Kevin Lin,&nbsp;Christopher Agia,&nbsp;Toki Migimatsu,&nbsp;Marco Pavone,&nbsp;Jeannette Bohg","doi":"10.1007/s10514-023-10131-7","DOIUrl":"10.1007/s10514-023-10131-7","url":null,"abstract":"<div><p>We propose Text2Motion, a language-based planning framework enabling robots to solve sequential manipulation tasks that require long-horizon reasoning. Given a natural language instruction, our framework constructs both a task- and motion-level plan that is verified to reach inferred symbolic goals. Text2Motion uses feasibility heuristics encoded in Q-functions of a library of skills to guide task planning with Large Language Models. Whereas previous language-based planners only consider the feasibility of individual skills, Text2Motion actively resolves geometric dependencies spanning skill sequences by performing geometric feasibility planning during its search. We evaluate our method on a suite of problems that require long-horizon reasoning, interpretation of abstract goals, and handling of partial affordance perception. Our experiments show that Text2Motion can solve these challenging problems with a success rate of 82%, while prior state-of-the-art language-based planning methods only achieve 13%. Text2Motion thus provides promising generalization characteristics to semantically diverse sequential manipulation tasks with geometric dependencies between skills. Qualitative results are made available at https://sites.google.com/stanford.edu/text2motion.\u0000</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 8","pages":"1345 - 1365"},"PeriodicalIF":3.5,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134954182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 73
Correction: Efficiently exploring for human robot interaction: partially observable Poisson processes
IF 3.5 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-11-11 · DOI: 10.1007/s10514-023-10152-2
Ferdian Jovan, Milan Tomy, Nick Hawes, Jeremy Wyatt
{"title":"Correction: Efficiently exploring for human robot interaction: partially observable Poisson processes","authors":"Ferdian Jovan,&nbsp;Milan Tomy,&nbsp;Nick Hawes,&nbsp;Jeremy Wyatt","doi":"10.1007/s10514-023-10152-2","DOIUrl":"10.1007/s10514-023-10152-2","url":null,"abstract":"","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 8","pages":"1593 - 1593"},"PeriodicalIF":3.5,"publicationDate":"2023-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10152-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135042744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Editor’s note - Special issue on Robot Swarms in the Real World: from Design to Deployment
IF 3.5 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-11-09 · DOI: 10.1007/s10514-023-10151-3
Autonomous Robots 47(7): 831
Citations: 0
SpaTiaL: monitoring and planning of robotic tasks using spatio-temporal logic specifications
IF 3.5 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-11-03 · DOI: 10.1007/s10514-023-10145-1
Christian Pek, Georg Friedrich Schuppe, Francesco Esposito, Jana Tumova, Danica Kragic

Many tasks require robots to manipulate objects while satisfying a complex interplay of spatial and temporal constraints. For instance, a table-setting robot first needs to place a mug and then fill it with coffee, while satisfying spatial relations such as forks being placed to the left of plates. We propose the spatio-temporal framework SpaTiaL that unifies the specification, monitoring, and planning of object-oriented robotic tasks in a robot-agnostic fashion. SpaTiaL is able to specify diverse spatial relations between objects and temporal task patterns. Our experiments with recorded data, simulations, and real robots demonstrate how SpaTiaL provides real-time monitoring and facilitates online planning. SpaTiaL is open source and easily expandable to new object relations and robotic applications.

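A spatial relation such as "fork left of plate" can be monitored as a signed margin over object poses, loosely in the spirit of the object-centric relations described above; the 2D poses and margin function below are illustrative assumptions and do not reflect the SpaTiaL library's API.

```python
# Minimal sketch of monitoring a single spatial relation ("fork left of plate")
# as a real-valued satisfaction margin. Poses and the margin rule are
# illustrative assumptions, not the SpaTiaL library's API.
from dataclasses import dataclass

@dataclass
class Object2D:
    name: str
    x: float
    y: float

def left_of(a: Object2D, b: Object2D) -> float:
    """Positive when `a` is left of `b` (smaller x), negative otherwise;
    the magnitude says by how much the relation is satisfied or violated."""
    return b.x - a.x

fork = Object2D("fork", 0.10, 0.40)
plate = Object2D("plate", 0.30, 0.40)

margin = left_of(fork, plate)
print("fork left of plate:", margin > 0, "margin:", margin)
```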
{"title":"SpaTiaL: monitoring and planning of robotic tasks using spatio-temporal logic specifications","authors":"Christian Pek,&nbsp;Georg Friedrich Schuppe,&nbsp;Francesco Esposito,&nbsp;Jana Tumova,&nbsp;Danica Kragic","doi":"10.1007/s10514-023-10145-1","DOIUrl":"10.1007/s10514-023-10145-1","url":null,"abstract":"<div><p>Many tasks require robots to manipulate objects while satisfying a complex interplay of spatial and temporal constraints. For instance, a table setting robot first needs to place a mug and then fill it with coffee, while satisfying spatial relations such as forks need to placed left of plates. We propose the spatio-temporal framework SpaTiaL that unifies the specification, monitoring, and planning of object-oriented robotic tasks in a robot-agnostic fashion. SpaTiaL is able to specify diverse spatial relations between objects and temporal task patterns. Our experiments with recorded data, simulations, and real robots demonstrate how SpaTiaL provides real-time monitoring and facilitates online planning. SpaTiaL is open source and easily expandable to new object relations and robotic applications.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 8","pages":"1439 - 1462"},"PeriodicalIF":3.5,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10145-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135820615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-robot geometric task-and-motion planning for collaborative manipulation tasks
IF 3.5 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-10-30 · DOI: 10.1007/s10514-023-10148-y
Hejia Zhang, Shao-Hung Chan, Jie Zhong, Jiaoyang Li, Peter Kolapo, Sven Koenig, Zach Agioutantis, Steven Schafrik, Stefanos Nikolaidis

We address multi-robot geometric task-and-motion planning (MR-GTAMP) problems in synchronous, monotone setups. The goal of the MR-GTAMP problem is to move objects with multiple robots to goal regions in the presence of other movable objects. We focus on collaborative manipulation tasks where the robots have to adopt intelligent collaboration strategies to be successful and effective, i.e., decide which robot should move which objects to which positions, and perform collaborative actions, such as handovers. To endow robots with these collaboration capabilities, we propose to first collect occlusion and reachability information for each robot by calling motion-planning algorithms. We then propose a method that uses the collected information to build a graph structure which captures the precedence of the manipulations of different objects and supports the implementation of a mixed-integer program to guide the search for highly effective collaborative task-and-motion plans. The search process for collaborative task-and-motion plans is based on a Monte-Carlo Tree Search (MCTS) exploration strategy to achieve exploration-exploitation balance. We evaluate our framework in two challenging MR-GTAMP domains and show that it outperforms two state-of-the-art baselines with respect to the planning time, the resulting plan length and the number of objects moved. We also show that our framework can be applied to underground mining operations where a robotic arm needs to coordinate with an autonomous roof bolter. We demonstrate plan execution in two roof-bolting scenarios both in simulation and on robots.

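The exploration-exploitation balance mentioned above is typically obtained with a UCB-style selection rule inside MCTS; the sketch below shows plain UCB1 over hypothetical child-node statistics and is not the paper's planner.

```python
# Minimal sketch of the UCB1 rule underlying the exploration–exploitation
# balance in Monte-Carlo Tree Search, as referenced in the abstract. The node
# statistics and exploration constant are illustrative assumptions.
import math

def ucb1(total_reward: float, visits: int, parent_visits: int, c: float = 1.4) -> float:
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Hypothetical child plans with (total reward, visit count) statistics.
children = {"handover_plan": (3.0, 4), "single_robot_plan": (1.0, 2)}
parent_visits = sum(v for _, v in children.values())
best = max(children, key=lambda k: ucb1(*children[k], parent_visits))
print(best)  # the less-visited child can win despite a lower mean reward
```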
{"title":"Multi-robot geometric task-and-motion planning for collaborative manipulation tasks","authors":"Hejia Zhang,&nbsp;Shao-Hung Chan,&nbsp;Jie Zhong,&nbsp;Jiaoyang Li,&nbsp;Peter Kolapo,&nbsp;Sven Koenig,&nbsp;Zach Agioutantis,&nbsp;Steven Schafrik,&nbsp;Stefanos Nikolaidis","doi":"10.1007/s10514-023-10148-y","DOIUrl":"10.1007/s10514-023-10148-y","url":null,"abstract":"<div><p>We address multi-robot geometric task-and-motion planning (MR-GTAMP) problems in <i>synchronous</i>, <i>monotone</i> setups. The goal of the MR-GTAMP problem is to move objects with multiple robots to goal regions in the presence of other movable objects. We focus on collaborative manipulation tasks where the robots have to adopt intelligent collaboration strategies to be successful and effective, i.e., decide which robot should move which objects to which positions, and perform collaborative actions, such as handovers. To endow robots with these collaboration capabilities, we propose to first collect occlusion and reachability information for each robot by calling motion-planning algorithms. We then propose a method that uses the collected information to build a graph structure which captures the precedence of the manipulations of different objects and supports the implementation of a mixed-integer program to guide the search for highly effective collaborative task-and-motion plans. The search process for collaborative task-and-motion plans is based on a Monte-Carlo Tree Search (MCTS) exploration strategy to achieve exploration-exploitation balance. We evaluate our framework in two challenging MR-GTAMP domains and show that it outperforms two state-of-the-art baselines with respect to the planning time, the resulting plan length and the number of objects moved. We also show that our framework can be applied to underground mining operations where a robotic arm needs to coordinate with an autonomous roof bolter. We demonstrate plan execution in two roof-bolting scenarios both in simulation and on robots.\u0000</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 8","pages":"1537 - 1558"},"PeriodicalIF":3.5,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10148-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136022819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unsupervised dissimilarity-based fault detection method for autonomous mobile robots
IF 3.5 · CAS Tier 3, Computer Science · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2023-10-28 · DOI: 10.1007/s10514-023-10144-2
Mahmut Kasap, Metin Yılmaz, Eyüp Çinar, Ahmet Yazıcı

Autonomous robots are one of the critical components in modern manufacturing systems. For this reason, the uninterrupted operation of robots in manufacturing is important for the sustainability of autonomy. Detecting possible fault symptoms that may cause failures within a work environment will help to eliminate interrupted operations. When supervised learning methods are considered, obtaining and storing labeled, historical training data in a manufacturing environment with faults is a challenging task. In addition, sensors in mobile devices such as robots are exposed to different noisy external conditions in production environments, affecting data labels and fault mapping. Furthermore, relying on data from a single sensor for fault detection often causes false alarms in equipment monitoring. Our study takes these requirements into consideration and proposes a new unsupervised machine-learning algorithm to detect possible operational faults encountered by autonomous mobile robots. The method fuses multi-sensor information at the decision level by voting to enhance decision reliability. The proposed technique relies on dissimilarity-based sensor data segmentation with adaptive threshold control. It has been tested experimentally on an autonomous mobile robot. The experimental results show that the proposed method is effective for detecting operational anomalies. Furthermore, the proposed voting mechanism is also capable of eliminating the false positives that arise when only a single source of information is utilized.

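The decision-level voting idea can be sketched as follows: each sensor flags a window whose dissimilarity to a reference exceeds an adaptive mean-plus-k-sigma threshold, and a majority vote across sensors decides whether to report a fault. The dissimilarity measure, threshold rule, and toy data below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of dissimilarity-based anomaly flags per sensor with an
# adaptive threshold, fused by majority vote at the decision level. All
# measures, thresholds, and data are illustrative assumptions.
import statistics

def window_dissimilarity(window, reference):
    # Mean absolute difference between a window and a reference window.
    return sum(abs(a - b) for a, b in zip(window, reference)) / len(window)

def sensor_flag(history, current, k=3.0):
    """Flag `current` as anomalous if its dissimilarity to the first history
    window exceeds mean + k*std of the historical dissimilarities."""
    reference = history[0]
    scores = [window_dissimilarity(w, reference) for w in history[1:]]
    threshold = statistics.mean(scores) + k * statistics.pstdev(scores)
    return window_dissimilarity(current, reference) > threshold

def majority_vote(flags):
    return sum(flags) > len(flags) / 2

# Two well-behaved sensors and one outlier: the vote suppresses a
# single-sensor false alarm.
history_a = [[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0]]
history_b = [[1, 1, 1], [1.1, 1, 1], [1, 1.1, 1]]
history_c = [[5, 5, 5], [5.1, 5, 5], [5, 5.1, 5]]
flags = [
    sensor_flag(history_a, [0.05, 0, 0]),
    sensor_flag(history_b, [1.0, 1.05, 1]),
    sensor_flag(history_c, [9.0, 9.0, 9.0]),
]
print("fault detected:", majority_vote(flags))  # False
```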
{"title":"Unsupervised dissimilarity-based fault detection method for autonomous mobile robots","authors":"Mahmut Kasap,&nbsp;Metin Yılmaz,&nbsp;Eyüp Çinar,&nbsp;Ahmet Yazıcı","doi":"10.1007/s10514-023-10144-2","DOIUrl":"10.1007/s10514-023-10144-2","url":null,"abstract":"<div><p>Autonomous robots are one of the critical components in modern manufacturing systems. For this reason, the uninterrupted operation of robots in manufacturing is important for the sustainability of autonomy. Detecting possible fault symptoms that may cause failures within a work environment will help to eliminate interrupted operations. When supervised learning methods are considered, obtaining and storing labeled, historical training data in a manufacturing environment with faults is a challenging task. In addition, sensors in mobile devices such as robots are exposed to different noisy external conditions in production environments affecting data labels and fault mapping. Furthermore, relying on a single sensor data for fault detection often causes false alarms for equipment monitoring. Our study takes requirements into consideration and proposes a new unsupervised machine-learning algorithm to detect possible operational faults encountered by autonomous mobile robots. The method suggests using an ensemble of multi-sensor information fusion at the decision level by voting to enhance decision reliability. The proposed technique relies on dissimilarity-based sensor data segmentation with an adaptive threshold control. It has been tested experimentally on an autonomous mobile robot. The experimental results show that the proposed method is effective for detecting operational anomalies. Furthermore, the proposed voting mechanism is also capable of eliminating false positives in case of a single source of information is utilized.\u0000</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 8","pages":"1503 - 1518"},"PeriodicalIF":3.5,"publicationDate":"2023-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136232753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0