Guided navigation from multiple viewpoints using qualitative spatial reasoning

Spatial Cognition and Computation · Pub Date: 2020-11-03 · DOI: 10.1080/13875868.2020.1857386 · IF 1.6 · JCR Q3 (Psychology, Experimental)
D. Perico, P. Santos, Reinaldo A. C. Bianchi
Citations: 3

Abstract

Navigation is an essential ability for mobile agents to be completely autonomous and able to perform complex actions. However, the problem of navigation for agents with limited (or no) perception of the world, or without a fully defined motion model, has received little attention in AI and Robotics research. One way to tackle this problem is guided navigation, in which other autonomous agents, endowed with perception, combine their distinct viewpoints to infer the localization of a sensory-deprived agent and the commands needed to guide it along a particular path. Because knowledge of the guided agent's physical and perceptual characteristics is limited, this task should be conducted at a level of abstraction that allows a generic motion model and high-level commands applicable by any type of autonomous agent, including humans. The main task considered in this work is, given a group of autonomous agents perceiving their common environment with independent, egocentric, local vision sensors, the development and evaluation of algorithms that produce a set of high-level commands (involving qualitative directions, e.g. move left, go straight ahead) capable of guiding a sensory-deprived robot to a goal location. To accomplish this, the present paper assumes relations from the qualitative spatial reasoning formalism StarVars, whose inference method is also used to build a model of the domain. The paper presents two qualitative-probabilistic algorithms for guided navigation using a particle filter and qualitative spatial relations. In the first algorithm, the particle filter runs on a qualitative representation of the domain, whereas the second algorithm transforms the numerical output of a standard particle filter into qualitative relations that guide a sensory-deprived robot. The proposed methods were evaluated in experiments on a 2D humanoid robot simulator. A proof of concept executing the algorithms on a group of real humanoid robots is also presented. The results demonstrate the success of the guided navigation models proposed in this work.
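The second algorithm's core idea, converting a numeric localization estimate into a qualitative directional command, can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's implementation: the sector labels, the four-sector granularity, and the function name `qualitative_command` are all hypothetical stand-ins for the StarVars relations actually used by the authors.

```python
import math

def qualitative_command(robot_pose, goal, sectors=4):
    """Map a numeric pose estimate (e.g. the weighted mean of a particle
    filter) to a coarse, egocentric command toward the goal.

    Hypothetical sketch: StarVars partitions directions around a point into
    angular sectors of configurable granularity; here we use four sectors
    with ad-hoc English labels.
    """
    x, y, theta = robot_pose          # planar position and heading (rad)
    gx, gy = goal
    # Bearing to the goal in the robot's egocentric frame, wrapped to (-pi, pi].
    bearing = math.atan2(gy - y, gx - x) - theta
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    width = 2 * math.pi / sectors     # angular width of each qualitative sector
    if abs(bearing) <= width / 2:
        return "go straight ahead"
    if abs(bearing) >= math.pi - width / 2:
        return "turn around"
    return "move left" if bearing > 0 else "move right"
```

Because the guided robot receives only the sector label, the same command stream works for any agent with a compatible notion of left, right, and ahead, which is the abstraction the paper argues for.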