{"title":"Guided navigation from multiple viewpoints using qualitative spatial reasoning","authors":"D. Perico, P. Santos, Reinaldo A. C. Bianchi","doi":"10.1080/13875868.2020.1857386","DOIUrl":null,"url":null,"abstract":"ABSTRACT Navigation is an essential ability for mobile agents to be completely autonomous and able to perform complex actions. However, the problem of navigation for agents with limited (or no) perception of the world, or devoid of a fully defined motion model, has received little attention from research in AI and Robotics. One way to tackle this problem is to use guided navigation, in which other autonomous agents, endowed with perception, can combine their distinct viewpoints to infer the localization and the appropriate commands to guide a sensory deprived agent through a particular path. Due to the limited knowledge about the physical and perceptual characteristics of the guided agent, this task should be conducted on a level of abstraction allowing the use of a generic motion model, and high-level commands, that can be applied by any type of autonomous agents, including humans. The main task considered in this work is, given a group of autonomous agents perceiving their common environment with their independent, egocentric and local vision sensors, the development and evaluation of algorithms capable of producing a set of high-level commands (involving qualitative directions: e.g. move left, go straight ahead) capable of guiding a sensory deprived robot to a goal location. In order to accomplish this, the present paper assumes relations from the qualitative spatial reasoning formalism called StarVars, whose inference method is also used to build a model of the domain. This paper presents two qualitative-probabilistic algorithms for guided navigation using a particle filter and qualitative spatial relations. In the first algorithm, the particle filter is run upon a qualitative representation of the domain, whereas the second algorithm transforms the numerical output of a standard particle filter into qualitative relations to guide a sensory deprived robot. The proposed methods were evaluated with experiments carried out on a 2D humanoid robot simulator. A proof of concept executing the algorithms on a group of real humanoid robots is also presented. The results obtained demonstrate the success of the guided navigation models proposed in this work.","PeriodicalId":46199,"journal":{"name":"Spatial Cognition and Computation","volume":"1 1","pages":"143 - 172"},"PeriodicalIF":1.6000,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Spatial Cognition and Computation","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1080/13875868.2020.1857386","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 3
Abstract
Navigation is an essential ability for mobile agents to be completely autonomous and able to perform complex actions. However, the problem of navigation for agents with limited (or no) perception of the world, or devoid of a fully defined motion model, has received little attention from research in AI and Robotics. One way to tackle this problem is to use guided navigation, in which other autonomous agents, endowed with perception, combine their distinct viewpoints to infer the location of a sensory-deprived agent and the appropriate commands to guide it along a particular path. Given the limited knowledge about the physical and perceptual characteristics of the guided agent, this task should be conducted at a level of abstraction that allows a generic motion model and high-level commands applicable to any type of autonomous agent, including humans. The main task considered in this work is, given a group of autonomous agents perceiving their common environment through independent, egocentric and local vision sensors, to develop and evaluate algorithms that produce a set of high-level commands (involving qualitative directions, e.g. move left, go straight ahead) capable of guiding a sensory-deprived robot to a goal location. To accomplish this, the present paper adopts relations from the qualitative spatial reasoning formalism called StarVars, whose inference method is also used to build a model of the domain. This paper presents two qualitative-probabilistic algorithms for guided navigation using a particle filter and qualitative spatial relations. In the first algorithm, the particle filter is run on a qualitative representation of the domain, whereas the second algorithm transforms the numerical output of a standard particle filter into qualitative relations to guide a sensory-deprived robot. The proposed methods were evaluated with experiments carried out on a 2D humanoid robot simulator. A proof of concept executing the algorithms on a group of real humanoid robots is also presented. The results obtained demonstrate the success of the guided navigation models proposed in this work.
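The core step of the second algorithm described above, turning the numerical output of a standard particle filter into a qualitative relation and a high-level command, can be illustrated roughly as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: the fixed 8-sector partition, the sector-to-command table, and the function `qualitative_command` are illustrative stand-ins for the StarVars relations and inference method used in the paper.

```python
import math

# Hypothetical mapping from coarse, STAR-like direction sectors (8 sectors of
# 45 degrees, sector 0 centred on "straight ahead") to high-level commands.
SECTOR_COMMANDS = [
    "go straight ahead",   # roughly in front
    "move left",           # front-left
    "move left",           # left
    "turn around",         # back-left
    "turn around",         # behind
    "turn around",         # back-right
    "move right",          # right
    "move right",          # front-right
]

def qualitative_command(robot_pose, goal_xy, sectors=8):
    """Map a numerical pose estimate (e.g. the mean of a particle filter) to a
    high-level command via a qualitative direction sector.

    robot_pose: (x, y, theta) of the sensory-deprived robot, theta in radians.
    goal_xy:    (x, y) of the goal location.
    """
    x, y, theta = robot_pose
    gx, gy = goal_xy
    # Bearing of the goal relative to the robot's heading, wrapped to [0, 2*pi).
    bearing = (math.atan2(gy - y, gx - x) - theta) % (2.0 * math.pi)
    # Shift by half a sector so sector 0 is centred on the heading direction.
    width = 2.0 * math.pi / sectors
    index = int((bearing + width / 2.0) // width) % sectors
    return SECTOR_COMMANDS[index]

# Example: robot at the origin facing along x, goal slightly to its front-left.
print(qualitative_command((0.0, 0.0, 0.0), (2.0, 1.0)))  # -> "move left"
```

In the paper's setting the pose estimate would come from the guiding agents' combined observations rather than from the guided robot itself; the sketch only shows how a metric estimate collapses into a qualitative direction and, from there, into one of the high-level commands mentioned in the abstract.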