Planning and acting in dynamic environments: identifying and avoiding dangerous situations

Journal of Experimental & Theoretical Artificial Intelligence · IF 1.7 · CAS Zone 4 (Computer Science) · JCR Q3 (Computer Science, Artificial Intelligence) · Pub Date: 2021-06-30 · DOI: 10.1080/0952813X.2021.1938697 · pp. 925-948
L. Chrpa, M. Pilát, Jakub Gemrot
{"title":"Planning and acting in dynamic environments: identifying and avoiding dangerous situations","authors":"L. Chrpa, M. Pilát, Jakub Gemrot","doi":"10.1080/0952813X.2021.1938697","DOIUrl":null,"url":null,"abstract":"ABSTRACT In dynamic environments, external events might occur and modify the environment without consent of intelligent agents. Plans of the agents might hence be disrupted and, worse, the agents might end up in dead-end states and no longer be able to achieve their goals. Hence, the agents should monitor the environment during plan execution and if they encounter a dangerous situation they should (reactively) act to escape from it. In this paper, we introduce the notion of dangerous states that the agent might encounter during its plan execution in dynamic environments. We present a method for computing lower bound of dangerousness of a state after applying a sequence of actions. That method is leveraged in identifying situations in which the agent has to start acting to avoid danger. We present two types of such behaviour – purely reactive and proactive (eliminating the source of danger). The introduced concepts for planning with dangerous states are implemented and tested in two scenarios – a simple RPG-like game, called Dark Dungeon, and a platform game inspired by the Perestroika video game. The results show that reasoning with dangerous states achieves better success rate (reaching the goals) than naive planning or rule-based techniques.","PeriodicalId":15677,"journal":{"name":"Journal of Experimental & Theoretical Artificial Intelligence","volume":"10 1","pages":"925 - 948"},"PeriodicalIF":1.7000,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental & Theoretical Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1080/0952813X.2021.1938697","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

In dynamic environments, external events might occur and modify the environment without the consent of intelligent agents. The agents' plans might hence be disrupted and, worse, the agents might end up in dead-end states and no longer be able to achieve their goals. Hence, agents should monitor the environment during plan execution and, if they encounter a dangerous situation, (reactively) act to escape from it. In this paper, we introduce the notion of dangerous states that the agent might encounter during plan execution in dynamic environments. We present a method for computing a lower bound on the dangerousness of a state after applying a sequence of actions. That method is leveraged to identify situations in which the agent has to start acting to avoid danger. We present two types of such behaviour: purely reactive and proactive (eliminating the source of danger). The introduced concepts for planning with dangerous states are implemented and tested in two scenarios: a simple RPG-like game called Dark Dungeon, and a platform game inspired by the Perestroika video game. The results show that reasoning with dangerous states achieves a better success rate (reaching the goals) than naive planning or rule-based techniques.
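
The high-level idea described in the abstract can be illustrated with a minimal sketch: the agent executes its plan step by step, checks a lower bound on the dangerousness of the states reachable by the upcoming actions, and interrupts the plan to behave reactively or proactively once that bound crosses a threshold. All names below (`dangerousness_lower_bound`, `react`, `eliminate_danger_source`, the lookahead horizon, the threshold) are hypothetical placeholders chosen for illustration, not the authors' implementation or API.

```python
# Illustrative sketch only: monitoring plan execution against a dangerousness
# lower bound. All names and the decision rule are assumptions, not the
# paper's actual method.
from typing import Callable, List


def execute_with_danger_monitoring(
    state,                                  # current world state
    plan: List,                             # remaining sequence of actions
    apply_action: Callable,                 # (state, action) -> next state
    dangerousness_lower_bound: Callable,    # (state, actions) -> value in [0, 1]
    react: Callable,                        # purely reactive escape behaviour
    eliminate_danger_source: Callable,      # proactive behaviour
    threshold: float = 0.5,
    proactive: bool = True,
):
    """Execute the plan; if the lower bound on dangerousness after the next
    few actions exceeds the threshold, stop following the plan and act to
    avoid (reactive) or remove (proactive) the danger."""
    for i, action in enumerate(plan):
        lookahead = plan[i:i + 3]  # short horizon over the upcoming actions
        if dangerousness_lower_bound(state, lookahead) >= threshold:
            # Danger cannot be ruled out: interrupt the plan and act.
            state = eliminate_danger_source(state) if proactive else react(state)
            return state, plan[i:]  # caller may replan from here
        state = apply_action(state, action)
    return state, []
```

The abstract does not specify the lookahead horizon, the threshold, or how the choice between reactive and proactive behaviour is made; the sketch only mirrors its high-level description.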
Source journal metrics:
CiteScore: 6.10
Self-citation rate: 4.50%
Articles published: 89
Review time: >12 weeks
Journal description: Journal of Experimental & Theoretical Artificial Intelligence (JETAI) is a world-leading journal dedicated to publishing high-quality, rigorously reviewed, original papers in artificial intelligence (AI) research. The journal features work in all subfields of AI research and accepts both theoretical and applied research. Topics covered include, but are not limited to, the following:
• cognitive science
• games
• learning
• knowledge representation
• memory and neural system modelling
• perception
• problem-solving