{"title":"Planning and acting in dynamic environments: identifying and avoiding dangerous situations","authors":"L. Chrpa, M. Pilát, Jakub Gemrot","doi":"10.1080/0952813X.2021.1938697","DOIUrl":null,"url":null,"abstract":"ABSTRACT In dynamic environments, external events might occur and modify the environment without consent of intelligent agents. Plans of the agents might hence be disrupted and, worse, the agents might end up in dead-end states and no longer be able to achieve their goals. Hence, the agents should monitor the environment during plan execution and if they encounter a dangerous situation they should (reactively) act to escape from it. In this paper, we introduce the notion of dangerous states that the agent might encounter during its plan execution in dynamic environments. We present a method for computing lower bound of dangerousness of a state after applying a sequence of actions. That method is leveraged in identifying situations in which the agent has to start acting to avoid danger. We present two types of such behaviour – purely reactive and proactive (eliminating the source of danger). The introduced concepts for planning with dangerous states are implemented and tested in two scenarios – a simple RPG-like game, called Dark Dungeon, and a platform game inspired by the Perestroika video game. The results show that reasoning with dangerous states achieves better success rate (reaching the goals) than naive planning or rule-based techniques.","PeriodicalId":15677,"journal":{"name":"Journal of Experimental & Theoretical Artificial Intelligence","volume":"10 1","pages":"925 - 948"},"PeriodicalIF":1.7000,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental & Theoretical Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1080/0952813X.2021.1938697","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0
Abstract
ABSTRACT In dynamic environments, external events might occur and modify the environment without the consent of intelligent agents. The agents' plans might hence be disrupted and, worse, the agents might end up in dead-end states and no longer be able to achieve their goals. Hence, the agents should monitor the environment during plan execution, and if they encounter a dangerous situation they should (reactively) act to escape from it. In this paper, we introduce the notion of dangerous states that the agent might encounter during its plan execution in dynamic environments. We present a method for computing a lower bound on the dangerousness of a state after applying a sequence of actions. That method is leveraged to identify situations in which the agent has to start acting to avoid danger. We present two types of such behaviour – purely reactive and proactive (eliminating the source of danger). The introduced concepts for planning with dangerous states are implemented and tested in two scenarios – a simple RPG-like game, called Dark Dungeon, and a platform game inspired by the Perestroika video game. The results show that reasoning with dangerous states achieves a better success rate (reaching the goals) than naive planning or rule-based techniques.
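The abstract does not spell out how the lower bound on dangerousness is computed, so the sketch below is only an illustration of the general idea, not the authors' method. It assumes a STRIPS-like state/action model, a hypothetical per-state `danger` estimate, and a hypothetical threshold that triggers reactive behaviour; all names are my own stand-ins.

```python
# Illustrative sketch only: the state model, the danger() estimate, and the
# threshold are hypothetical stand-ins, not the method described in the paper.
from dataclasses import dataclass
from typing import Callable, FrozenSet, List

State = FrozenSet[str]  # a state as a set of true propositions


@dataclass(frozen=True)
class Action:
    name: str
    preconditions: FrozenSet[str]
    add: FrozenSet[str]
    delete: FrozenSet[str]

    def applicable(self, state: State) -> bool:
        return self.preconditions <= state

    def apply(self, state: State) -> State:
        return (state - self.delete) | self.add


def dangerousness_lower_bound(
    state: State,
    plan: List[Action],
    danger: Callable[[State], float],
) -> float:
    """Estimate a lower bound on the danger encountered while executing `plan`
    from `state`, by taking the maximum of the (hypothetical) per-state danger
    estimate over the states the plan visits. This is a lower bound under the
    assumption that external events can only add danger, never remove it."""
    bound = danger(state)
    current = state
    for action in plan:
        if not action.applicable(current):
            break  # plan no longer executable from here; stop estimating
        current = action.apply(current)
        bound = max(bound, danger(current))
    return bound


if __name__ == "__main__":
    # Trigger (purely reactive) escape behaviour when the bound crosses a threshold.
    danger = lambda s: 1.0 if "monster-adjacent" in s else 0.0
    start: State = frozenset({"at-corridor"})
    move = Action(
        name="move-east",
        preconditions=frozenset({"at-corridor"}),
        add=frozenset({"at-hall", "monster-adjacent"}),
        delete=frozenset({"at-corridor"}),
    )
    if dangerousness_lower_bound(start, [move], danger) > 0.5:
        print("danger ahead: switch to reactive escape behaviour")
```

In this toy example the bound flags the plan before execution reaches the dangerous state, which is the point of monitoring during plan execution; the paper's proactive variant would instead plan to eliminate the source of the danger.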
Journal description:
Journal of Experimental & Theoretical Artificial Intelligence (JETAI) is a world-leading journal dedicated to publishing high-quality, rigorously reviewed, original papers in artificial intelligence (AI) research.
The journal features work in all subfields of AI and accepts both theoretical and applied research. Topics covered include, but are not limited to, the following:
• cognitive science
• games
• learning
• knowledge representation
• memory and neural system modelling
• perception
• problem-solving