{"title":"Ancillary mechanism for autonomous decision-making process in asymmetric confrontation: a view from Gomoku","authors":"Chen Han, Xuanyin Wang","doi":"10.1080/0952813X.2022.2067249","DOIUrl":null,"url":null,"abstract":"ABSTRACT This paper investigates how agents learn and perform efficient strategies by trying different actions in asymmetric confrontation setting. Firstly, we use Gomoku as an example to analyse the causes and impacts of asymmetric confrontation: the first mover gains higher power than the second mover. We find that the first mover learns how to attack quickly while it is difficult for the second mover to learn how to defend since it cannot win the first mover and always receives negative rewards. As such, the game is stuck at a deadlock in which the first mover cannot make further advances to learn how to defend, and the second mover learns nothing. Secondly, we propose an ancillary mechanism (AM) to add two principles to the agent’s actions to overcome this difficulty. AM is a guidance for the agents to reduce the learning difficulty and to improve their behavioural quality. To the best of our knowledge, this is the first study to define asymmetric confrontation in reinforcement learning and propose approaches to tackle such problems. In the numerical tests, we first conduct a simple human vs AI experiment to calibrate the learning process in asymmetric confrontation. Then, an experiment of 15*15 Gomoku game by letting two agents (with AM and without AM) compete is applied to check the potential of AM. Results show that adding AM can make both the first and the second movers become stronger in almost the same amount of calculation.","PeriodicalId":15677,"journal":{"name":"Journal of Experimental & Theoretical Artificial Intelligence","volume":"94 1","pages":"1141 - 1159"},"PeriodicalIF":1.7000,"publicationDate":"2022-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental & Theoretical Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1080/0952813X.2022.2067249","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
This paper investigates how agents learn and perform efficient strategies by trying different actions in an asymmetric confrontation setting. Firstly, we use Gomoku as an example to analyse the causes and impacts of asymmetric confrontation: the first mover gains greater power than the second mover. We find that the first mover quickly learns how to attack, whereas the second mover struggles to learn how to defend, since it cannot beat the first mover and always receives negative rewards. As such, the game becomes stuck in a deadlock in which the first mover cannot make further advances to learn how to defend, and the second mover learns nothing. Secondly, to overcome this difficulty, we propose an ancillary mechanism (AM) that adds two principles to the agent's actions. AM is a form of guidance that reduces the agents' learning difficulty and improves their behavioural quality. To the best of our knowledge, this is the first study to define asymmetric confrontation in reinforcement learning and to propose approaches for tackling such problems. In the numerical tests, we first conduct a simple human-vs-AI experiment to calibrate the learning process in asymmetric confrontation. Then, an experiment on the 15×15 Gomoku game, in which two agents (with AM and without AM) compete, is used to check the potential of AM. Results show that adding AM makes both the first and the second movers stronger with almost the same amount of computation.
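The abstract does not spell out the two principles that AM adds to the agents' actions, but the general idea of layering rule-based guidance on top of a learning agent's move selection can be illustrated with a minimal sketch. The Python code below is an assumption for illustration only: the win/block rules, function names, and board encoding are hypothetical and are not taken from the paper. It filters a Gomoku agent's candidate moves so the policy samples only from moves the guidance considers reasonable.

```python
import numpy as np

BOARD_SIZE = 15  # 15x15 board, matching the paper's experiment


def legal_moves(board):
    """Return all empty cells as (row, col) candidate moves."""
    return [(int(r), int(c)) for r, c in np.argwhere(board == 0)]


def makes_five(board, move, player):
    """Check whether placing `player` at `move` completes five (or more) in a row."""
    r, c = move
    trial = board.copy()
    trial[r, c] = player
    for dr, dc in ((1, 0), (0, 1), (1, 1), (1, -1)):
        count = 1
        for sign in (1, -1):
            rr, cc = r + sign * dr, c + sign * dc
            while 0 <= rr < BOARD_SIZE and 0 <= cc < BOARD_SIZE and trial[rr, cc] == player:
                count += 1
                rr += sign * dr
                cc += sign * dc
        if count >= 5:
            return True
    return False


def ancillary_filter(board, player, candidates):
    """Hypothetical AM-style guidance: keep only immediately winning moves if any
    exist, otherwise only moves that block the opponent's immediate win; fall back
    to the full candidate set when neither rule applies."""
    wins = [m for m in candidates if makes_five(board, m, player)]
    if wins:
        return wins
    blocks = [m for m in candidates if makes_five(board, m, -player)]
    if blocks:
        return blocks
    return candidates


# Usage: the learning agent's policy samples only from the guided move set.
board = np.zeros((BOARD_SIZE, BOARD_SIZE), dtype=int)
board[7, 3:7] = 1        # first mover (1) has four in a row on row 7
player = -1              # second mover must respond
guided = ancillary_filter(board, player, legal_moves(board))
print(guided)            # -> the two blocking cells, (7, 2) and (7, 7)
```

In this sketch the guidance narrows the second mover's exploration to moves that avoid an immediate loss, which is one plausible way such a mechanism could keep the weaker side from receiving only negative rewards during early training.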
About the journal:
Journal of Experimental & Theoretical Artificial Intelligence (JETAI) is a world-leading journal dedicated to publishing high-quality, rigorously reviewed, original papers in artificial intelligence (AI) research.
The journal features work in all subfields of AI research and accepts both theoretical and applied research. Topics covered include, but are not limited to, the following:
• cognitive science
• games
• learning
• knowledge representation
• memory and neural system modelling
• perception
• problem-solving