Ancillary mechanism for autonomous decision-making process in asymmetric confrontation: a view from Gomoku

IF 1.7 · CAS Tier 4, Computer Science · JCR Q3, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Journal of Experimental & Theoretical Artificial Intelligence · Pub Date: 2022-05-02 · DOI: 10.1080/0952813X.2022.2067249
Chen Han, Xuanyin Wang
{"title":"非对称对抗中自主决策过程的辅助机制:来自Gomoku的观点","authors":"Chen Han, Xuanyin Wang","doi":"10.1080/0952813X.2022.2067249","DOIUrl":null,"url":null,"abstract":"ABSTRACT This paper investigates how agents learn and perform efficient strategies by trying different actions in asymmetric confrontation setting. Firstly, we use Gomoku as an example to analyse the causes and impacts of asymmetric confrontation: the first mover gains higher power than the second mover. We find that the first mover learns how to attack quickly while it is difficult for the second mover to learn how to defend since it cannot win the first mover and always receives negative rewards. As such, the game is stuck at a deadlock in which the first mover cannot make further advances to learn how to defend, and the second mover learns nothing. Secondly, we propose an ancillary mechanism (AM) to add two principles to the agent’s actions to overcome this difficulty. AM is a guidance for the agents to reduce the learning difficulty and to improve their behavioural quality. To the best of our knowledge, this is the first study to define asymmetric confrontation in reinforcement learning and propose approaches to tackle such problems. In the numerical tests, we first conduct a simple human vs AI experiment to calibrate the learning process in asymmetric confrontation. Then, an experiment of 15*15 Gomoku game by letting two agents (with AM and without AM) compete is applied to check the potential of AM. Results show that adding AM can make both the first and the second movers become stronger in almost the same amount of calculation.","PeriodicalId":15677,"journal":{"name":"Journal of Experimental & Theoretical Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":1.7000,"publicationDate":"2022-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Ancillary mechanism for autonomous decision-making process in asymmetric confrontation: a view from Gomoku\",\"authors\":\"Chen Han, Xuanyin Wang\",\"doi\":\"10.1080/0952813X.2022.2067249\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT This paper investigates how agents learn and perform efficient strategies by trying different actions in asymmetric confrontation setting. Firstly, we use Gomoku as an example to analyse the causes and impacts of asymmetric confrontation: the first mover gains higher power than the second mover. We find that the first mover learns how to attack quickly while it is difficult for the second mover to learn how to defend since it cannot win the first mover and always receives negative rewards. As such, the game is stuck at a deadlock in which the first mover cannot make further advances to learn how to defend, and the second mover learns nothing. Secondly, we propose an ancillary mechanism (AM) to add two principles to the agent’s actions to overcome this difficulty. AM is a guidance for the agents to reduce the learning difficulty and to improve their behavioural quality. To the best of our knowledge, this is the first study to define asymmetric confrontation in reinforcement learning and propose approaches to tackle such problems. In the numerical tests, we first conduct a simple human vs AI experiment to calibrate the learning process in asymmetric confrontation. Then, an experiment of 15*15 Gomoku game by letting two agents (with AM and without AM) compete is applied to check the potential of AM. 
Results show that adding AM can make both the first and the second movers become stronger in almost the same amount of calculation.\",\"PeriodicalId\":15677,\"journal\":{\"name\":\"Journal of Experimental & Theoretical Artificial Intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2022-05-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Experimental & Theoretical Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1080/0952813X.2022.2067249\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Experimental & Theoretical Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1080/0952813X.2022.2067249","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

This paper investigates how agents learn and perform efficient strategies by trying different actions in an asymmetric confrontation setting. First, we use Gomoku as an example to analyse the causes and impacts of asymmetric confrontation: the first mover gains greater power than the second mover. We find that the first mover quickly learns how to attack, while the second mover struggles to learn how to defend, since it cannot beat the first mover and always receives negative rewards. As such, the game is stuck in a deadlock: the first mover cannot make further advances to learn how to defend, and the second mover learns nothing. Second, we propose an ancillary mechanism (AM) that adds two principles to the agent's actions to overcome this difficulty. AM guides the agents, reducing the learning difficulty and improving their behavioural quality. To the best of our knowledge, this is the first study to define asymmetric confrontation in reinforcement learning and to propose approaches to tackle such problems. In the numerical tests, we first conduct a simple human-vs-AI experiment to calibrate the learning process in asymmetric confrontation. Then, an experiment on the 15×15 Gomoku game, in which two agents (with and without AM) compete, is used to examine the potential of AM. Results show that adding AM makes both the first and the second mover stronger with almost the same amount of computation.
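
The abstract says the ancillary mechanism adds two principles to the agent's actions but does not state what they are. As a purely illustrative sketch, the Python below assumes the two hard rules common in Gomoku guidance schemes: take an immediate win if one exists, otherwise block the opponent's immediate win. All names here (ancillary_filter, winning_moves, makes_five) are hypothetical, not taken from the paper, and the authors' actual mechanism may differ.

import numpy as np

BOARD = 15  # board size used in the paper's experiment

def makes_five(board, r, c, player):
    # Check whether the stone just placed at (r, c) completes five in a row.
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        count = 1
        for sign in (1, -1):
            rr, cc = r + sign * dr, c + sign * dc
            while 0 <= rr < BOARD and 0 <= cc < BOARD and board[rr, cc] == player:
                count += 1
                rr += sign * dr
                cc += sign * dc
        if count >= 5:
            return True
    return False

def winning_moves(board, player):
    # Every empty cell where `player` would complete five in a row this turn.
    moves = []
    for r in range(BOARD):
        for c in range(BOARD):
            if board[r, c] == 0:
                board[r, c] = player           # try the move ...
                if makes_five(board, r, c, player):
                    moves.append((r, c))
                board[r, c] = 0                # ... and undo it
    return moves

def ancillary_filter(board, player, candidates):
    # Assumed principle 1: if an immediate win exists, search only those moves.
    wins = winning_moves(board, player)
    if wins:
        return wins
    # Assumed principle 2: if the opponent wins next turn, search only the blocks.
    blocks = winning_moves(board, -player)
    if blocks:
        return blocks
    # Otherwise the learned policy explores freely; the second mover now sees
    # survivable trajectories instead of a constant stream of negative rewards.
    return candidates

if __name__ == "__main__":
    board = np.zeros((BOARD, BOARD), dtype=int)  # 0 empty, +1 first mover, -1 second mover
    board[7, 3:7] = -1                           # second mover threatens five in a row
    print(ancillary_filter(board, +1, candidates=[(0, 0)]))
    # -> [(7, 2), (7, 7)]: the filter forces the first mover to block

In a self-play loop, the policy's candidate moves would pass through ancillary_filter before sampling, which is one plausible reading of "adding two principles to the agent's actions" to break the deadlock the abstract describes.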
Source journal
CiteScore: 6.10
Self-citation rate: 4.50%
Articles per year: 89
Review time: >12 weeks
About the journal: Journal of Experimental & Theoretical Artificial Intelligence (JETAI) is a world-leading journal dedicated to publishing high-quality, rigorously reviewed, original papers in artificial intelligence (AI) research. The journal features work in all subfields of AI research and accepts both theoretical and applied research. Topics covered include, but are not limited to, the following:
• cognitive science
• games
• learning
• knowledge representation
• memory and neural system modelling
• perception
• problem-solving
Latest articles in this journal
• Occlusive target recognition method of sorting robot based on anchor-free detection network
• An effectual underwater image enhancement framework using adaptive trans-resunet ++ with attention mechanism
• An experimental study of sentiment classification using deep-based models with various word embedding techniques
• Sign language video to text conversion via optimised LSTM with improved motion estimation
• An efficient safest route prediction-based route discovery mechanism for drivers using improved golden tortoise beetle optimizer