Evolving multimodal networks for multitask games

Jacob Schrum, R. Miikkulainen
{"title":"多任务博弈的进化多模态网络","authors":"Jacob Schrum, R. Miikkulainen","doi":"10.1109/CIG.2011.6031995","DOIUrl":null,"url":null,"abstract":"Intelligent opponent behavior helps make video games interesting to human players. Evolutionary computation can discover such behavior, especially when the game consists of a single task. However, multitask domains, in which separate tasks within the domain each have their own dynamics and objectives, can be challenging for evolution. This paper proposes two methods for meeting this challenge by evolving neural networks: 1) Multitask Learning provides a network with distinct outputs per task, thus evolving a separate policy for each task, and 2) Mode Mutation provides a means to evolve new output modes, as well as a way to select which mode to use at each moment. Multitask Learning assumes agents know which task they are currently facing; if such information is available and accurate, this approach works very well, as demonstrated in the Front/Back Ramming game of this paper. In contrast, Mode Mutation discovers an appropriate task division on its own, which may in some cases be even more powerful than a human-specified task division, as shown in the Predator/Prey game of this paper. These results demonstrate the importance of both Multitask Learning and Mode Mutation for learning intelligent behavior in complex games.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"8 1","pages":"102-109"},"PeriodicalIF":0.0000,"publicationDate":"2011-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"32","resultStr":"{\"title\":\"Evolving multimodal networks for multitask games\",\"authors\":\"Jacob Schrum, R. Miikkulainen\",\"doi\":\"10.1109/CIG.2011.6031995\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Intelligent opponent behavior helps make video games interesting to human players. Evolutionary computation can discover such behavior, especially when the game consists of a single task. However, multitask domains, in which separate tasks within the domain each have their own dynamics and objectives, can be challenging for evolution. This paper proposes two methods for meeting this challenge by evolving neural networks: 1) Multitask Learning provides a network with distinct outputs per task, thus evolving a separate policy for each task, and 2) Mode Mutation provides a means to evolve new output modes, as well as a way to select which mode to use at each moment. Multitask Learning assumes agents know which task they are currently facing; if such information is available and accurate, this approach works very well, as demonstrated in the Front/Back Ramming game of this paper. In contrast, Mode Mutation discovers an appropriate task division on its own, which may in some cases be even more powerful than a human-specified task division, as shown in the Predator/Prey game of this paper. 
These results demonstrate the importance of both Multitask Learning and Mode Mutation for learning intelligent behavior in complex games.\",\"PeriodicalId\":6594,\"journal\":{\"name\":\"2016 IEEE Conference on Computational Intelligence and Games (CIG)\",\"volume\":\"8 1\",\"pages\":\"102-109\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-09-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"32\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE Conference on Computational Intelligence and Games (CIG)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CIG.2011.6031995\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIG.2011.6031995","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 32

Abstract

Intelligent opponent behavior helps make video games interesting to human players. Evolutionary computation can discover such behavior, especially when the game consists of a single task. However, multitask domains, in which separate tasks within the domain each have their own dynamics and objectives, can be challenging for evolution. This paper proposes two methods for meeting this challenge by evolving neural networks: 1) Multitask Learning provides a network with distinct outputs per task, thus evolving a separate policy for each task, and 2) Mode Mutation provides a means to evolve new output modes, as well as a way to select which mode to use at each moment. Multitask Learning assumes agents know which task they are currently facing; if such information is available and accurate, this approach works very well, as demonstrated in the Front/Back Ramming game of this paper. In contrast, Mode Mutation discovers an appropriate task division on its own, which may in some cases be even more powerful than a human-specified task division, as shown in the Predator/Prey game of this paper. These results demonstrate the importance of both Multitask Learning and Mode Mutation for learning intelligent behavior in complex games.
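The abstract only sketches the two mechanisms, so the following is a minimal illustrative sketch rather than the authors' implementation. It assumes single-layer output modules with fixed random weights, tanh activations, and a preference-output arbitration scheme for Mode Mutation; in the paper the networks and their structure are evolved, and these class and parameter names are invented here for illustration.

```python
# A minimal sketch of a multimodal control network, illustrating the two ideas
# in the abstract. Assumptions (not taken from the paper): each output module
# ("mode") is a single linear layer with random weights, and Mode Mutation
# arbitration uses one extra "preference" output per module, with the highest
# preference winning control at each timestep.

import numpy as np


class MultimodalNetwork:
    """A policy network with several output modules ("modes")."""

    def __init__(self, num_inputs, num_actions, num_modes, rng=None):
        self.rng = rng if rng is not None else np.random.default_rng(0)
        self.num_inputs = num_inputs
        self.num_actions = num_actions
        # Each mode has its own weight matrix: num_actions outputs plus
        # one preference output used only for arbitration.
        self.modes = [
            self.rng.normal(size=(num_actions + 1, num_inputs))
            for _ in range(num_modes)
        ]

    def add_mode(self):
        """Mode Mutation: structurally add a freshly initialised module."""
        self.modes.append(
            self.rng.normal(size=(self.num_actions + 1, self.num_inputs))
        )

    def _mode_output(self, mode_weights, obs):
        raw = np.tanh(mode_weights @ obs)
        actions, preference = raw[:-1], raw[-1]
        return actions, preference

    def act_multitask(self, obs, task_id):
        """Multitask Learning: the known task index selects the module."""
        actions, _ = self._mode_output(self.modes[task_id], obs)
        return actions

    def act_mode_arbitration(self, obs):
        """Mode Mutation setting: no task label is given; the module whose
        preference output is highest controls the agent this timestep."""
        outputs = [self._mode_output(w, obs) for w in self.modes]
        best = max(range(len(outputs)), key=lambda i: outputs[i][1])
        return outputs[best][0]


if __name__ == "__main__":
    net = MultimodalNetwork(num_inputs=4, num_actions=2, num_modes=2)
    obs = np.array([0.5, -0.3, 0.8, 0.1])
    print("Multitask (task 0):", net.act_multitask(obs, task_id=0))
    net.add_mode()  # a mutation adds a third candidate behavior
    print("Arbitrated action: ", net.act_mode_arbitration(obs))
```

The contrast in the abstract maps directly onto the two `act_*` methods: `act_multitask` needs an accurate task label, while `act_mode_arbitration` lets the network discover its own task division through the evolved preference values.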