
Latest publications: 2016 IEEE Conference on Computational Intelligence and Games (CIG)

Intrinsically motivated reinforcement learning: A promising framework for procedural content generation
Pub Date: 2016-09-01 DOI: 10.1109/CIG.2016.7860450
Noor Shaker
So far, Evolutionary Algorithms (EA) have been the dominant paradigm for Procedural Content Generation (PCG). While we believe the field has achieved remarkable success, we claim that there is a wide window for improvement. The field of machine learning has an abundance of methods that promise solutions to some aspects of PCG that are still under-researched. In this paper, we advocate the use of intrinsically motivated reinforcement learning for content generation: a class of methods that strive for knowledge for its own sake rather than as a step towards finding a solution. We argue that this approach promises solutions to some of the well-known problems in PCG: (1) searching for novelty and diversity can be easily incorporated as an intrinsic reward, (2) improving models of player experience and generating adapted content can be done simultaneously by combining extrinsic and intrinsic rewards, and (3) mixed-initiative design tools can incorporate more knowledge about the designer and her preferences and ultimately provide better assistance. We demonstrate our arguments and discuss the challenges that face the proposed approach.
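To make the reward coupling in points (1) and (2) concrete, here is a minimal Python sketch that sums an extrinsic quality score with a count-based intrinsic novelty bonus. The toy level encoding, the evaluator names, and the search loop are illustrative assumptions, not the paper's method.

```python
import random
from collections import defaultdict

visit_counts = defaultdict(int)

def extrinsic_reward(level):
    # Stand-in for a playability/quality evaluator of generated content.
    return -abs(sum(level) - 10)

def intrinsic_reward(level):
    # Novelty bonus: content seen less often earns more reward,
    # pushing the generator towards diversity (point (1)).
    visit_counts[tuple(level)] += 1
    return 1.0 / visit_counts[tuple(level)] ** 0.5

def combined_reward(level, beta=0.5):
    # Point (2): extrinsic and intrinsic terms are optimised together.
    return extrinsic_reward(level) + beta * intrinsic_reward(level)

# Toy hill-climb over 5-tile levels driven by the combined reward.
level = [random.randint(0, 5) for _ in range(5)]
for _ in range(200):
    candidate = level[:]
    candidate[random.randrange(5)] = random.randint(0, 5)
    if combined_reward(candidate) >= combined_reward(level):
        level = candidate
print(level)
```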
Citations: 9
Using opponent models to train inexperienced synthetic agents in social environments
Pub Date: 2016-09-01 DOI: 10.1109/CIG.2016.7860409
C. Kiourt, Dimitris Kalles
This paper investigates the learning progress of inexperienced agents in competitive game-playing social environments. We aim to determine the effect of a knowledgeable opponent on a novice learner. For that purpose, we used as opponents synthetic agents whose playing behaviors were developed through diverse reinforcement learning set-ups, such as the exploitation-versus-exploration trade-off, learning backup, and speed of learning, as well as a self-trained agent. The paper concludes by highlighting the effect of diverse knowledgeable synthetic agents on the learning trajectory of an inexperienced agent in competitive multiagent environments.
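A minimal sketch of how such differently parameterised opponents might be built, assuming tabular Q(lambda)-learning: the three knobs named in the abstract map to epsilon (exploration), alpha (speed of learning) and lambda (learning backup via eligibility traces). Class and parameter names are hypothetical simplifications.

```python
import random

class QAgent:
    """Tabular Q-learning agent whose playing character is set by
    epsilon (exploration), alpha (learning speed) and lambda_ (backup)."""
    def __init__(self, n_states, n_actions, epsilon, alpha, lambda_, gamma=0.95):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.e = [[0.0] * n_actions for _ in range(n_states)]  # eligibility traces
        self.epsilon, self.alpha = epsilon, alpha
        self.lambda_, self.gamma = lambda_, gamma
        self.n_actions = n_actions

    def act(self, s):
        if random.random() < self.epsilon:          # exploration
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: self.q[s][a])

    def learn(self, s, a, r, s2):
        delta = r + self.gamma * max(self.q[s2]) - self.q[s][a]
        self.e[s][a] += 1.0
        for si in range(len(self.q)):               # backup through traces
            for ai in range(self.n_actions):
                self.q[si][ai] += self.alpha * delta * self.e[si][ai]
                self.e[si][ai] *= self.gamma * self.lambda_

# Differently parameterised opponents: an explorer vs. a fast greedy learner.
explorer = QAgent(9, 4, epsilon=0.4, alpha=0.1, lambda_=0.9)
greedy   = QAgent(9, 4, epsilon=0.05, alpha=0.5, lambda_=0.0)
```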
Citations: 4
Investigating vanilla MCTS scaling on the GVG-AI game corpus
Pub Date: 2016-09-01 DOI: 10.1109/CIG.2016.7860443
M. Nelson
The General Video Game AI Competition (GVG-AI) invites submissions of controllers to play games specified in the Video Game Description Language (VGDL), testing them against each other and several baselines. One of the baselines that has done surprisingly well in some of the competitions is sampleMCTS, a straightforward implementation of Monte Carlo tree search (MCTS). Although it has done worse in other iterations of the competition, this has left us with a nagging worry that the GVG-AI competition might be too easy, especially since performance profiling suggests that optimizations to the GVG-AI competition framework will make it possible to complete significantly more MCTS iterations within a given time limit. To better understand the potential performance of the baseline vanilla MCTS controller, I perform scaling experiments, running it against the 62 games in the public GVG-AI corpus while varying the time budget from about 1/30 of the current competition's budget up to around 30x that budget. I find that it does not in fact master the games even given 30x the current time budget, so the challenge of the GVG-AI competition is safe (at least against this baseline). However, I do find that given enough computational budget, it manages to avoid explicitly losing most games, despite failing to win them and ultimately losing as time expires, suggesting an asymmetry in the current GVG-AI competition's challenge: not losing is significantly easier than winning.
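For reference, here is a minimal vanilla MCTS (UCB1 selection, uniform random rollouts) with a wall-clock budget parameter, so the budget-scaling experiment can be reproduced in miniature. The toy game (players alternately add 1 or 2; whoever reaches exactly 10 wins) is an illustrative stand-in for a VGDL game, not part of the paper.

```python
import math, random, time

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.wins = {}, 0, 0.0

def moves(state):
    total, _player = state
    return [m for m in (1, 2) if total + m <= 10]

def step(state, m):
    total, player = state
    return (total + m, 1 - player)

def rollout(state):
    while moves(state):
        state = step(state, random.choice(moves(state)))
    return 1 - state[1]   # the player who moved last reached 10 and wins

def mcts(root_state, budget_seconds):
    root = Node(root_state)
    deadline = time.time() + budget_seconds
    while time.time() < deadline:
        node = root
        # Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(moves(node.state)):
            node = max(node.children.values(),
                       key=lambda c: c.wins / c.visits
                       + math.sqrt(2 * math.log(node.visits) / c.visits))
        # Expansion: add one untried child, if any remain.
        untried = [m for m in moves(node.state) if m not in node.children]
        if untried:
            m = random.choice(untried)
            node.children[m] = Node(step(node.state, m), node)
            node = node.children[m]
        # Simulation and backpropagation.
        winner = rollout(node.state)
        while node:
            node.visits += 1
            if winner != node.state[1]:  # credit the player who moved into node
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)

print(mcts((0, 0), budget_seconds=0.05))  # vary the budget to scale MCTS
```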
Citations: 27
Modeling believable game characters
Pub Date: 2016-09-01 DOI: 10.1109/CIG.2016.7860412
Hanneke Kersjes, P. Spronck
The behavior of virtual characters in computer games is usually determined solely by decision trees or finite state machines, which is detrimental to the characters' believability. It has been argued that enhancing virtual characters with emotions, personalities, and moods may make their behavior more diverse and thus more believable. Most research in this direction is based on existing (socio-)psychological literature, but not tested in a suitable experimental setting where humans interact with the virtual characters. In our research, we use a simplified version of the personality model of Ochs et al. [1], which we test in a game in which human participants interact with three agents with different personalities: an extraverted agent, a neurotic agent, and a neutral agent. The model only influences the agents' emotions, which are exhibited only through their facial expressions. The participants were asked to assess the agents' personalities based on six possible traits. We found that the participants considered the neurotic agent the most neurotic, and there are also indications that the extraverted agent was considered the most extraverted. We conclude that players will indeed distinguish personality differences between agents based on their facial expressions of emotion. Therefore, using a personality model may make it easy for game developers to quickly create a wide variety of virtual characters that exhibit individual behaviors, making them more believable.
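A toy sketch in the spirit of such a simplified trait-based model: traits scale how strongly emotions are triggered and how long they persist, and the dominant emotion drives the facial expression. The two traits, the numeric effects, and the expression mapping are invented for illustration and are not the actual model of Ochs et al.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    extraversion: float   # amplifies positive emotions
    neuroticism: float    # amplifies negative emotions, slows their decay
    joy: float = 0.0
    distress: float = 0.0

    def appraise(self, event_valence):
        # Positive events raise joy, negative events raise distress,
        # each scaled by the relevant trait.
        if event_valence > 0:
            self.joy += event_valence * (0.5 + self.extraversion)
        else:
            self.distress += -event_valence * (0.5 + self.neuroticism)

    def decay(self):
        self.joy *= 0.8
        self.distress *= 0.8 + 0.15 * self.neuroticism  # neurotics ruminate

    def facial_expression(self):
        if max(self.joy, self.distress) < 0.1:
            return "neutral"
        return "smile" if self.joy >= self.distress else "frown"

extravert = Agent(extraversion=0.9, neuroticism=0.1)
neurotic  = Agent(extraversion=0.1, neuroticism=0.9)
for agent in (extravert, neurotic):
    agent.appraise(-1.0)  # the same setback hits the neurotic agent harder
    print(agent.facial_expression(), round(agent.distress, 2))
```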
Citations: 1
Heterogeneous team deep q-learning in low-dimensional multi-agent environments
Pub Date: 2016-09-01 DOI: 10.1109/CIG.2016.7860413
Mateusz Kurek, Wojciech Jaśkowski
Deep Q-Learning is an effective reinforcement learning method which has recently obtained human-level performance on a set of Atari 2600 games. Remarkably, the system was trained on high-dimensional raw visual data. Is Deep Q-Learning equally valid for problems involving a low-dimensional state space? To answer this question, we evaluate the components of Deep Q-Learning (deep architecture, experience replay, target network freezing, and meta-state) on a Keepaway soccer problem, where the state is described by only 13 variables. The results indicate that although experience replay indeed improves agent performance, target network freezing and meta-state slow down the learning process. Moreover, the deep architecture does not help on this task, since a rather shallow network with just two hidden layers worked best. By selecting the best settings and employing heterogeneous team learning, we were able to outperform all previous methods applied to Keepaway soccer using a fraction of the runner-up's computational expense. These results extend our understanding of the effectiveness of Deep Q-Learning for low-dimensional reinforcement learning tasks.
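A sketch of the kind of component ablation the abstract describes: each ingredient is a switch, so variants (with/without experience replay, target network freezing, meta-state) can be compared on a 13-variable state. A linear Q-function stands in for the paper's shallow two-hidden-layer network; all names here are illustrative assumptions.

```python
import random
from collections import deque
from dataclasses import dataclass

@dataclass
class Config:
    experience_replay: bool = True   # found helpful in the paper
    target_freezing: bool = False    # found to slow learning
    meta_state: bool = False         # likewise; left as an unused toggle here
    replay_capacity: int = 10_000
    batch_size: int = 32

class LinearQ:
    def __init__(self, n_features, n_actions, lr=0.01, gamma=0.99):
        self.w = [[0.0] * n_features for _ in range(n_actions)]
        self.lr, self.gamma = lr, gamma

    def q(self, s):
        return [sum(wi * si for wi, si in zip(row, s)) for row in self.w]

    def update(self, s, a, r, s2, target_net):
        # Bootstrap from the (possibly frozen) target network.
        target = r + self.gamma * max(target_net.q(s2))
        delta = target - self.q(s)[a]
        self.w[a] = [wi + self.lr * delta * si for wi, si in zip(self.w[a], s)]

def train_step(cfg, online, frozen, memory, transition):
    memory.append(transition)
    batch = (random.sample(memory, min(cfg.batch_size, len(memory)))
             if cfg.experience_replay else [transition])
    for s, a, r, s2 in batch:
        online.update(s, a, r, s2, frozen if cfg.target_freezing else online)

cfg = Config()
online = LinearQ(n_features=13, n_actions=3)  # 13 Keepaway state variables
memory = deque(maxlen=cfg.replay_capacity)
train_step(cfg, online, online, memory, ([0.1] * 13, 0, 1.0, [0.2] * 13))
```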
Citations: 20
Semi-automated level design via auto-playtesting for handheld casual game creation
Pub Date: 2016-09-01 DOI: 10.1109/CIG.2016.7860438
E. Powley, S. Colton, Swen E. Gaudl, Rob Saunders, M. Nelson
We provide a proof of principle that novel and engaging mobile casual games with new aesthetics, game mechanics and player interactions can be designed and tested directly on the device for which they are intended. We describe the Gamika iOS application, which includes generative art assets; a design interface enabling the making of physics-based casual games containing multiple levels, with aspects ranging from Frogger-like to Asteroids-like and beyond; a configurable automated playtester which can give feedback on the playability of levels; and an automated fine-tuning engine which searches for level parameterisations that enable the game to pass a battery of tests, as evaluated by the auto-playtester. Each aspect of the implementation represents a baseline with much room for improvement, and we present some experimental results and describe how these will guide future directions for Gamika.
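A minimal sketch of such a fine-tuning loop: hill-climb over a level parameter vector, scoring candidates by how many of the auto-playtester's tests they pass. The parameter names, synthetic metrics, and test thresholds are invented for illustration; Gamika's actual tests and parameters differ.

```python
import random

def auto_playtest(params):
    # Stand-in for running the automated playtester on a level;
    # here the metrics are synthetic functions of the parameters.
    gravity, spawn_rate, friction = params
    return {
        "completable": gravity < 9.0 and friction > 0.1,
        "avg_duration": 30 * spawn_rate / max(gravity, 0.1),
        "deaths": max(0.0, gravity - 5.0) * spawn_rate,
    }

def tests_passed(metrics):
    # The "battery of tests": each check must hold for the level to pass.
    return sum([metrics["completable"],
                10 <= metrics["avg_duration"] <= 60,
                metrics["deaths"] <= 3])

def fine_tune(params, iterations=1000):
    best = list(params)
    for _ in range(iterations):
        if tests_passed(auto_playtest(best)) == 3:
            return best                      # all tests pass
        cand = list(best)
        i = random.randrange(len(cand))
        cand[i] *= random.uniform(0.8, 1.25)  # mutate one parameter
        if tests_passed(auto_playtest(cand)) >= tests_passed(auto_playtest(best)):
            best = cand
    return None  # no passing parameterisation found within budget

print(fine_tune([9.8, 1.0, 0.3]))
```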
Citations: 12
MCTS/EA hybrid GVGAI players and game difficulty estimation
Pub Date: 2016-09-01 DOI: 10.1109/CIG.2016.7860384
Hendrik Horn, Vanessa Volz, Diego Perez Liebana, M. Preuss
In the General Video Game Playing competitions of recent years, both Monte-Carlo tree search and Evolutionary Algorithm based controllers have been successful. However, both approaches have certain weaknesses, suggesting that certain hybrids could outperform both. We envision and experimentally compare several types of hybrids of the two basic approaches, as well as some possible extensions. In order to achieve a better understanding of the games in the competition and the strengths and weaknesses of different controllers, we also propose and apply a novel game difficulty estimation scheme based on several observable game characteristics.
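One plausible shape for a difficulty estimate built from observable game characteristics is a weighted feature score, as in the sketch below. The feature set, weights, and input fields are illustrative assumptions, not the paper's actual scheme.

```python
def estimate_difficulty(stats):
    # Observable characteristics are normalised into features...
    features = {
        "npc_density": stats["npcs"] / max(stats["map_tiles"], 1),
        "puzzle": 1.0 if stats["requires_key_items"] else 0.0,
        "time_pressure": 1.0 / max(stats["time_limit"], 1),
        # ...including how often a baseline controller wins the game.
        "baseline_win_rate": stats["sample_mcts_wins"] / stats["trials"],
    }
    weights = {"npc_density": 2.0, "puzzle": 1.0,
               "time_pressure": 100.0, "baseline_win_rate": -3.0}
    return sum(weights[k] * v for k, v in features.items())

print(estimate_difficulty({"npcs": 12, "map_tiles": 300,
                           "requires_key_items": True, "time_limit": 500,
                           "sample_mcts_wins": 4, "trials": 20}))
```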
Citations: 23
Artefacts: Minecraft meets collaborative interactive evolution
Pub Date: 2016-09-01 DOI: 10.1109/CIG.2016.7860434
Cristinel Patrascu, S. Risi
Procedural content generation has shown promise in a variety of different games. In this paper we introduce a new kind of game, called Artefacts, that combines a sandbox-like environment akin to Minecraft with the ability to interactively evolve unique three-dimensional building blocks. Artefacts not only allows players to collaborate by building larger structures from evolved objects, but also lets them continue the evolution of others' artefacts. Results from playtests on three different game iterations indicate that players generally enjoy playing the game and are able to discover a wide variety of different 3D objects. Moreover, while there is no explicit goal in Artefacts, the sandbox environment together with the ability to evolve unique shapes does allow some interesting gameplay to emerge.
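The interactive-evolution core can be sketched in a few lines: the player repeatedly picks a favourite artefact, and the system breeds the next generation from it. The genome (a flat list of block parameters) and mutation scheme are invented stand-ins for Artefacts' actual 3D shape representation.

```python
import random

def mutate(genome, rate=0.2):
    # Perturb each gene with probability `rate`.
    return [g + random.gauss(0, 0.3) if random.random() < rate else g
            for g in genome]

def next_generation(chosen, population_size=8):
    # Keep the chosen artefact and fill the rest with mutated offspring.
    # Seeding `chosen` with another player's genome is how evolution of
    # somebody else's artefact can continue.
    return [chosen] + [mutate(chosen) for _ in range(population_size - 1)]

population = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(8)]
for generation in range(3):
    chosen = random.choice(population)  # stand-in for the player's selection
    population = next_generation(chosen)
print(population[0])
```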
Citations: 9
Intrinsically motivated general companion NPCs via Coupled Empowerment Maximisation
Pub Date: 2016-09-01 DOI: 10.1109/CIG.2016.7860406
C. Guckelsberger, Christoph Salge, S. Colton
Non-player characters (NPCs) in games are traditionally hard-coded or dependent on pre-specified goals, and consequently struggle to behave sensibly in ever-changing and possibly unpredictable game worlds. To make them fit for new developments in procedural content generation, we introduce the principle of Coupled Empowerment Maximisation as an intrinsic motivation for game NPCs. We focus on the development of a general game companion, designed to support the player in achieving their goals. We evaluate our approach against three intuitive and abstract companion duties. We develop dedicated scenarios for each duty in a dungeon-crawler game testbed, and provide qualitative evidence that the emergent NPC behaviour fulfils these duties. We argue that this generic approach can speed up NPC AI development, improve automatic game evolution and introduce NPCs to full game-generation systems.
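A toy sketch of the idea, under strong simplifying assumptions: empowerment is approximated here as the number of grid cells an agent can reach within a fixed horizon (the paper uses a proper information-theoretic formulation), and the companion greedily picks the move maximising a weighted sum of the player's and its own empowerment. The grid, walls, horizon, and weights are all invented.

```python
from itertools import product

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]
WALLS = {(1, 1), (1, 2), (2, 1)}

def step(pos, move, blocked=frozenset()):
    nxt = (pos[0] + move[0], pos[1] + move[1])
    ok = (0 <= nxt[0] < 5 and 0 <= nxt[1] < 5
          and nxt not in WALLS and nxt not in blocked)
    return nxt if ok else pos

def empowerment(pos, blocked=frozenset(), horizon=2):
    # Count distinct end cells over all action sequences of length `horizon`.
    ends = set()
    for seq in product(MOVES, repeat=horizon):
        p = pos
        for move in seq:
            p = step(p, move, blocked)
        ends.add(p)
    return len(ends)

def companion_action(companion, player, w_player=1.0, w_self=0.5):
    # The companion's body blocks a cell, so where it stands changes what
    # the player can do: this couples the two empowerment terms.
    def coupled(move):
        c2 = step(companion, move, blocked=frozenset({player}))
        return (w_player * empowerment(player, blocked=frozenset({c2}))
                + w_self * empowerment(c2, blocked=frozenset({player})))
    return max(MOVES, key=coupled)

print(companion_action(companion=(2, 2), player=(0, 0)))
```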
Citations: 23
Evolving micro for 3D Real-Time Strategy games
Pub Date: 2016-09-01 DOI: 10.1109/CIG.2016.7860437
T. DeWitt, S. Louis, Siming Liu
This paper extends prior work on generating two-dimensional micro for Real-Time Strategy games to three dimensions. We extend our influence map and potential fields representation to three dimensions and compare two hill-climbers with a genetic algorithm on the problem of generating high-performance influence map, potential field, and reactive control parameters that control the behavior of units in an open-source Real-Time Strategy game. Results indicate that genetic algorithms evolve better behaviors for ranged units, which inflict damage on enemies while kiting to avoid damage. Additionally, genetic algorithms evolve better behaviors for melee units, which concentrate firepower on selected enemies to decrease the opposing army's effectiveness. Evolved behaviors, particularly for ranged units, generalize well to new scenarios. Our work thus provides evidence for the viability of an influence map and potential fields based representation for reactive control algorithms in games, 3D simulations, and aerial vehicle swarms.
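A minimal sketch of the kind of 3D potential-field controller such a GA would tune: each unit feels an attractive force toward its target and repulsive forces away from threats, with the coefficients and fall-off exponents forming the evolvable genome. Parameter names and ranges are illustrative assumptions, not the paper's representation.

```python
import math, random

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b))) or 1e-9

def potential_field_force(unit, target, threats, genome):
    ca, ea, cr, er = genome  # attraction/repulsion coefficients and exponents
    fx = fy = fz = 0.0
    sources = [(target, ca, ea, +1)] + [(t, cr, er, -1) for t in threats]
    for point, c, e, sign in sources:
        d = dist(unit, point)
        mag = sign * c / d ** e          # force magnitude falls off with distance
        fx += mag * (point[0] - unit[0]) / d
        fy += mag * (point[1] - unit[1]) / d
        fz += mag * (point[2] - unit[2]) / d
    return (fx, fy, fz)

# A genome is just these four numbers; a GA would evolve a population of them
# against a combat fitness such as damage dealt minus damage taken.
def random_genome():
    return [random.uniform(0, 10), random.uniform(0.5, 3),
            random.uniform(0, 10), random.uniform(0.5, 3)]

print(potential_field_force((0, 0, 0), (10, 0, 0),
                            [(2, 1, 0), (3, -1, 1)], random_genome()))
```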
Citations: 4