Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860450
Noor Shaker
So far, Evolutionary Algorithms (EAs) have been the dominant paradigm for Procedural Content Generation (PCG). While we believe the field has achieved remarkable success, we claim that there is still ample room for improvement. The field of machine learning has an abundance of methods that promise solutions to some aspects of PCG that are still under-researched. In this paper, we advocate the use of intrinsically motivated reinforcement learning for content generation: a class of methods that seek knowledge for its own sake rather than as a step towards finding a solution. We argue that this approach promises solutions to some of the well-known problems in PCG: (1) searching for novelty and diversity can be easily incorporated as an intrinsic reward, (2) models of player experience can be improved and adapted content generated simultaneously by combining extrinsic and intrinsic rewards, and (3) mixed-initiative design tools can incorporate more knowledge about the designer and her preferences and ultimately provide better assistance. We illustrate our arguments and discuss the challenges facing the proposed approach.
{"title":"Intrinsically motivated reinforcement learning: A promising framework for procedural content generation","authors":"Noor Shaker","doi":"10.1109/CIG.2016.7860450","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860450","url":null,"abstract":"So far, Evolutionary Algorithms (EA) have been the dominant paradigm for Procedural Content Generation (PCG). While we believe the field has achieved a remarkable success, we claim that there is a wide window for improvement. The field of machine learning has an abundance of methods that promise solutions to some aspects of PCG that are still under-researched. In this paper, we advocate the use of Intrinsically motivated reinforcement learning for content generation. A class of methods that thrive for knowledge for its own sake rather than as a step towards finding a solution. We argue that this approach promises solutions to some of the well-known problems in PCG: (1) searching for novelty and diversity can be easily incorporated as an intrinsic reward, (2) improving models of player experience and generation of adapted content can be done simultaneously through combining extrinsic and intrinsic rewards, and (3) mix-initiative design tools can incorporate more knowledge about the designer and her preferences and ultimately provide better assistance. We demonstrate our arguments and discuss the challenges that face the proposed approach.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"128 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88700978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860409
C. Kiourt, Dimitris Kalles
This paper investigates the learning progress of inexperienced agents in competitive game-playing social environments. We aim to determine the effect of a knowledgeable opponent on a novice learner. For that purpose, we used as opponents synthetic agents whose playing behaviors were developed through diverse reinforcement learning set-ups (varying the exploration-exploitation trade-off, the learning backup, and the speed of learning), as well as a self-trained agent. The paper concludes by highlighting the effect of diverse knowledgeable synthetic agents on the learning trajectory of an inexperienced agent in competitive multiagent environments.
{"title":"Using opponent models to train inexperienced synthetic agents in social environments","authors":"C. Kiourt, Dimitris Kalles","doi":"10.1109/CIG.2016.7860409","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860409","url":null,"abstract":"This paper investigates the learning progress of inexperienced agents in competitive game playing social environments. We aim to determine the effect of a knowledgeable opponent on a novice learner. For that purpose, we used synthetic agents whose playing behaviors were developed through diverse reinforcement learning set-ups, such as exploitation-vs-exploration trade-off, learning backup and speed of learning, as opponents, and a self-trained agent. The paper concludes by highlighting the effect of diverse knowledgeable synthetic agents in the learning trajectory of an inexperienced agent in competitive multiagent environments.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"73 1","pages":"1-4"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86409522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860443
M. Nelson
The General Video Game AI Competition (GVG-AI) invites submissions of controllers to play games specified in the Video Game Description Language (VGDL), testing them against each other and several baselines. One of the baselines that has done surprisingly well in some of the competitions is sampleMCTS, a straightforward implementation of Monte Carlo tree search (MCTS). Although it has done worse in other iterations of the competition, this has raised a nagging worry that the GVG-AI competition might be too easy, especially since performance profiling suggests that optimizations to the GVG-AI competition framework could significantly increase the number of MCTS iterations that can be completed within a given time limit. To better understand the potential performance of the baseline vanilla MCTS controller, I perform scaling experiments, running it against the 62 games in the public GVG-AI corpus as the time budget is varied from about 1/30 of the current competition's budget to around 30 times that budget. I find that it does not in fact master the games even given 30 times the current time budget, so the challenge of the GVG-AI competition is safe (at least against this baseline). However, I do find that, given a large enough computational budget, it manages to avoid explicitly losing on most games, despite failing to win them and ultimately losing as time expires, suggesting an asymmetry in the current GVG-AI competition's challenge: not losing is significantly easier than winning.
{"title":"Investigating vanilla MCTS scaling on the GVG-AI game corpus","authors":"M. Nelson","doi":"10.1109/CIG.2016.7860443","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860443","url":null,"abstract":"The General Video Game AI Competition (GVG-AI) invites submissions of controllers to play games specified in the Video Game Description Language (VGDL), testing them against each other and several baselines. One of the baselines that has done surprisingly well in some of the competitions is sampleMCTS, a straightforward implementation of Monte Carlo tree search (MCTS). Although it has done worse in other iterations of the competition, this has produced a nagging worry to us that perhaps the GVG-AI competition might be too easy, especially since performance profiling suggests that significant increases in number of MCTS iterations that can be completed in a given time limit will be possible through optimizations to the GVG-AI competition framework. To better understand the potential performance of the baseline vanilla MCTS controller, I perform scaling experiments, running it against the 62 games in the public GVG-AI corpus as the time budget is varied from about 1/30 of that in the current competition, through around 30x the current competition's budget. I find that it does not in fact master the games even given 30x the current time budget, so the challenge of the GVG-AI competition is safe (at least against this baseline). However, I do find that given enough computational budget, it manages to avoid explicitly losing on most games, despite failing to win them and ultimately losing as time expires, suggesting an asymmetry in the current GVG-AI competition's challenge: not losing is significantly easier than winning.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"186 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76859933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860412
Hanneke Kersjes, P. Spronck
The behavior of virtual characters in computer games is usually determined solely by decision trees or finite state machines, which is detrimental to the characters' believability. It has been argued that enhancing virtual characters with emotions, personalities, and moods may make their behavior more diverse and thus more believable. Most research in this direction is based on existing (socio-)psychological literature, but has not been tested in a suitable experimental setting in which humans interact with the virtual characters. In our research, we use a simplified version of the personality model of Ochs et al. [1], which we test in a game in which human participants interact with three agents with different personalities: an extraverted agent, a neurotic agent, and a neutral agent. The model only influences the agents' emotions, which are exhibited solely through their facial expressions. The participants were asked to assess the agents' personalities based on six possible traits. We found that the participants considered the neurotic agent the most neurotic, and there are also indications that the extraverted agent was considered the most extraverted. We conclude that players can indeed distinguish personality differences between agents based on their facial expression of emotions. Therefore, using a personality model may make it easy for game developers to quickly create a wide variety of virtual characters that exhibit individual behaviors, making them more believable.
{"title":"Modeling believable game characters","authors":"Hanneke Kersjes, P. Spronck","doi":"10.1109/CIG.2016.7860412","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860412","url":null,"abstract":"The behavior of virtual characters in computer games is usually determined solely by decision trees or finite state machines, which is detrimental to the characters' believability. It has been argued that enhancing the virtual characters with emotions, personalities, and moods, may make their behavior more diverse and thus more believable. Most research in this direction is based on existing (socio-)psychological literature, but not tested in a suitable experimental setting where humans interact with the virtual characters. In our research, we use a simplified version of the personality model of Ochs et al. [1], which we test in a game which has human participants interact with three agents with different personalities: an extraverted agent, a neurotic agent, and a neutral agent. The model only influences the agents' emotions, which are only exhibited by their facial expressions. The participants were asked to assess the agents' personality based on six possible traits. We found that the participants considered the neurotic agent as the most neurotic, while there are also indications that the extraverted agent was considered the most extraverted. We conclude that players will indeed distinguish personality differences between agents based on their facial expression of emotions. Therefore, using a personality model may make it easy for game developers to quickly create a high variety of virtual characters, who exhibit individual behaviors, making them more believable.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"159 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77208657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860413
Mateusz Kurek, Wojciech Jaśkowski
Deep Q-Learning is an effective reinforcement learning method which has recently achieved human-level performance on a set of Atari 2600 games. Remarkably, the system was trained on high-dimensional raw visual data. Is Deep Q-Learning equally effective for problems involving a low-dimensional state space? To answer this question, we evaluate the components of Deep Q-Learning (deep architecture, experience replay, target network freezing, and meta-state) on a Keepaway soccer problem, where the state is described by only 13 variables. The results indicate that although experience replay indeed improves agent performance, target network freezing and the meta-state slow down the learning process. Moreover, the deep architecture does not help on this task, since a rather shallow network with just two hidden layers performed best. By selecting the best settings and employing heterogeneous team learning, we were able to outperform all previous methods applied to Keepaway soccer using a fraction of the runner-up's computational expense. These results extend our understanding of the effectiveness of Deep Q-Learning for low-dimensional reinforcement learning tasks.
{"title":"Heterogeneous team deep q-learning in low-dimensional multi-agent environments","authors":"Mateusz Kurek, Wojciech Jaśkowski","doi":"10.1109/CIG.2016.7860413","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860413","url":null,"abstract":"Deep Q-Learning is an effective reinforcement learning method, which has recently obtained human-level performance for a set of Atari 2600 games. Remarkably, the system was trained on the high-dimensional raw visual data. Is Deep Q-Learning equally valid for problems involving a low-dimensional state space? To answer this question, we evaluate the components of Deep Q-Learning (deep architecture, experience replay, target network freezing, and meta-state) on a Keepaway soccer problem, where the state is described only by 13 variables. The results indicate that although experience replay indeed improves the agent performance, target network freezing and meta-state slow down the learning process. Moreover, the deep architecture does not help for this task since a rather shallow network with just two hidden layers worked the best. By selecting the best settings, and employing heterogeneous team learning, we were able to outperform all previous methods applied to Keepaway soccer using a fraction of the runner-up's computational expense. These results extend our understanding of the Deep Q-Learning effectiveness for low-dimensional reinforcement learning tasks.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"74 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85188453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860438
E. Powley, S. Colton, Swen E. Gaudl, Rob Saunders, M. Nelson
We provide a proof of principle that novel and engaging mobile casual games with new aesthetics, game mechanics, and player interactions can be designed and tested directly on the device for which they are intended. We describe the Gamika iOS application, which includes generative art assets; a design interface enabling the creation of physics-based casual games containing multiple levels, with aspects ranging from Frogger-like to Asteroids-like and beyond; a configurable automated playtester which can give feedback on the playability of levels; and an automated fine-tuning engine which searches for level parameterisations that enable the game to pass a battery of tests, as evaluated by the auto-playtester. Each aspect of the implementation represents a baseline with much room for improvement, and we present some experimental results and describe how these will guide future directions for Gamika.
{"title":"Semi-automated level design via auto-playtesting for handheld casual game creation","authors":"E. Powley, S. Colton, Swen E. Gaudl, Rob Saunders, M. Nelson","doi":"10.1109/CIG.2016.7860438","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860438","url":null,"abstract":"We provide a proof of principle that novel and engaging mobile casual games with new aesthetics, game mechanics and player interactions can be designed and tested directly on the device for which they are intended. We describe the Gamika iOS application which includes generative art assets; a design interface enabling the making of physics-based casual games containing multiple levels with aspects ranging from Frogger-like to Asteroids-like and beyond; a configurable automated playtester which can give feedback on the playability of levels; and an automated fine-tuning engine which searches for level parameterisations that enable the game to pass a battery of tests, as evaluated by the auto-playtester. Each aspect of the implementation represents a baseline with much room for improvement, and we present some experimental results and describe how these will guide the future directions for Gamika.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"101 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83152619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860384
Hendrik Horn, Vanessa Volz, Diego Perez Liebana, M. Preuss
In the General Video Game Playing competitions of recent years, both Monte Carlo tree search and Evolutionary Algorithm based controllers have been successful. However, both approaches have weaknesses, suggesting that suitable hybrids could outperform both. We envision and experimentally compare several types of hybrids of the two basic approaches, as well as some possible extensions. In order to achieve a better understanding of the games in the competition and the strengths and weaknesses of different controllers, we also propose and apply a novel game difficulty estimation scheme based on several observable game characteristics.
{"title":"MCTS/EA hybrid GVGAI players and game difficulty estimation","authors":"Hendrik Horn, Vanessa Volz, Diego Perez Liebana, M. Preuss","doi":"10.1109/CIG.2016.7860384","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860384","url":null,"abstract":"In the General Video Game Playing competitions of the last years, Monte-Carlo tree search as well as Evolutionary Algorithm based controllers have been successful. However, both approaches have certain weaknesses, suggesting that certain hybrids could outperform both. We envision and experimentally compare several types of hybrids of two basic approaches, as well as some possible extensions. In order to achieve a better understanding of the games in the competition and the strength and weaknesses of different controllers, we also propose and apply a novel game difficulty estimation scheme based on several observable game characteristics.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"255 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89183411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860434
Cristinel Patrascu, S. Risi
Procedural content generation has shown promise in a variety of different games. In this paper we introduce a new kind of game, called Artefacts, that combines a sandbox-like environment akin to Minecraft with the ability to interactively evolve unique three-dimensional building blocks. Artefacts not only allows players to collaborate by building larger structures from evolved objects but also lets them continue the evolution of others' artefacts. Results from playtests on three different game iterations indicate that players generally enjoy playing the game and are able to discover a wide variety of different 3D objects. Moreover, while there is no explicit goal in Artefacts, the sandbox environment together with the ability to evolve unique shapes does allow some interesting gameplay to emerge.
{"title":"Artefacts: Minecraft meets collaborative interactive evolution","authors":"Cristinel Patrascu, S. Risi","doi":"10.1109/CIG.2016.7860434","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860434","url":null,"abstract":"Procedural content generation has shown promise in a variety of different games. In this paper we introduce a new kind of game, called Artefacts, that combines a sandbox-like environment akin to Minecraft with the ability to interactively evolve unique three-dimensional building blocks. Artefacts does not only allow players to collaborate by building larger structures from evolved objects but also to continue evolution of others' artefacts. Results from playtests on three different game iterations indicate that players generally enjoy playing the game and are able to discover a wide variety of different 3D objects. Morever, while there is no explicit goal in Artefacts, the sandbox environment together with the ability to evolve unique shapes does allow for some interesting gameplay to emerge.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"15 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81970931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860406
C. Guckelsberger, Christoph Salge, S. Colton
Non-player characters (NPCs) in games are traditionally hard-coded or dependent on pre-specified goals, and consequently struggle to behave sensibly in ever-changing and possibly unpredictable game worlds. To make them fit for new developments in procedural content generation, we introduce the principle of Coupled Empowerment Maximisation as an intrinsic motivation for game NPCs. We focus on the development of a general game companion, designed to support the player in achieving their goals. We evaluate our approach against three intuitive and abstract companion duties. We develop dedicated scenarios for each duty in a dungeon-crawler game testbed, and provide qualitative evidence that the emergent NPC behaviour fulfils these duties. We argue that this generic approach can speed up NPC AI development, improve automatic game evolution and introduce NPCs to full game-generation systems.
{"title":"Intrinsically motivated general companion NPCs via Coupled Empowerment Maximisation","authors":"C. Guckelsberger, Christoph Salge, S. Colton","doi":"10.1109/CIG.2016.7860406","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860406","url":null,"abstract":"Non-player characters (NPCs) in games are traditionally hard-coded or dependent on pre-specified goals, and consequently struggle to behave sensibly in ever-changing and possibly unpredictable game worlds. To make them fit for new developments in procedural content generation, we introduce the principle of Coupled Empowerment Maximisation as an intrinsic motivation for game NPCs. We focus on the development of a general game companion, designed to support the player in achieving their goals. We evaluate our approach against three intuitive and abstract companion duties. We develop dedicated scenarios for each duty in a dungeon-crawler game testbed, and provide qualitative evidence that the emergent NPC behaviour fulfils these duties. We argue that this generic approach can speed up NPC AI development, improve automatic game evolution and introduce NPCs to full game-generation systems.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"75 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76011561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860437
T. DeWitt, S. Louis, Siming Liu
This paper extends prior work on generating two-dimensional micro for Real-Time Strategy games to three dimensions. We extend our influence map and potential fields representation to three dimensions and compare two hill-climbers with a genetic algorithm on the problem of generating high-performance influence map, potential field, and reactive control parameters that govern the behavior of units in an open-source Real-Time Strategy game. Results indicate that genetic algorithms evolve better ranged-unit behaviors, which inflict damage on enemies while kiting to avoid damage. Genetic algorithms also evolve better melee-unit behaviors, which concentrate firepower on selected enemies to decrease the opposing army's effectiveness. Evolved behaviors, particularly for ranged units, generalize well to new scenarios. Our work thus provides evidence for the viability of an influence-map and potential-fields based representation for reactive control algorithms in games, 3D simulations, and aerial vehicle swarms.
{"title":"Evolving micro for 3D Real-Time Strategy games","authors":"T. DeWitt, S. Louis, Siming Liu","doi":"10.1109/CIG.2016.7860437","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860437","url":null,"abstract":"This paper extends prior work in generating two dimensional micro for Real-Time Strategy games to three dimensions. We extend our influence map and potential fields representation to three dimensions and compare two hill-climbers with a genetic algorithm on the problem of generating high performance influence map, potential field, and reactive control parameters that control the behavior of units in an open source Real-Time Strategy game. Results indicate that genetic algorithms evolve better behaviors for ranged units that inflict damage on enemies while kiting to avoid damage. Additionally, genetic algorithms evolve better behaviors for melee units that concentrate firepower on selective enemies to decrease the opposing army's effectiveness. Evolved behaviors, particularly for ranged units, generalize well to new scenarios. Our work thus provides evidence for the viability of an influence map and potential fields based representation for reactive control algorithms in games, 3D simulations, and aerial vehicle swarms.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"14 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73991380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}