Reactive strategy choice in StarCraft by means of Fuzzy Control
Pub Date: 2013-10-17 | DOI: 10.1109/CIG.2013.6633627
M. Preuss, Daniel Kozakowski, Johan Hagelbäck, H. Trautmann
Current StarCraft bots are not very flexible in their strategy choice; most simply follow a single manually optimized strategy, usually a rush. We suggest a method of augmenting existing bots via Fuzzy Control so that they react to the current game situation. Based on the available information, the best-matching strategy is chosen from a pool of strategies. While the method is very general and can easily be applied to many bots, we implement it for the existing BTHAI bot and show experimentally how the modifications affect its gameplay and how it improves on the original version.
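By way of illustration, here is a minimal sketch of fuzzy strategy selection in this spirit: two hypothetical crisp inputs are fuzzified with triangular membership functions, a small rule base scores each strategy, and the best-matching one is picked. The features, rules, and strategy pool are assumptions for the example; the paper's actual rule base and BTHAI integration are not reproduced here.

```python
# A toy fuzzy strategy selector; inputs, terms, and rules are illustrative
# placeholders, not the paper's actual design.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def choose_strategy(enemy_army, own_economy):
    # Fuzzify the (hypothetical) crisp inputs into linguistic terms.
    small_army = tri(enemy_army, -1, 0, 30)
    large_army = tri(enemy_army, 20, 60, 1e9)
    weak_econ = tri(own_economy, -1, 0, 40)
    strong_econ = tri(own_economy, 30, 80, 1e9)

    # Each rule supports one strategy with the strength of its antecedent (min = AND).
    scores = {
        "rush": min(small_army, weak_econ),
        "expand": min(small_army, strong_econ),
        "defend": min(large_army, weak_econ),
        "push": min(large_army, strong_econ),
    }
    # Pick the best-matching strategy from the pool.
    return max(scores, key=scores.get)

print(choose_strategy(enemy_army=10, own_economy=70))  # -> "expand"
```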
{"title":"Reactive strategy choice in StarCraft by means of Fuzzy Control","authors":"M. Preuss, Daniel Kozakowski, Johan Hagelbäck, H. Trautmann","doi":"10.1109/CIG.2013.6633627","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633627","url":null,"abstract":"Current StarCraft bots are not very flexible in their strategy choice, most of them just follow a manually optimized one, usually a rush. We suggest a method of augmenting existing bots via Fuzzy Control in order to make them react on the current game situation. According to the available information, the best matching of a pool of strategies is chosen. While the method is very general and can be applied easily to many bots, we implement it for the existing BTHAI bot and show experimentally how the modifications affects its gameplay, and how it is improved compared to the original version.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132721714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive game level creation through rank-based interactive evolution
Pub Date: 2013-10-17 | DOI: 10.1109/CIG.2013.6633651
Antonios Liapis, H. P. Martínez, J. Togelius, Georgios N. Yannakakis
This paper introduces Rank-based Interactive Evolution (RIE), an alternative to interactive evolution in which computational models of user preferences drive the generation of personalized content. In RIE, the computational models are adapted to the users' preferences and, in turn, used as fitness functions for optimizing the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using artificial agents. Results suggest that RIE is both faster and more robust than standard interactive evolution and outperforms other state-of-the-art interactive evolution approaches.
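The loop below is a minimal sketch of the RIE idea under strong simplifying assumptions: maps are reduced to plain feature vectors, the preference model is a linear ranker updated perceptron-style from the user's pairwise choices, and a scripted stand-in plays the role of the user. The paper's actual preference-learning model and map encoding differ.

```python
# Sketch of the RIE loop: rank feedback trains a model, the model is the
# fitness function for evolution. All representations here are assumptions.
import random

def rank_update(w, preferred, other, lr=0.1):
    """Nudge weights so the preferred candidate scores higher (pairwise rank step)."""
    return [wi + lr * (p - o) for wi, p, o in zip(w, preferred, other)]

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def mutate(x, sigma=0.1):
    return [xi + random.gauss(0, sigma) for xi in x]

random.seed(1)
dim = 4
w = [0.0] * dim
population = [[random.random() for _ in range(dim)] for _ in range(10)]

for generation in range(20):
    # Interactive step: the "user" ranks two shown candidates; this stand-in
    # user simply prefers maps with a larger first feature.
    a, b = random.sample(population, 2)
    preferred, other = (a, b) if a[0] > b[0] else (b, a)
    w = rank_update(w, preferred, other)
    # Evolution step: the learned model, not the user, acts as the fitness function.
    population.sort(key=lambda x: score(w, x), reverse=True)
    population = population[:5] + [mutate(p) for p in population[:5]]

print(max(score(w, x) for x in population))
```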
{"title":"Adaptive game level creation through rank-based interactive evolution","authors":"Antonios Liapis, H. P. Martínez, J. Togelius, Georgios N. Yannakakis","doi":"10.1109/CIG.2013.6633651","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633651","url":null,"abstract":"This paper introduces Rank-based Interactive Evolution (RIE) which is an alternative to interactive evolution driven by computational models of user preferences to generate personalized content. In RIE, the computational models are adapted to the preferences of users which, in turn, are used as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using artificial agents. Results suggest that RIE is both faster and more robust than standard interactive evolution and outperforms other state-of-the-art interactive evolution approaches.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126076078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using plan-based reward shaping to learn strategies in StarCraft: Broodwar
Pub Date: 2013-10-17 | DOI: 10.1109/CIG.2013.6633622
Kyriakos Efthymiadis, D. Kudenko
StarCraft: Broodwar (SC:BW) is a very popular commercial real-time strategy (RTS) game which has been used extensively in AI research. Despite its popularity as a test-bed, reinforcement learning (RL) has not been evaluated on it extensively. A successful attempt was made to show the use of RL in a small-scale combat scenario involving an overpowered agent battling against multiple enemy units [1]. However, the chosen scenario was very small and not representative of the complexity of the game in its entirety. In order to build an RL agent that can manage the complexity of the full game, more efficient approaches must be used to tackle the state-space explosion. In this paper, we demonstrate how plan-based reward shaping can help an agent scale up to larger, more complex scenarios and significantly speed up the learning process, and how high-level planning can be combined with learning, focusing on learning the StarCraft strategy Battlecruiser Rush. We show empirically that the agent with plan-based reward shaping is significantly better, both in terms of the learnt policy and in convergence speed, than baseline approaches, which fail to reach a good policy within a practical amount of time.
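A minimal sketch of plan-based reward shaping in the potential-based form: the potential of a state is its progress through a high-level plan, and the shaping term gamma * phi(s') - phi(s) is added to the environment reward. The plan steps below are hypothetical stand-ins for a Battlecruiser Rush build order, not the paper's actual encoding.

```python
# Potential-based shaping where the potential counts completed plan steps.
# PLAN is an assumed, simplified build order for illustration only.

GAMMA = 0.99
PLAN = ["barracks", "factory", "starport", "battlecruiser"]  # hypothetical

def phi(state):
    """Potential = number of consecutive plan steps already achieved."""
    done = 0
    for step in PLAN:
        if step in state:
            done += 1
        else:
            break
    return float(done)

def shaped_reward(env_reward, state, next_state):
    return env_reward + GAMMA * phi(next_state) - phi(state)

s = {"barracks"}
s_next = {"barracks", "factory"}
print(shaped_reward(0.0, s, s_next))  # positive: the agent advanced the plan
```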
{"title":"Using plan-based reward shaping to learn strategies in StarCraft: Broodwar","authors":"Kyriakos Efthymiadis, D. Kudenko","doi":"10.1109/CIG.2013.6633622","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633622","url":null,"abstract":"StarCraft: Broodwar (SC:BW) is a very popular commercial real strategy game (RTS) which has been extensively used in AI research. Despite being a popular test-bed reinforcement learning (RL) has not been evaluated extensively. A successful attempt was made to show the use of RL in a small-scale combat scenario involving an overpowered agent battling against multiple enemy units [1]. However, the chosen scenario was very small and not representative of the complexity of the game in its entirety. In order to build an RL agent that can manage the complexity of the full game, more efficient approaches must be used to tackle the state-space explosion. In this paper, we demonstrate how plan-based reward shaping can help an agent scale up to larger, more complex scenarios and significantly speed up the learning process as well as how high level planning can be combined with learning focusing on learning the Starcraft strategy, Battlecruiser Rush. We empirically show that the agent with plan-based reward shaping is significantly better both in terms of the learnt policy, as well as convergence speed when compared to baseline approaches which fail at reaching a good enough policy within a practical amount of time.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125028997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Creating large numbers of game AIs by learning behavior for cooperating units
Pub Date: 2013-10-17 | DOI: 10.1109/CIG.2013.6633608
Stephen Wiens, J. Denzinger, Sanjeev Paskaradevan
We present two improvements to the hybrid learning method for the shout-ahead architecture for units in the game Battle for Wesnoth. The shout-ahead architecture lets units make decisions in two stages: first, each unit determines an action without knowledge of the intentions of the other units; then, after communicating its intended action and likewise receiving the intentions of the other units, it takes these intentions into account for the final decision on the next action. The decision making uses two rule sets; reinforcement learning is used to learn rule weights (which influence decision making), while evolutionary learning is used to evolve good rule sets. Our improvements add knowledge about terrain to the learning and also evaluate unit behaviors on several scenario maps to learn more general rules. The use of terrain knowledge improved the win percentage of evolved teams by 3 to 14 percentage points, depending on the map, while learning from several maps yielded win percentages on unseen maps nearly matching those on the maps learned from.
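The sketch below illustrates the two-stage shout-ahead cycle with weighted rules: a first rule set proposes an action without coordination, intentions are "shouted", and a second rule set revises the decision in light of them. The rules, weights, and toy state are illustrative placeholders, not the learned Wesnoth rules from the paper.

```python
# Two-stage, rule-weighted decision making in the shout-ahead style.
# All rules and weights here are assumed for the example.

def decide(rules, weights, state, intentions=None):
    """Pick the action whose applicable rules carry the most total weight."""
    votes = {}
    for i, (cond, action) in enumerate(rules):
        if cond(state, intentions):
            votes[action] = votes.get(action, 0.0) + weights[i]
    return max(votes, key=votes.get) if votes else "wait"

# Stage-1 rules ignore the other units' intentions.
stage1 = [
    (lambda s, _: s["enemy_near"], "attack"),
    (lambda s, _: not s["enemy_near"], "advance"),
]
# Stage-2 rules may react to the shouted intentions.
stage2 = [
    (lambda s, it: it.count("attack") >= 2, "attack"),  # join a group attack
    (lambda s, it: it.count("attack") < 2, "hold"),     # don't attack alone
]

units = [{"enemy_near": True}, {"enemy_near": True}, {"enemy_near": False}]
intentions = [decide(stage1, [1.0, 1.0], u) for u in units]         # shout
final = [decide(stage2, [1.0, 0.5], u, intentions) for u in units]  # listen, decide
print(intentions, final)
```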
{"title":"Creating large numbers of game AIs by learning behavior for cooperating units","authors":"Stephen Wiens, J. Denzinger, Sanjeev Paskaradevan","doi":"10.1109/CIG.2013.6633608","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633608","url":null,"abstract":"We present two improvements to the hybrid learning method for the shout-ahead architecture for units in the game Battle for Wesnoth. The shout-ahead architecture allows for units to perform decision making in two stages, first determining an action without knowledge of the intentions of other units, then, after communicating the intended action and likewise receiving the intentions of the other units, taking these intentions into account for the final decision on the next action. The decision making uses two rule sets and reinforcement learning is used to learn rule weights (that influence decision making), while evolutionary learning is used to evolve good rule sets. Our improvements add knowledge about terrain to the learning and also evaluate unit behaviors on several scenario maps to learn more general rules. The use of terrain knowledge resulted in improvements in the win percentage of evolved teams between 3 and 14 percentage points for different maps, while using several maps to learn from resulted in nearly similar win percentages on maps not learned from as on the maps learned from.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128381331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LGOAP: Adaptive layered planning for real-time videogames
Pub Date: 2013-10-17 | DOI: 10.1109/CIG.2013.6633624
G. Maggiore, Carlos Santos, D. Dini, Frank Peters, Ha Bouwknegt, P. Spronck
One of the main aims of game AI research is building challenging and believable artificial opponents that act as if capable of strategic thinking. In this paper we describe a novel mechanism that successfully endows NPCs in real-time games with strategic planning capabilities. Our approach creates adaptive behaviours that take into account both long-term and short-term consequences. It is unique in that: (i) it is fast enough to be used for hundreds of agents in real time; (ii) it is flexible, requiring no previous knowledge of the playing field; and (iii) it allows customization of the agents in order to generate differentiated behaviours that derive from virtual personalities.
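As a rough illustration of layered planning in the GOAP spirit, the sketch below computes a coarse strategic plan first and then refines each strategic step into primitive actions. The goal and action tables are hypothetical; the paper's LGOAP layers, cost model, and personality-driven customization are considerably richer.

```python
# Layered refinement: strategic sub-goals expand into tactical primitives.
# Both tables are assumed stand-ins for illustration only.

STRATEGIC = {  # goal -> ordered sub-goals (coarse layer)
    "control_bridge": ["reach_bridge", "clear_enemies", "hold_position"],
}
TACTICAL = {   # sub-goal -> primitive actions (fine layer)
    "reach_bridge": ["path_to(bridge)", "move"],
    "clear_enemies": ["acquire_target", "attack"],
    "hold_position": ["form_up", "overwatch"],
}

def layered_plan(goal):
    plan = []
    for sub_goal in STRATEGIC[goal]:      # long-term consequences first
        plan.extend(TACTICAL[sub_goal])   # then short-term refinement
    return plan

print(layered_plan("control_bridge"))
```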
{"title":"LGOAP: Adaptive layered planning for real-time videogames","authors":"G. Maggiore, Carlos Santos, D. Dini, Frank Peters, Ha Bouwknegt, P. Spronck","doi":"10.1109/CIG.2013.6633624","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633624","url":null,"abstract":"One of the main aims of game AI research is the building of challenging and believable artificial opponents that act as if capable of strategic thinking. In this paper we describe a novel mechanism that successfully endows NPCs in real-time games with strategic planning capabilities. Our approach creates adaptive behaviours that take into account long-term and short term consequences. Our approach is unique in that: (i) it is sufficiently fast to be used for hundreds of agents in real time; (ii) it is flexible in that it requires no previous knowledge of the playing field; and (iii) it allows customization of the agents in order to generate differentiated behaviours that derive from virtual personalities.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"183 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115483097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic generation and analysis of physics-based puzzle games
Pub Date: 2013-10-17 | DOI: 10.1109/CIG.2013.6633633
M. Shaker, Mhd Hasan Sarhan, Ola Al Naameh, Noor Shaker, J. Togelius
In this paper we present a method for the automatic generation of content for the physics-based puzzle game Cut The Rope. An evolutionary generator is implemented that evolves level designs based on a context-free grammar. We present various measures for analyzing the expressivity of the generator and for visualizing the space of content covered. We further perform an experiment on evolving playable content for the game and analyze the results obtained.
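The sketch below shows the grammar-based half of such a pipeline: a context-free grammar is expanded stochastically into a flat level description, which an evolutionary loop could then mutate and select. The grammar is a toy stand-in, not the paper's Cut The Rope design grammar.

```python
# Stochastic expansion of a (hypothetical) level-design grammar.
import random

GRAMMAR = {
    "level": [["candy", "components", "frog"]],
    "components": [["component"], ["component", "components"]],
    "component": [["rope(x,y)"], ["air_cushion(x,y)"], ["bubble(x,y)"]],
}

def expand(symbol, rng):
    if symbol not in GRAMMAR:  # terminal symbol
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    out = []
    for s in production:
        out.extend(expand(s, rng))
    return out

rng = random.Random(7)
print(expand("level", rng))  # one sampled level description
```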
{"title":"Automatic generation and analysis of physics-based puzzle games","authors":"M. Shaker, Mhd Hasan Sarhan, Ola Al Naameh, Noor Shaker, J. Togelius","doi":"10.1109/CIG.2013.6633633","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633633","url":null,"abstract":"In this paper we present a method for the automatic generation of content for the physics-based puzzle game Cut The Rope. An evolutionary game generator is implemented which evolves the design of levels based on a context-free grammar. We present various measures for analyzing the expressivity of the generator and visualizing the space of content covered. We further perform an experiment on evolving playable content of the game and present and analyze the results obtained.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122806671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analytics-driven dynamic game adaption for player retention in Scrabble
Pub Date: 2013-10-17 | DOI: 10.1109/CIG.2013.6633632
Brent E. Harrison, D. Roberts
This paper shows how game analytics can be used in conjunction with an adaptive system to increase player retention at the level of individual game sessions in Scrabblesque, a Flash game based on the popular board game Scrabble. We use game analytic knowledge to create a simplified search space (called the game analytic space) of board states, and we target a distribution of game analytic states that is predictive of players completing a game session of Scrabblesque. Our adaptive system then has a computer-controlled AI opponent take moves that help realize this distribution of game analytic states, with the ultimate goal of reducing the quitting rate. We test this system in a user study comparing how many people quit the adaptive version of Scrabblesque early with how many quit a non-adaptive version early, and we also compare how well the adaptive version is able to influence player behavior as described by game analytics. Our results show that the adaptive system produces a significant reduction in the quitting rate (p = 0.03) compared to the non-adaptive version. In addition, the adaptive version of Scrabblesque better fits the target distribution of game analytic states.
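A minimal sketch of the distribution-targeting idea: among candidate AI moves, pick the one whose predicted analytic state keeps the session's empirical state distribution closest to a target distribution associated with players who finish games. The states, target probabilities, and L1 distance measure are illustrative assumptions.

```python
# Move selection that nudges the session toward a target distribution of
# game-analytic states. All states and numbers here are assumed.
from collections import Counter

TARGET = {"close_game": 0.6, "player_ahead": 0.3, "ai_ahead": 0.1}

def distance(counts, target):
    total = sum(counts.values()) or 1
    return sum(abs(counts.get(s, 0) / total - p) for s, p in target.items())

def pick_move(candidates, history):
    """candidates: list of (move, predicted_analytic_state) pairs."""
    best = None
    for move, state in candidates:
        trial = Counter(history)
        trial[state] += 1
        d = distance(trial, TARGET)
        if best is None or d < best[0]:
            best = (d, move)
    return best[1]

history = ["ai_ahead", "ai_ahead", "close_game"]
moves = [("play_high_word", "ai_ahead"), ("play_low_word", "close_game")]
print(pick_move(moves, history))  # -> "play_low_word"
```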
{"title":"Analytics-driven dynamic game adaption for player retention in Scrabble","authors":"Brent E. Harrison, D. Roberts","doi":"10.1109/CIG.2013.6633632","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633632","url":null,"abstract":"This paper shows how game analytics can be used in conjunction with an adaptive system in order to increase player retention at the level of individual game sessions in Scrabblesque, a Flash game based on the popular board game Scrabble. In this paper, we use game analytic knowledge to create a simplified search space (called the game analytic space) of board states. We then target a distribution of game analytic states that are predictive of players playing a complete game session of Scrabblesque in order to increase player retention. Our adaptive system then has a computer-controlled AI opponent take moves that will help realize this distribution of game analytic states with the ultimate goal of reducing the quitting rate. We test this system by performing a user study in which we compare how many people quit playing the adaptive version of Scrabblesque early and how many people quit playing a nonadaptive version of Scrabblesque early. We also compare how well the adaptive version of Scrabblesque was able to influence player behavior as described by game analytics. Our results show that our adaptive system is able to produce a significant reduction in the quitting rate (p = 0.03) when compared to the non-adaptive version. In addition, the adaptive version of Scrabblesque is able to better fit a target distribution of game analytic states when compared to the non-adaptive version.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125782267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Give me a reason to dig: Minecraft and psychology of motivation
Pub Date: 2013-10-17 | DOI: 10.1109/CIG.2013.6633612
Alessandro Canossa, Josep B. Martinez, J. Togelius
Recently, both game industry professionals and academic researchers have started focusing on player-generated behavioral data as a means of gaining insight into player psychology through data mining. Although some research has already established solid correlations between in-game behavior and personality, most techniques focus on extracting knowledge from in-game behavior data alone. This paper posits that triangulating purely behavioral datasets with established theoretical frameworks, serving as hermeneutic grids, may help extract additional meaning and information. The hermeneutic grid selected for this study is the Reiss Motivation Profiler, applied to behavioral data gathered from Minecraft players.
{"title":"Give me a reason to dig Minecraft and psychology of motivation","authors":"Alessandro Canossa, Josep B. Martinez, J. Togelius","doi":"10.1109/CIG.2013.6633612","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633612","url":null,"abstract":"Recently both game industry professionals and academic researchers have started focusing on player-generated behavioral data as a mean to gather insights on player psychology through datamining. Although some research has already proven solid correlations between in-game behavior and personality, most techniques focus on extracting knowledge from in-game behavior data alone. This paper posits that triangulating exclusively behavioral datasets with established theoretical frameworks serving as hermeneutic grids, may help extracting additional meaning and information. The hermeneutic grid selected for this study is the Reiss Motivation Profiler and it is applied to behavioral data gathered from Minecraft players.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132164739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Portfolio greedy search and simulation for large-scale combat in StarCraft
Pub Date: 2013-10-17 | DOI: 10.1109/CIG.2013.6633643
David Churchill, M. Buro
Real-time strategy video games have proven to be a very challenging area for artificial intelligence research. With their vast state and action spaces and real-time constraints, existing AI solutions have been shown to be too slow or applicable only to small problem sets, while human players still dominate RTS AI systems. This paper makes three contributions to advancing the state of AI for popular commercial RTS game combat, which can involve battles of dozens of units. First, we present SparCraft, an efficient system for modelling abstract RTS combat that can perform millions of unit actions per second and visualize them. Second, we present a modification of the UCT algorithm capable of searching games with simultaneous and durative actions. Finally, we present Portfolio Greedy Search, a novel greedy search algorithm that uses hill climbing and accurate playout-based evaluations to efficiently search even the largest combat scenarios. We demonstrate that Portfolio Greedy Search outperforms state-of-the-art Alpha-Beta and UCT search methods for large StarCraft combat scenarios of up to 50 vs. 50 units under real-time search constraints of 40 ms per search episode.
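The sketch below captures the core Portfolio Greedy Search loop: each friendly unit is assigned a script from a small portfolio, and the assignment is hill-climbed one unit at a time, keeping a change only when the evaluation improves. The evaluation function is a stand-in; the paper scores assignments with playouts of the actual combat state in SparCraft.

```python
# Greedy, unit-by-unit hill climbing over a script portfolio.
# The portfolio names and the toy evaluation are assumptions for illustration.

PORTFOLIO = ["attack_closest", "attack_weakest", "kite"]

def evaluate(assignment):
    """Stand-in for a playout-based evaluation of a script assignment."""
    value = {"attack_closest": 1.0, "attack_weakest": 1.5, "kite": 1.2}
    return sum(value[s] for s in assignment)

def portfolio_greedy_search(n_units, iterations=2):
    assignment = [PORTFOLIO[0]] * n_units        # seed with a default script
    for _ in range(iterations):                  # improvement passes
        for u in range(n_units):                 # hill-climb unit u's script
            best = max(PORTFOLIO,
                       key=lambda s: evaluate(assignment[:u] + [s] + assignment[u + 1:]))
            assignment[u] = best
    return assignment

print(portfolio_greedy_search(3))  # -> all "attack_weakest" under this toy eval
```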
{"title":"Portfolio greedy search and simulation for large-scale combat in starcraft","authors":"David Churchill, M. Buro","doi":"10.1109/CIG.2013.6633643","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633643","url":null,"abstract":"Real-time strategy video games have proven to be a very challenging area for applications of artificial intelligence research. With their vast state and action spaces and real-time constraints, existing AI solutions have been shown to be too slow, or only able to be applied to small problem sets, while human players still dominate RTS AI systems. This paper makes three contributions to advancing the state of AI for popular commercial RTS game combat, which can consist of battles of dozens of units. First, we present an efficient system for modelling abstract RTS combat called SparCraft, which can perform millions of unit actions per second and visualize them. We then present a modification of the UCT algorithm capable of performing search in games with simultaneous and durative actions. Finally, a novel greedy search algorithm called Portfolio Greedy Search is presented which uses hill climbing and accurate playout-based evaluations to efficiently search even the largest combat scenarios. We demonstrate that Portfolio Greedy Search outperforms state of the art Alpha-Beta and UCT search methods for large StarCraft combat scenarios of up to 50 vs. 50 units under real-time search constraints of 40 ms per search episode.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134172644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatial game analytics and visualization
Pub Date: 2013-10-17 | DOI: 10.1109/CIG.2013.6633629
Anders Drachen, Matthias Schubert
The recent emergence of game analytics, together with the development and adaptation of business intelligence techniques to support game design and development, has given data-driven techniques a direct role in game development. Given that all digital games involve some sort of spatial operation, techniques for spatial analysis have had their share in these developments. However, the methods used by the game industry for analyzing and visualizing spatial and spatio-temporal patterns in player behavior are not as diverse as the range of techniques used in game research, leaving room for continued development. This paper presents a review of current work on spatial and spatio-temporal game analytics across industry and research, defining the key terminology and outlining current techniques and their applications. We summarize the current problems and challenges in the field and present four key areas of spatial and spatio-temporal analytics: spatial outlier detection, spatial clustering, spatial predictive models, and spatial pattern and rule mining. All four areas are well established outside the context of games and hold the potential to reshape the research roadmap in game analytics.
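To make one of the four listed areas concrete, here is a minimal sketch of a first step toward spatial clustering: binning player event positions into a coarse grid heatmap, whose dense cells hint at spatial clusters. The positions and grid size are illustrative assumptions, not data from the paper.

```python
# Grid-based heatmap of (hypothetical) player event positions.
from collections import Counter

def heatmap(positions, cell=10.0):
    """Count events per grid cell; dense cells hint at spatial clusters."""
    return Counter((int(x // cell), int(y // cell)) for x, y in positions)

deaths = [(12, 14), (13, 18), (11, 15), (88, 90), (12, 12)]  # assumed sample
for cell, count in heatmap(deaths).most_common():
    print(cell, count)
```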
{"title":"Spatial game analytics and visualization","authors":"Anders Drachen, Matthias Schubert","doi":"10.1109/CIG.2013.6633629","DOIUrl":"https://doi.org/10.1109/CIG.2013.6633629","url":null,"abstract":"The recently emerged field of game analytics and the development and adaptation of business intelligence techniques to support game design and development has given data-driven techniques a direct role in game development. Given that all digital games contain some sort of spatial operation, techniques for spatial analysis had their share in these developments. However, the methods for analyzing and visualizing spatial and spatio-temporal patterns in player behavior being used by the game industry are not as diverse as the range of techniques utilized in game research, leaving room for a continuing development. This paper presents a review of current work on spatial and spatio-temporal game analytics across industry and research, describing and defining the key terminology, outlining current techniques and their application. We summarize the current problems and challenges in the field, and present four key areas of spatial and spatio-temporal analytics: Spatial Outlier Detection, Spatial Clustering, Spatial Predictive Models, Spatial Pattern and Rule Mining. All key areas are well-established outside the context of games and hold the potential to reshape the research roadmap in game analytics.","PeriodicalId":158902,"journal":{"name":"2013 IEEE Conference on Computational Inteligence in Games (CIG)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121501860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}