Transfer learning for cross-game prediction of player experience
Noor Shaker, Mohamed Abou-Zleikha
2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8. Pub Date: 2016-09-01. DOI: 10.1109/CIG.2016.7860415
Abstract: Several studies on cross-domain user behaviour have revealed generic personality traits and behavioural patterns. This paper proposes quantitative approaches that use knowledge of player behaviour in one game to seed the process of building player experience models in another. We investigate two settings. In the supervised feature mapping method, we use labelled datasets about players' behaviour in two games; the goal is to establish a mapping between the features so that models built on one dataset can be used on the other by simple feature replacement. In the unsupervised transfer learning scenario, the goal is to find a shared space of correlated features based on unlabelled data; features in the shared space are then used to construct models for one game that work directly on the transferred features of the other game. We implemented and analysed both approaches, and we show that transferring knowledge of player experience between domains is indeed possible and ultimately useful when studying player behaviour and when designing user studies.
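The unsupervised setting described above can be illustrated with a minimal, CCA-style sketch. Everything below is a hypothetical illustration, not the paper's method, and it assumes paired, unlabelled observations (the same players seen in both games):

```python
import numpy as np

def shared_feature_space(X, Y, k=1):
    """CCA-like sketch: find a k-dimensional shared space in which the
    features of two unlabelled, row-aligned datasets are maximally
    cross-correlated, via SVD of the cross-covariance of the
    standardised features."""
    Xs = (X - X.mean(0)) / X.std(0)
    Ys = (Y - Y.mean(0)) / Y.std(0)
    C = Xs.T @ Ys / len(X)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(C)
    # project each dataset onto its top-k shared directions
    return Xs @ U[:, :k], Ys @ Vt.T[:, :k]
```

A model trained on the projected features of one game could then be applied directly to the projection of the other, which is the gist of the "shared space" idea in the abstract.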
Evolutionary deckbuilding in Hearthstone
P. García-Sánchez, A. Tonda, Giovanni Squillero, A. García, J. J. M. Guervós
2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8. Pub Date: 2016-09-01. DOI: 10.1109/CIG.2016.7860426
Abstract: One of the most notable features of collectible card games is deckbuilding, that is, defining a personalized deck before the real game. Deckbuilding is a challenge that involves a large and rugged search space, unpredictable changes in behaviour after even simple card changes, and hidden information. In this paper, we explore the possibility of automated deckbuilding: a genetic algorithm is applied to the task, with the evaluation delegated to a game simulator that tests every candidate deck against a varied and representative range of human-made decks. In these preliminary experiments, the approach has proven able to create quite effective decks, a promising result showing that, even in this challenging environment, evolutionary algorithms can find good solutions.
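As an illustration of the general technique (not the paper's actual setup), a toy genetic algorithm for deckbuilding might look like the sketch below; `CARD_POOL`, `DECK_SIZE`, and the synthetic `fitness` are placeholders for the real card set and the game-simulator evaluation:

```python
import random

CARD_POOL = list(range(60))   # hypothetical card ids
DECK_SIZE = 15                # hypothetical deck size

def fitness(deck):
    # Toy stand-in for the paper's game-simulator evaluation:
    # reward decks containing many even-numbered "synergy" cards.
    return sum(1 for c in deck if c % 2 == 0)

def mutate(deck, rate=0.2):
    # replace each card with a random one with probability `rate`
    return [random.choice(CARD_POOL) if random.random() < rate else c
            for c in deck]

def crossover(a, b):
    # one-point crossover between two parent decks
    cut = random.randrange(1, DECK_SIZE)
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=40):
    random.seed(42)
    pop = [[random.choice(CARD_POOL) for _ in range(DECK_SIZE)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]      # truncation selection, elitist
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)
```

In the paper's setting, `fitness` would instead run simulated matches against a pool of human-made decks, which is far more expensive but measures what actually matters: win rate.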
Three types of forward pruning techniques to apply the alpha beta algorithm to turn-based strategy games
Naoyuki Sato, Kokolo Ikeda
2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8. Pub Date: 2016-09-01. DOI: 10.1109/CIG.2016.7860427
Abstract: Turn-based strategy games are interesting testbeds for developing artificial players because their rules present developers with several challenges. Currently, Monte-Carlo tree search variants are often used to address these challenges. However, we consider it worthwhile to introduce minimax search variants with pruning techniques, because turn-based strategy games are in some respects similar to chess and Shogi, in which minimax variants are known to be effective. We therefore introduce three forward-pruning techniques that make alpha-beta search (as a minimax search variant) applicable to turn-based strategy games: fixing unit action orders, generating unit actions selectively, and limiting the number of moving units in a search. We applied the proposed pruning methods by implementing an alpha-beta-based artificial player on the Turn-based strategy Academic Package (TUBSTAP) open platform of our institute. This player competed against the first- and second-ranked players of the 2016 TUBSTAP AI competition and won on five different maps with an average win rate exceeding 70%.
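The alpha-beta core with one of the described pruning ideas, selective action generation, can be sketched as follows. The `beam` parameter and the toy game interface (`moves_fn`, `apply_fn`, `eval_fn`) are illustrative assumptions, not the paper's implementation:

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing,
              moves_fn, apply_fn, eval_fn, beam=3):
    """Alpha-beta search with forward pruning by selective action
    generation: only the `beam` most promising moves (by a shallow
    static ordering) are expanded at each node."""
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return eval_fn(state)
    # forward pruning: order moves by static evaluation of the
    # resulting state and keep only the best `beam` of them
    moves = sorted(moves, key=lambda m: eval_fn(apply_fn(state, m)),
                   reverse=maximizing)[:beam]
    if maximizing:
        value = -math.inf
        for m in moves:
            value = max(value, alphabeta(apply_fn(state, m), depth - 1,
                                         alpha, beta, False,
                                         moves_fn, apply_fn, eval_fn, beam))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                     # alpha-beta cutoff
        return value
    value = math.inf
    for m in moves:
        value = min(value, alphabeta(apply_fn(state, m), depth - 1,
                                     alpha, beta, True,
                                     moves_fn, apply_fn, eval_fn, beam))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value
```

Forward pruning trades exactness for speed: unlike plain alpha-beta, a move discarded by the beam can never be recovered, which is why the paper pairs such pruning with domain knowledge about unit actions.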
Deep Q-learning using redundant outputs in visual doom
Hyun-Soo Park, Kyung-Joong Kim
2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-2. Pub Date: 2016-09-01. DOI: 10.1109/CIG.2016.7860387
Abstract: Recently, there has been growing interest in applying deep learning to the game AI domain. Among such methods, deep reinforcement learning is the most prominent in game AI communities. In this paper, we propose using redundant outputs to adapt the training progress in deep reinforcement learning. We compare our method with the standard ε-greedy policy on the ViZDoom platform. Since the AI player must select actions based only on visual input, the platform is well suited to deep reinforcement learning research. Experimental results show that our proposed method achieves performance competitive with ε-greedy without parameter tuning.
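The redundant-output scheme itself is not reproduced here, but the ε-greedy baseline the paper compares against is standard and fits in a few lines:

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Standard ε-greedy action selection: with probability ε pick a
    uniformly random action (exploration), otherwise pick the action
    with the highest Q-value (exploitation)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)
```

In deep Q-learning, `q_values` would be the network's output for the current frame, and ε is typically annealed from 1.0 towards a small value over training; the paper's point is precisely that tuning this schedule can be avoided.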
Evaluating real-time strategy game states using convolutional neural networks
Marius Stanescu, Nicolas A. Barriga, Andy Hess, M. Buro
2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-7. Pub Date: 2016-09-01. DOI: 10.1109/CIG.2016.7860439
Abstract: Real-time strategy (RTS) games, such as Blizzard's StarCraft, are fast-paced war simulation games in which players have to manage economies, control many dozens of units, and deal with uncertainty about opposing unit locations in real time. Even in perfect-information settings, constructing strong AI systems has been difficult due to enormous state and action spaces and the lack of good state evaluation functions and high-level action abstractions. To this day, good human players still handily defeat the best RTS game AI systems, but this may change in the near future given the recent success of deep convolutional neural networks (CNNs) in computer Go, which demonstrated how networks can be used to evaluate complex game states accurately and to focus look-ahead search. In this paper we present a CNN for RTS game state evaluation that goes beyond commonly used material-based evaluations by also taking spatial relations between units into account. We evaluate the CNN's performance by comparing it with various other evaluation functions by means of tournaments played by several state-of-the-art search algorithms. We find that, despite its much slower evaluation speed, the CNN-based search on average performs significantly better than simpler but faster evaluations. These promising initial results, together with recent advances in hierarchical search, suggest that dominating human players in RTS games may not be far off.
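The paper's CNN is not reproduced here, but the idea of going beyond material counts by scoring spatial relations can be illustrated with a hand-rolled NumPy convolution over unit-presence grids; the kernel and the 0.1 weighting are arbitrary illustrative choices:

```python
import numpy as np

def convolve2d_valid(grid, kernel):
    """Minimal 2-D 'valid' convolution (no padding) in plain NumPy."""
    kh, kw = kernel.shape
    h, w = grid.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(grid[i:i + kh, j:j + kw] * kernel)
    return out

def evaluate_state(own_units, enemy_units, kernel):
    """Toy spatial evaluation: material difference plus a convolution
    term that rewards clustered (mutually supporting) own units."""
    material = own_units.sum() - enemy_units.sum()
    spatial = convolve2d_valid(own_units - enemy_units, kernel).sum()
    return material + 0.1 * spatial
```

A material-only evaluation cannot distinguish a clustered army from a scattered one; the convolution term can, which is the qualitative advantage the abstract claims for the learned CNN.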
Design influence on player retention: A method based on time varying survival analysis
Thibault Allart, G. Levieux, M. Pierfitte, Agathe Guilloux, S. Natkin
2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8. Pub Date: 2016-09-01. DOI: 10.1109/CIG.2016.7860421
Abstract: This paper proposes a method to help understand the influence of game design on player retention. Using Far Cry® 4 data, we illustrate how playtime measures can be used to identify time periods in which players are more likely to stop playing. First, we show that a benchmark can easily be performed for every game available on Steam using publicly available data. Then, we introduce how survival analysis can help model the influence of game variables on player retention. The game environment and player characteristics change over time, and tracking systems already store those changes, but existing models that handle time-varying covariates cannot scale to the huge datasets produced by video game monitoring. We therefore propose a model that both handles time-varying covariates and is well suited to big datasets. As a given game variable can have a changing effect over time, we also include time-varying coefficients in our model. We used this survival analysis model to quantify the effect of Far Cry 4 weapon usage on player retention.
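As background for the survival-analysis framing (much simpler than the paper's time-varying-covariate model), the classic Kaplan-Meier estimator over playtime durations can be sketched as follows; here "censored" means the player was still playing when observation ended:

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival curve sketch:
    S(t) = prod over event times t_i <= t of (1 - d_i / n_i),
    where d_i players stopped at t_i and n_i were still 'at risk'
    (playing) just before t_i. `observed[i]` is 1 if player i was
    seen to stop, 0 if censored."""
    pairs = sorted(zip(durations, observed))
    n = len(pairs)
    curve, s = [], 1.0
    at_risk = n
    i = 0
    while i < n:
        t = pairs[i][0]
        d = c = 0
        while i < n and pairs[i][0] == t:
            if pairs[i][1]:
                d += 1                    # stopped playing at t
            else:
                c += 1                    # censored at t
            i += 1
        if d:
            s *= 1 - d / at_risk
            curve.append((t, s))
        at_risk -= d + c
    return curve
```

Plotting such a curve per game already reveals the "drop-off periods" the abstract mentions; the paper's contribution is regressing the hazard on design variables that change over time, which this estimator does not do.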
Breeding a diversity of Super Mario behaviors through interactive evolution
Patrikk D. Sørensen, Jeppeh M. Olsen, S. Risi
2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-7. Pub Date: 2016-09-01. DOI: 10.1109/CIG.2016.7860436
Abstract: Creating controllers for NPCs in video games is traditionally a challenging and time-consuming task. While automated learning methods such as neuroevolution (i.e. evolving artificial neural networks) have shown promise in this context, they often still require carefully designed fitness functions. In this paper, we show how casual users can create controllers for Super Mario Bros. through an interactive evolutionary computation (IEC) approach, without prior domain or programming knowledge. By iteratively selecting Super Mario behaviors from a set of candidates, users are able to guide evolution towards behaviors they prefer. The results of a user test show that the participants are able to evolve controllers with very diverse behaviors, which would be difficult to obtain through automated approaches. Additionally, the user-evolved controllers perform as well as controllers evolved with a traditional fitness-based approach in terms of distance traveled. The results suggest that IEC is a viable alternative for designing diverse controllers for video games that could be extended to other games in the future.
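The IEC loop can be sketched as below, with a scoring function standing in for the human's favourite-picking step; the genome encoding, population size, and mutation scale are all illustrative assumptions, not the paper's controller representation:

```python
import random

def interactive_evolution(user_prefers, genome_len=8, pop_size=6, rounds=10):
    """IEC sketch: each round the 'user' picks their favourite from a
    small candidate set, and the next generation is bred by mutating
    that favourite. `user_prefers` stands in for the human selection
    step; in the paper the candidates are Mario behaviours shown to
    the participant."""
    random.seed(7)
    pop = [[random.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(rounds):
        favourite = max(pop, key=user_prefers)   # the user's selection
        # keep the favourite (elitism) and breed mutated variants of it
        pop = [favourite] + [
            [g + random.gauss(0, 0.1) for g in favourite]
            for _ in range(pop_size - 1)
        ]
    return max(pop, key=user_prefers)
```

The key difference from fitness-based evolution is that no explicit fitness function exists: the selection pressure comes entirely from whatever the user happens to prefer, which is why diverse behaviours emerge.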
Recovering visibility and dodging obstacles in pursuit-evasion games
Ahmed Abdelkader
2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-6. Pub Date: 2016-09-01. DOI: 10.1109/CIG.2016.7860419
Abstract: Pursuit-evasion games encompass a wide range of planning problems with a variety of constraints on the motion of agents. We study the visibility-based variant where a pursuer is required to keep an evader in sight, while the evader is assumed to attempt to hide as soon as possible. This is particularly relevant in the context of video games where non-player characters of varying skill levels frequently chase after and attack the player. In this paper, we show that a simple dual formulation of the problem can be integrated into the traditional model to derive optimal strategies that tolerate interruptions in visibility resulting from motion among obstacles. Furthermore, using the enhanced model we propose a competitive procedure to maintain the optimal strategies in a dynamic environment where obstacles can change both shape and location. We prove the correctness of our algorithms and present results for different maps.
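The visibility predicate at the heart of such pursuit-evasion problems can be sketched as a sampled line-of-sight test on a grid map; this is an illustrative stand-in, not the paper's exact geometric model:

```python
def line_of_sight(grid, a, b):
    """Visibility sketch: does the straight segment from pursuer cell
    `a` to evader cell `b` avoid all obstacle cells (grid value 1)?
    The segment is sampled at fine intervals; a stand-in for exact
    ray casting against polygonal obstacles."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0)) * 4 or 1
    for k in range(steps + 1):
        t = k / steps
        x = round(x0 + (x1 - x0) * t)
        y = round(y0 + (y1 - y0) * t)
        if grid[y][x]:
            return False                 # an obstacle blocks the view
    return True
```

A pursuer strategy repeatedly queries such a predicate; the paper's contribution is what to do when it transiently returns false, i.e. how to recover visibility after an interruption.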
Position-based reinforcement learning biased MCTS for General Video Game Playing
C. Chu, Suguru Ito, Tomohiro Harada, R. Thawonmas
2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8. Pub Date: 2016-09-01. DOI: 10.1109/CIG.2016.7860449
Abstract: This paper proposes an application of reinforcement learning and position-based features to rollout bias training of Monte-Carlo Tree Search (MCTS) for General Video Game Playing (GVGP). As an improvement on the Knowledge-based Fast-Evo MCTS proposed by Perez et al., the proposed method targets both the GVG-AI Competition and the improvement of the original method's learning mechanism. The performance of the proposed method is evaluated empirically on all games from the six training sets available in the GVG-AI Framework, and overall it achieves better scores than five other existing MCTS-based methods.
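A biased rollout policy of the general kind described, sampling actions by softmax over a linear score of position-based features, can be sketched as follows; the linear form, the `temperature` parameter, and the feature encoding are illustrative assumptions, not the paper's exact scheme:

```python
import math
import random

def biased_rollout_action(actions, weights, features, temperature=1.0):
    """Biased MCTS rollout sketch: instead of uniform random moves,
    sample an action with softmax probability over a linear score of
    its position-based features. `weights` are assumed to have been
    trained beforehand (e.g. by a reinforcement learning step)."""
    scores = [sum(w * f for w, f in zip(weights, features[a])) / temperature
              for a in range(len(actions))]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    r = random.random() * total           # roulette-wheel sampling
    for a, e in enumerate(exps):
        r -= e
        if r <= 0:
            return actions[a]
    return actions[-1]
```

Replacing uniform rollouts with such a policy is what "rollout bias training" buys: simulations spend more time on plausible lines, so the value estimates at the tree frontier improve for the same budget.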
Discovering playing patterns: Time series clustering of free-to-play game data
A. Saas, Anna Guitart, Á. Periáñez
2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8. Pub Date: 2016-09-01. DOI: 10.1109/CIG.2016.7860442
Abstract: The classification of time series data is a challenge common to all data-driven fields. However, there is no agreement on which techniques are most efficient for grouping unlabeled time-ordered data, because a successful classification of time series patterns depends on the goal and the domain of interest, i.e. it is application-dependent. In this article, we study free-to-play game data. In this domain, clustering similar time series information is increasingly important due to the large amount of data collected by current mobile and web applications. We evaluate which methods accurately cluster time series from mobile games, focusing on player behavior data. We identify and validate several aspects of the clustering: the similarity measures and the representation techniques used to reduce the high dimensionality of time series. As a robustness test, we compare various temporal datasets of player activity from two free-to-play video games. With these techniques we extract temporal patterns of player behavior relevant for the evaluation of game events and game-business diagnosis. Our experiments provide intuitive visualizations to validate the results of the clustering and to determine the optimal number of clusters. Additionally, we assess the common characteristics of players belonging to the same group. This study allows us to improve our understanding of player dynamics and churn behavior.
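One common combination for this task, a dynamic-time-warping distance plus a simple distance-threshold clustering, can be sketched as follows (illustrative only; the paper evaluates several similarity measures and representations):

```python
def dtw(a, b):
    """Dynamic time warping distance, a common similarity measure for
    behaviour time series of different speeds and lengths."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def cluster_by_threshold(series, threshold):
    """Greedy clustering sketch: assign each series to the first
    cluster whose representative is within `threshold` DTW distance,
    otherwise start a new cluster."""
    reps, labels = [], []
    for s in series:
        for k, r in enumerate(reps):
            if dtw(s, r) <= threshold:
                labels.append(k)
                break
        else:
            reps.append(s)
            labels.append(len(reps) - 1)
    return labels
```

DTW tolerates the misalignments typical of player activity curves (a player who ramps up a day later still matches the same pattern), which a plain Euclidean distance would penalise heavily.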