Evaluating real-time strategy game states using convolutional neural networks
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860439 | 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-7
Marius Stanescu, Nicolas A. Barriga, Andy Hess, M. Buro
Real-time strategy (RTS) games, such as Blizzard's StarCraft, are fast-paced war simulation games in which players have to manage economies, control many dozens of units, and deal with uncertainty about opposing unit locations in real time. Even in perfect information settings, constructing strong AI systems has been difficult due to enormous state and action spaces and the lack of good state evaluation functions and high-level action abstractions. To this day, good human players are still handily defeating the best RTS game AI systems, but this may change in the near future given the recent success of deep convolutional neural networks (CNNs) in computer Go, which demonstrated how networks can be used to evaluate complex game states accurately and to focus look-ahead search. In this paper we present a CNN for RTS game state evaluation that goes beyond commonly used material-based evaluations by also taking spatial relations between units into account. We evaluate the CNN's performance by comparing it with various other evaluation functions by means of tournaments played by several state-of-the-art search algorithms. We find that, despite its much slower evaluation speed, on average the CNN-based search performs significantly better than simpler but faster evaluations. These promising initial results, together with recent advances in hierarchical search, suggest that dominating human players in RTS games may not be far off.
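The abstract gives no implementation details; as a rough, hedged illustration of the general idea only (not the authors' architecture), the sketch below encodes a state as per-player, per-unit-type count planes on a coarse grid and feeds them to a tiny convolutional network that outputs a win-probability estimate. The grid size, number of unit types, and all layer sizes are assumptions.

```python
# Minimal sketch (not the authors' model): a small CNN that maps a
# spatially encoded RTS state to a win-probability estimate.
import torch
import torch.nn as nn

GRID = 16          # assumed board resolution after down-sampling
UNIT_TYPES = 4     # assumed number of unit types per player

class StateEvalCNN(nn.Module):
    def __init__(self, in_planes=2 * UNIT_TYPES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_planes, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, planes):
        # planes: (batch, 2 * UNIT_TYPES, GRID, GRID) unit-count maps
        return torch.sigmoid(self.net(planes))  # estimated P(player 1 wins)

if __name__ == "__main__":
    model = StateEvalCNN()
    dummy_state = torch.rand(1, 2 * UNIT_TYPES, GRID, GRID)
    print(model(dummy_state).item())  # near 0.5 before any training
```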
{"title":"Evaluating real-time strategy game states using convolutional neural networks","authors":"Marius Stanescu, Nicolas A. Barriga, Andy Hess, M. Buro","doi":"10.1109/CIG.2016.7860439","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860439","url":null,"abstract":"Real-time strategy (RTS) games, such as Blizzard's StarCraft, are fast paced war simulation games in which players have to manage economies, control many dozens of units, and deal with uncertainty about opposing unit locations in real-time. Even in perfect information settings, constructing strong AI systems has been difficult due to enormous state and action spaces and the lack of good state evaluation functions and high-level action abstractions. To this day, good human players are still handily defeating the best RTS game AI systems, but this may change in the near future given the recent success of deep convolutional neural networks (CNNs) in computer Go, which demonstrated how networks can be used for evaluating complex game states accurately and to focus look-ahead search. In this paper we present a CNN for RTS game state evaluation that goes beyond commonly used material based evaluations by also taking spatial relations between units into account. We evaluate the CNN's performance by comparing it with various other evaluation functions by means of tournaments played by several state-of-the-art search algorithms. We find that, despite its much slower evaluation speed, on average the CNN based search performs significantly better compared to simpler but faster evaluations. These promising initial results together with recent advances in hierarchical search suggest that dominating human players in RTS games may not be far off.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"35 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77467518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breeding a diversity of Super Mario behaviors through interactive evolution
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860436 | 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-7
Patrikk D. Sørensen, Jeppeh M. Olsen, S. Risi
Creating controllers for NPCs in video games is traditionally a challenging and time-consuming task. While automated learning methods such as neuroevolution (i.e. evolving artificial neural networks) have shown promise in this context, they often still require carefully designed fitness functions. In this paper, we show how casual users can create controllers for Super Mario Bros. through an interactive evolutionary computation (IEC) approach, without prior domain or programming knowledge. By iteratively selecting Super Mario behaviors from a set of candidates, users are able to guide evolution towards behaviors they prefer. The results of a user test show that the participants are able to evolve controllers with very diverse behaviors, which would be difficult to obtain through automated approaches. Additionally, the user-evolved controllers perform as well as controllers evolved with a traditional fitness-based approach in terms of distance traveled. The results suggest that IEC is a viable alternative for designing diverse controllers for video games that could be extended to other games in the future.
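As a hedged illustration of how user choices can stand in for a fitness function (not the authors' system), the sketch below runs a generic interactive evolution loop in which a placeholder ask_user_to_pick function simulates the user's selections; the genome length and mutation settings are assumptions.

```python
# Minimal sketch of an interactive evolution loop (not the authors' system):
# the user's selections replace a hand-crafted fitness function.
import random

POP_SIZE = 8
GENOME_LEN = 16  # assumed: weights of a tiny neural controller

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0.0, sigma) for g in genome]

def ask_user_to_pick(population):
    # Placeholder: in the real setting the user watches each candidate
    # play a level and clicks the behaviours they prefer.
    return random.sample(range(len(population)), k=2)

def interactive_evolution(generations=10):
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        chosen = ask_user_to_pick(population)
        parents = [population[i] for i in chosen]
        # Next generation: keep the chosen parents, fill up with mutants.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(POP_SIZE - len(parents))]
    return population

if __name__ == "__main__":
    final_pop = interactive_evolution()
    print(len(final_pop), "candidate controllers bred by user preference")
```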
{"title":"Breeding a diversity of Super Mario behaviors through interactive evolution","authors":"Patrikk D. Sørensen, Jeppeh M. Olsen, S. Risi","doi":"10.1109/CIG.2016.7860436","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860436","url":null,"abstract":"Creating controllers for NPCs in video games is traditionally a challenging and time consuming task. While automated learning methods such as neuroevolution (i.e. evolving artificial neural networks) have shown promise in this context, they often still require carefully designed fitness functions. In this paper, we show how casual users can create controllers for Super Mario Bros. through an interactive evolutionary computation (IEC) approach, without prior domain or programming knowledge. By iteratively selecting Super Mario behaviors from a set of candidates, users are able to guide evolution towards behaviors they prefer. The result of a user test show that the participants are able to evolve controllers with very diverse behaviors, which would be difficult through automated approaches. Additionally, the user-evolved controllers perform as well as controllers evolved with a traditional fitness-based approach in terms of distance traveled. The results suggest that IEC is a viable alternative in designing diverse controllers for video games that could be extended to other games in the future.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"1 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79888564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Q-learning using redundant outputs in visual doom
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860387 | 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-2
Hyun-Soo Park, Kyung-Joong Kim
Recently, there has been growing interest in applying deep learning to the game AI domain. Among these methods, deep reinforcement learning is the most prominent in game AI communities. In this paper, we propose using redundant outputs in order to adapt training progress in deep reinforcement learning. We compare our method with standard ε-greedy on the ViZDoom platform. Since the AI player must select actions based only on visual input, the platform is well suited to deep reinforcement learning research. Experimental results show that our proposed method achieves performance competitive with ε-greedy without parameter tuning.
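For readers unfamiliar with the baseline being compared against, the sketch below shows plain ε-greedy action selection; it does not attempt to reproduce the proposed redundant-output scheme, and the example Q-values and action names are assumptions.

```python
# Minimal sketch of the epsilon-greedy baseline the paper compares against
# (not the proposed redundant-output scheme): with probability epsilon the
# agent explores with a random action, otherwise it exploits the Q-values.
import random

def epsilon_greedy(q_values, epsilon):
    """q_values: list of estimated action values for the current screen."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                   # explore
    return max(range(len(q_values)), key=q_values.__getitem__)   # exploit

if __name__ == "__main__":
    q = [0.1, 0.7, 0.3]          # e.g. [turn_left, move_forward, shoot]
    counts = [0, 0, 0]
    for _ in range(10000):
        counts[epsilon_greedy(q, epsilon=0.1)] += 1
    print(counts)  # action 1 dominates; ~10% of picks are uniform-random
```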
{"title":"Deep Q-learning using redundant outputs in visual doom","authors":"Hyun-Soo Park, Kyung-Joong Kim","doi":"10.1109/CIG.2016.7860387","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860387","url":null,"abstract":"Recently, there is a growing interest in applying deep learning in game AI domain. Among them, deep reinforcement learning is the most famous in game AI communities. In this paper, we propose to use redundant outputs in order to adapt training progress in deep reinforcement learning. We compare our method with general ε-greedy in ViZDoom platform. Since AI player should select an action only based on visual input in the platform, it is suitable for deep reinforcement learning research. Experimental results show that our proposed method archives competitive performance to ε-greedy without parameter tuning.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"4 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75474999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discovering playing patterns: Time series clustering of free-to-play game data
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860442 | 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
A. Saas, Anna Guitart, Á. Periáñez
The classification of time series data is a challenge common to all data-driven fields. However, there is no agreement about which are the most efficient techniques for grouping unlabeled time-ordered data. This is because a successful classification of time series patterns depends on the goal and the domain of interest, i.e. it is application-dependent. In this article, we study free-to-play game data. In this domain, clustering similar time series information is increasingly important due to the large amount of data collected by current mobile and web applications. We evaluate which methods accurately cluster time series from mobile games, focusing on player behavior data. We identify and validate several aspects of the clustering: the similarity measures and the representation techniques used to reduce the high dimensionality of time series. As a robustness test, we compare various temporal datasets of player activity from two free-to-play video games. With these techniques we extract temporal patterns of player behavior relevant for the evaluation of game events and game-business diagnosis. Our experiments provide intuitive visualizations to validate the results of the clustering and to determine the optimal number of clusters. Additionally, we assess the common characteristics of the players belonging to the same group. This study allows us to improve the understanding of player dynamics and churn behavior.
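As a hedged sketch of one possible pipeline of this kind (not the authors' exact choice of similarity measure or representation), the example below reduces synthetic activity series with PCA, clusters them with k-means, and selects the number of clusters by silhouette score.

```python
# Minimal sketch (not the authors' pipeline): reduce daily-activity time
# series with PCA, cluster with k-means, and pick the number of clusters
# by silhouette score. Data here is synthetic; similarity is plain
# Euclidean distance in the reduced space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# 300 players x 60 days of (synthetic) play time, three rough behaviours
series = np.vstack([
    rng.normal(1.0, 0.2, (100, 60)),                           # flat, casual
    rng.normal(1.0, 0.2, (100, 60)) * np.linspace(2, 0, 60),   # churning
    rng.normal(1.0, 0.2, (100, 60)) * np.linspace(1, 3, 60),   # ramping up
])

reduced = PCA(n_components=5).fit_transform(series)

best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(reduced)
    score = silhouette_score(reduced, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"chosen k={best_k} (silhouette={best_score:.2f})")
```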
{"title":"Discovering playing patterns: Time series clustering of free-to-play game data","authors":"A. Saas, Anna Guitart, Á. Periáñez","doi":"10.1109/CIG.2016.7860442","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860442","url":null,"abstract":"The classification of time series data is a challenge common to all data-driven fields. However, there is no agreement about which are the most efficient techniques to group unlabeled time-ordered data. This is because a successful classification of time series patterns depends on the goal and the domain of interest, i.e. it is application-dependent. In this article, we study free-to-play game data. In this domain, clustering similar time series information is increasingly important due to the large amount of data collected by current mobile and web applications. We evaluate which methods cluster accurately time series of mobile games, focusing on player behavior data. We identify and validate several aspects of the clustering: the similarity measures and the representation techniques to reduce the high dimensionality of time series. As a robustness test, we compare various temporal datasets of player activity from two free-to-play video-games. With these techniques we extract temporal patterns of player behavior relevant for the evaluation of game events and game-business diagnosis. Our experiments provide intuitive visualizations to validate the results of the clustering and to determine the optimal number of clusters. Additionally, we assess the common characteristics of the players belonging to the same group. This study allows us to improve the understanding of player dynamics and churn behavior.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"109 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80945249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heuristics for sleep and heal in combat
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860401 | 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
Shuo Xu, Clark Verbrugge
Basic attack and defense actions in games are often extended by more powerful actions, including the ability to temporarily incapacitate an enemy through sleep or stun, the ability to restore health through healing, and others. Use of these abilities can have a dramatic impact on combat outcome, and so is typically strongly limited. This implies a non-trivial decision process, and for an AI to effectively use these actions it must consider the potential benefit, opportunity cost, and the complexity of choosing an appropriate target. In this work we develop a formal model to explore optimized use of sleep and heal in small-scale combat scenarios. We consider different heuristics that can guide the use of such actions; experimental work based on Pokémon combats shows that significant improvements are possible over the basic, greedy strategies commonly employed by AI agents. Our work allows for better performance by companion and enemy AIs, and also gives guidance to game designers looking to incorporate advanced combat actions without overly unbalancing combat.
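As a toy, hedged illustration of the kind of decision being optimized (not the paper's formal model), the sketch below contrasts a greedy always-attack policy with a simple heuristic that heals only when doing so buys at least one extra turn; all thresholds and unit statistics are assumptions.

```python
# Minimal sketch of the kind of decision the paper studies (not its formal
# model): choosing when to heal versus attack in a toy combat state. The
# "greedy" policy always attacks; the heuristic weighs the survival time
# gained by a heal against spending the turn on an attack.
from dataclasses import dataclass

@dataclass
class Unit:
    hp: int
    attack: int

def greedy_action(ally, enemy, heal_amount):
    return "attack"  # baseline: never spend a turn healing

def heuristic_action(ally, enemy, heal_amount):
    # Heal only if the ally is about to die and the heal buys at least one
    # extra turn of attacks against the enemy.
    turns_to_die = ally.hp / enemy.attack
    turns_to_die_healed = (ally.hp + heal_amount) / enemy.attack
    if turns_to_die < 1.5 and turns_to_die_healed - turns_to_die >= 1.0:
        return "heal"
    return "attack"

if __name__ == "__main__":
    ally, enemy = Unit(hp=12, attack=10), Unit(hp=40, attack=9)
    print(greedy_action(ally, enemy, heal_amount=15))     # attack
    print(heuristic_action(ally, enemy, heal_amount=15))  # heal
```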
{"title":"Heuristics for sleep and heal in combat","authors":"Shuo Xu, Clark Verbrugge","doi":"10.1109/CIG.2016.7860401","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860401","url":null,"abstract":"Basic attack and defense actions in games are often extended by more powerful actions, including the ability to temporarily incapacitate an enemy through sleep or stun, the ability to restore health through healing, and others. Use of these abilities can have a dramatic impact on combat outcome, and so is typically strongly limited. This implies a non-trivial decision process, and for an AI to effectively use these actions it must consider the potential benefit, opportunity cost, and the complexity of choosing an appropriate target. In this work we develop a formal model to explore optimized use of sleep and heal in small-scale combat scenarios. We consider different heuristics that can guide the use of such actions; experimental work based on Pokémon combats shows that significant improvements are possible over the basic, greedy strategies commonly employed by AI agents. Our work allows for better performance by companion and enemy AIs, and also gives guidance to game designers looking to incorporate advanced combat actions without overly unbalancing combat.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"222 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79290046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An integrated process for game balancing
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860425 | 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
Marlene Beyer, Aleksandr Agureikin, Alexander Anokhin, Christoph Laenger, Felix Nolte, Jonas Winterberg, Marcel Renka, Martin Rieger, Nicolas Pflanzl, M. Preuss, Vanessa Volz
Game balancing is a recurring problem that currently requires a lot of manual work, usually following a game designer's intuition or rules-of-thumb. To what extent can or should the balancing process be automated? We establish a process model that integrates both manual and automated balancing approaches. Artificial agents are employed to automatically assess the desirability of a game. We demonstrate the feasibility of implementing the model and analyze the resulting solutions from its application to a simple video game.
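As a hedged sketch of the automated half of such a process (not the paper's process model), the example below sweeps a hypothetical balance parameter, has scripted agents play simulated matches, and scores each setting by how close the resulting win rate is to 50%.

```python
# Minimal sketch of automated balance assessment (not the paper's model):
# sweep one balance parameter, simulate many matches between two scripted
# agents, and rate each setting by how close the win rate is to 50%.
import random

def simulate_match(unit_a_damage, unit_b_damage=10):
    # Placeholder match: higher damage wins more often, with noise.
    p_a_wins = unit_a_damage / (unit_a_damage + unit_b_damage)
    return random.random() < p_a_wins

def desirability(unit_a_damage, matches=2000):
    wins = sum(simulate_match(unit_a_damage) for _ in range(matches))
    win_rate = wins / matches
    return 1.0 - abs(win_rate - 0.5) * 2.0   # 1.0 = perfectly balanced

if __name__ == "__main__":
    for dmg in (6, 8, 10, 12, 14):
        print(f"unit A damage {dmg:2d}: desirability {desirability(dmg):.2f}")
    # A designer would then inspect the best candidates by hand, which is
    # where the manual part of an integrated process comes in.
```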
{"title":"An integrated process for game balancing","authors":"Marlene Beyer, Aleksandr Agureikin, Alexander Anokhin, Christoph Laenger, Felix Nolte, Jonas Winterberg, Marcel Renka, Martin Rieger, Nicolas Pflanzl, M. Preuss, Vanessa Volz","doi":"10.1109/CIG.2016.7860425","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860425","url":null,"abstract":"Game balancing is a recurring problem that currently requires a lot of manual work, usually following a game designer's intuition or rules-of-thumb. To what extent can or should the balancing process be automated? We establish a process model that integrates both manual and automated balancing approaches. Artificial agents are employed to automatically assess the desirability of a game. We demonstrate the feasibility of implementing the model and analyze the resulting solutions from its application to a simple video game.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"29 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81337526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Altruistic punishment can help resolve tragedy of the commons social dilemmas
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860402 | 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-7
G. Greenwood
Social dilemmas force individuals to choose between cooperation, which benefits the group, and defection, which benefits the individual. The unfortunate outcome in most social dilemmas is mutual defection, where nobody benefits. Researchers frequently use mathematical games such as public goods games to help identify circumstances that might improve cooperation levels within a population. Altruistic punishment has shown promise in these games. Many real-world social dilemmas are expressed via a tragedy of the commons metaphor. This paper describes an investigation designed to see if altruistic punishment might work in tragedy of the commons social dilemmas. Simulation results indicate that it not only helps resolve a tragedy of the commons but also effectively deals with the associated first-order and second-order free-rider problems.
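As a hedged illustration of the underlying game (not the paper's simulation), the sketch below computes payoffs for one public goods round with an altruistic-punishment strategy; the endowment, multiplier, fine, and punishment cost are assumptions.

```python
# Minimal sketch of one public goods round with altruistic punishment
# (illustrative payoffs, not the paper's simulation): cooperators pay into
# a common pool that is multiplied and shared; punishers additionally pay
# a cost to fine each defector.
def public_goods_round(strategies, endowment=10, multiplier=3,
                       punish_cost=1, fine=4):
    contributors = [s in ("cooperate", "punish") for s in strategies]
    pool = multiplier * endowment * sum(contributors)
    share = pool / len(strategies)
    n_defectors = strategies.count("defect")
    n_punishers = strategies.count("punish")
    payoffs = []
    for s, contributed in zip(strategies, contributors):
        p = endowment - (endowment if contributed else 0) + share
        if s == "punish":
            p -= punish_cost * n_defectors   # cost of punishing
        if s == "defect":
            p -= fine * n_punishers          # fines received
        payoffs.append(p)
    return payoffs

if __name__ == "__main__":
    # Note the second-order free-rider problem: plain cooperators earn more
    # than the punisher, who bears the cost of enforcement.
    print(public_goods_round(["cooperate", "cooperate", "defect", "punish"]))
```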
{"title":"Altruistic punishment can help resolve tragedy of the commons social dilemmas","authors":"G. Greenwood","doi":"10.1109/CIG.2016.7860402","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860402","url":null,"abstract":"Social dilemmas force individuals to choose between cooperation, which benefits a group, and defection which benefits the individual. The unfortunate outcome in most social dilemmas is mutual defection where nobody benefits. Researchers frequently use mathematical games such as public goods games to help identify circumstances that might improve cooperation levels within a population. Altruistic punishment has shown promise in these games. Many real-world social dilemmas are expressed via a tragedy of the commons metaphor. This paper describes an investigation designed to see if altruistic punishment might work in tragedy of the commons social dilemmas. Simulation results indicate not only does it help resolve a tragedy of the commons but it also effectively deals with the associated first-order and second-order free rider problems.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"41 1","pages":"1-7"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87516042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biometrics and classifier fusion to predict the fun-factor in video gaming
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860418 | 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
Andrea Clerico, Cindy Chamberland, Mark Parent, P. Michon, S. Tremblay, T. Falk, Jean-Christophe Gagnon, P. Jackson
The key to the development of adaptive gameplay is the capability to monitor and predict, in real time, the player's experience (or, herein, the fun factor). To achieve this goal, we rely on biometrics and machine learning algorithms to capture a physiological signature that reflects the player's affective state during the game. In this paper, we report a research and development effort into the real-time monitoring of the player's level of fun during a commercially available video game session using physiological signals. The use of a triple-classifier system allows the transformation of players' physiological responses and their fluctuations into a single yet multifaceted measure of fun, using non-linear gameplay. Our results suggest that cardiac and respiratory activities provide the best predictive power. Moreover, the level of performance reached when classifying the level of fun (70% accuracy) shows that the use of machine learning approaches with physiological measures can contribute to predicting player experience in an objective manner.
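As a hedged sketch of late classifier fusion on physiological features (not the authors' triple-classifier system or data), the example below trains one classifier per synthetic modality and fuses them by averaging their predicted probabilities.

```python
# Minimal sketch of late fusion on (synthetic) physiological features
# (not the authors' system): one classifier per modality, fused by
# averaging the predicted probabilities of "high fun".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
labels = rng.integers(0, 2, n)                      # 0 = low fun, 1 = high fun
# Synthetic per-modality features, loosely correlated with the label
cardiac = labels[:, None] * 0.8 + rng.normal(0, 1, (n, 3))
respiration = labels[:, None] * 0.6 + rng.normal(0, 1, (n, 2))
skin = labels[:, None] * 0.3 + rng.normal(0, 1, (n, 2))

train, test = slice(0, 300), slice(300, n)
modalities = [cardiac, respiration, skin]
models = [LogisticRegression().fit(m[train], labels[train]) for m in modalities]

# Late fusion: average the per-modality probabilities, threshold at 0.5
probs = np.mean([m.predict_proba(x[test])[:, 1]
                 for m, x in zip(models, modalities)], axis=0)
accuracy = np.mean((probs > 0.5) == labels[test])
print(f"fused accuracy on held-out data: {accuracy:.2f}")
```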
{"title":"Biometrics and classifier fusion to predict the fun-factor in video gaming","authors":"Andrea Clerico, Cindy Chamberland, Mark Parent, P. Michon, S. Tremblay, T. Falk, Jean-Christophe Gagnon, P. Jackson","doi":"10.1109/CIG.2016.7860418","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860418","url":null,"abstract":"The key to the development of adaptive gameplay is the capability to monitor and predict in real time the players experience (or, herein, fun factor). To achieve this goal, we rely on biometrics and machine learning algorithms to capture a physiological signature that reflects the player's affective state during the game. In this paper, we report research and development effort into the real time monitoring of the player's level of fun during a commercially available video game session using physiological signals. The use of a triple-classifier system allows the transformation of players' physiological responses and their fluctuation into a single yet multifaceted measure of fun, using a non-linear gameplay. Our results suggest that cardiac and respiratory activities provide the best predictive power. Moreover, the level of performance reached when classifying the level of fun (70% accuracy) shows that the use of machine learning approaches with physiological measures can contribute to predicting players experience in an objective manner.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"1 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82107463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical Task Network Plan Reuse for video games
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860395 | 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
Dennis J. N. J. Soemers, M. Winands
Hierarchical Task Network Planning is an Automated Planning technique that is used, among other domains, in Artificial Intelligence for video games. Generated plans cannot always be fully executed, for example due to nondeterminism or imperfect information. In such cases, it is often desirable to re-plan. This is typically done completely from scratch, or using techniques that require the conditions and effects of tasks to be defined in a specific format (typically based on First-Order Logic). In this paper, an approach for Plan Reuse is proposed that manipulates the order in which the search tree is traversed by using a similarity function. It is tested in the SimpleFPS domain, which simulates a First-Person Shooter game, and shown to be capable of finding (optimal) plans with a decreased amount of search effort on average when re-planning for variations of previously solved problems.
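As a hedged sketch of the core idea stated in the abstract (not the authors' planner or their similarity function), the example below orders candidate task decompositions by a simple overlap measure with a previously successful plan; the task names are hypothetical.

```python
# Minimal sketch of similarity-guided traversal ordering (not the authors'
# planner): when several decompositions of a task are applicable, try
# first the ones most similar to the plan that worked last time.
def similarity(candidate_tasks, previous_plan):
    """Fraction of the candidate's tasks that appeared in the old plan."""
    if not candidate_tasks:
        return 0.0
    prev = set(previous_plan)
    return sum(t in prev for t in candidate_tasks) / len(candidate_tasks)

def order_decompositions(decompositions, previous_plan):
    # Highest similarity first, so re-planning tends to revisit the old
    # plan's structure before exploring alternatives.
    return sorted(decompositions,
                  key=lambda d: similarity(d, previous_plan),
                  reverse=True)

if __name__ == "__main__":
    previous_plan = ["goto_ammo", "pickup_ammo", "goto_enemy", "shoot"]
    decompositions = [
        ["goto_enemy", "melee_attack"],
        ["goto_ammo", "pickup_ammo", "goto_enemy", "shoot"],
        ["hide", "wait"],
    ]
    for d in order_decompositions(decompositions, previous_plan):
        print(d)
```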
{"title":"Hierarchical Task Network Plan Reuse for video games","authors":"Dennis J. N. J. Soemers, M. Winands","doi":"10.1109/CIG.2016.7860395","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860395","url":null,"abstract":"Hierarchical Task Network Planning is an Automated Planning technique. It is, among other domains, used in Artificial Intelligence for video games. Generated plans cannot always be fully executed, for example due to nondeterminism or imperfect information. In such cases, it is often desirable to re-plan. This is typically done completely from scratch, or done using techniques that require conditions and effects of tasks to be defined in a specific format (typically based on First-Order Logic). In this paper, an approach for Plan Reuse is proposed that manipulates the order in which the search tree is traversed by using a similarity function. It is tested in the SimpleFPS domain, which simulates a First-Person Shooter game, and shown to be capable of finding (optimal) plans with a decreased amount of search effort on average when re-planning for variations of previously solved problems.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"52 79 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80422430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personalised track design in car racing games
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860435 | 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
Theodosis Georgiou, Y. Demiris
Real-time adaptation of computer games' content to the users' skills and abilities can enhance the player's engagement and immersion. Understanding the user's potential while playing is of high importance for the successful procedural generation of user-tailored content. We investigate how player models can be created in car racing games. Our user model uses a combination of data from unobtrusive sensors while the user is playing a car racing simulator. It extracts features through machine learning techniques, which are then used to comprehend the user's gameplay, by utilising the educational theoretical frameworks of the Concept of Flow and the Zone of Proximal Development. The end result is to provide, at a later stage, a new track that fits the user's needs, which aids both the training of the driver and their engagement in the game. In order to validate that the system designs personalised tracks, we associated the average performance of the 41 users who played the game with the difficulty factor of the generated track. In addition, the variation in the paths of the implemented tracks between users provides a good indicator of the suitability of the system.
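As a hedged illustration of mapping an estimated skill level to a track difficulty that stays in a flow/ZPD-style band (not the authors' model), the sketch below derives a crude skill proxy from lap times and targets a difficulty slightly above it; all constants are assumptions.

```python
# Minimal sketch (not the authors' system): estimate a skill level from lap
# times and choose the next track's difficulty just above it, so the track
# challenges the player without overwhelming them.
def next_track_difficulty(lap_times, reference_time=60.0, challenge_margin=0.15):
    # Skill proxy: how fast the player laps relative to a reference time,
    # clamped to [0, 1]. Faster laps -> higher skill estimate.
    avg = sum(lap_times) / len(lap_times)
    skill = max(0.0, min(1.0, reference_time / avg - 0.5))
    # Target difficulty sits just above the skill estimate, capped at 1.0.
    return min(1.0, skill + challenge_margin)

if __name__ == "__main__":
    novice_laps = [95.0, 92.0, 90.0]
    expert_laps = [55.0, 54.0, 53.5]
    print(f"novice -> difficulty {next_track_difficulty(novice_laps):.2f}")
    print(f"expert -> difficulty {next_track_difficulty(expert_laps):.2f}")
```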
{"title":"Personalised track design in car racing games","authors":"Theodosis Georgiou, Y. Demiris","doi":"10.1109/CIG.2016.7860435","DOIUrl":"https://doi.org/10.1109/CIG.2016.7860435","url":null,"abstract":"Real-time adaptation of computer games' content to the users' skills and abilities can enhance the player's engagement and immersion. Understanding of the user's potential while playing is of high importance in order to allow the successful procedural generation of user-tailored content. We investigate how player models can be created in car racing games. Our user model uses a combination of data from unobtrusive sensors, while the user is playing a car racing simulator. It extracts features through machine learning techniques, which are then used to comprehend the user's gameplay, by utilising the educational theoretical frameworks of the Concept of Flow and Zone of Proximal Development. The end result is to provide at a next stage a new track that fits to the user needs, which aids both the training of the driver and their engagement in the game. In order to validate that the system is designing personalised tracks, we associated the average performance from 41 users that played the game, with the difficulty factor of the generated track. In addition, the variation in paths of the implemented tracks between users provides a good indicator for the suitability of the system.","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"49 1","pages":"1-8"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83368588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}