Extending Real-Time Challenge Balancing to Multiplayer Games: A Study on Eco-Driving
H. Prendinger, Kamthorn Puntumapon, Marconi Madruga Filho
IEEE Transactions on Computational Intelligence and AI in Games, vol. 8, pp. 27-32. Published 2016-03-01. DOI: 10.1109/TCIAIG.2014.2364258
Multiplayer games are an important and popular game mode for networked players. Since games are played by a diverse audience, it is important to scale the difficulty, or challenge, according to the skill level of the players. However, current approaches to real-time challenge balancing (RCB) in games are only applicable to single-player scenarios. In multiplayer scenarios, players with different skill levels may be present in the same area, and hence adjusting the game difficulty to match the skill of one player may affect the other players in an undesirable way. To address this problem, we have previously developed a new approach based on distributed constraint optimization, which achieves the optimal challenge level for multiple players in real time. The main contribution of this paper is an experiment with our multiplayer real-time challenge balancing method applied to eco-driving. The results of the experiment suggest the effectiveness of RCB.

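The shared-area conflict this abstract describes can be written down as a small constraint-optimization problem: pick one challenge level per player, keep the levels of co-located players within a small spread, and minimize the total skill/challenge mismatch. The brute-force sketch below is a hypothetical centralized stand-in for the paper's distributed constraint optimization; the skill scale, cost function, and `max_spread` parameter are all assumptions made for illustration.

```python
from itertools import product

def balance_challenges(skills, levels, max_spread=1):
    """Brute-force sketch of multiplayer challenge balancing.

    skills: per-player skill estimates (same scale as challenge levels).
    levels: candidate challenge levels.
    max_spread: constraint limiting how far the challenge levels of
        players sharing an area may differ, since they experience the
        same content.
    Returns the joint assignment minimizing total skill/challenge mismatch.
    """
    best, best_cost = None, float("inf")
    for assignment in product(levels, repeat=len(skills)):
        if max(assignment) - min(assignment) > max_spread:
            continue  # violates the shared-area constraint
        cost = sum(abs(c - s) for c, s in zip(assignment, skills))
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best

# Two novices and one expert sharing an area: the expert's challenge is
# pulled down toward the novices rather than tuned in isolation.
print(balance_challenges([2, 2, 8], levels=range(1, 11)))  # (2, 2, 3)
```

The exhaustive search is exponential in the number of co-located players, which is one reason a distributed constraint-optimization formulation is attractive in practice.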
Prolog-Scripted Tactics Negotiation and Coordinated Team Actions for Counter-Strike Game Bots
G. Jaskiewicz
IEEE Transactions on Computational Intelligence and AI in Games, pp. 82-88. Published 2016-03-01. DOI: 10.1109/TCIAIG.2014.2331972
κ-labs is a research project exploring the possibilities of the logic programming paradigm for bot behavior programming in first-person shooter (FPS) games. The focus of previous work was to make Prolog a usable tool for bot programming and a baseline for further extensions. This paper presents one such extension, which makes it possible to script the tactics of an entire team of bots. The algorithm was tested in bot-versus-bot computer experiments and through surveys of human players who volunteered to take part in the research. The results of both tests are presented in this paper. The extension itself demonstrates the flexibility of the framework. Although the proposed method for defining team behaviors relies solely on the knowledge of the bot's designer, alternative approaches that use rules obtained by computational techniques can also be developed. Such approaches are also being investigated as part of the κ-labs project.

Intelligent Game Engine for Rehabilitation (IGER)
Michele Pirovano, R. Mainetti, G. Baud-Bovy, P. Lanzi, N. A. Borghese
IEEE Transactions on Computational Intelligence and AI in Games, vol. 8, pp. 43-55. Published 2016-03-01. DOI: 10.1109/TCIAIG.2014.2368392
Computer games are a promising tool to support intensive rehabilitation. At present, however, they do not incorporate the supervision provided by a real therapist and do not allow safe and effective use at a patient's home. We show how specifically tailored computational-intelligence techniques allow extending exergames with functionality that makes rehabilitation at home effective and safe. The main function is monitoring the correctness of motion, which is fundamental to avoiding the development of wrong motion patterns that would make rehabilitation more harmful than effective. Fuzzy systems enable us to capture the knowledge of the therapist and to provide real-time feedback on the quality of the patient's motion through a novel, informative color coding applied to the patient's avatar. This feedback is complemented by a therapist avatar that, in extreme cases, explains the correct way to carry out the movements required by the exergames. The avatar also welcomes the patient and summarizes the therapy results. Text-to-speech and simple animation improve engagement. Another important element is adaptation: only exercises at the proper challenge level can be both effective and safe. For this reason, exergames can be fully configured by therapists in terms of speed, range of motion, or accuracy. These parameters are then tuned during exercise to the patient's performance through a Bayesian framework that also takes input from the therapist into account. A log of all interaction data is stored for clinicians to assess and tune the therapy, and to advise patients. All this functionality has been added to a classical game engine, extended to embody a virtual therapist that supervises the motion, which is the final goal of exergames for rehabilitation. This approach can be of broad interest in the serious-games domain. Preliminary results with patients and therapists suggest that the approach can maintain a proper challenge level while keeping the patient motivated, safe, and supervised.

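The color-coded feedback can be illustrated with a toy fuzzy classifier: a normalized motion-error score is fuzzified into green/yellow/red membership degrees that could drive the avatar's coloring. The membership shapes and breakpoints below are invented for illustration; IGER's actual rule base encodes therapist knowledge and is more elaborate.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def motion_quality_color(error):
    """Map a normalized motion error in [0, 1] to fuzzy color degrees.

    Illustrative memberships only (assumed, not from the paper):
    low error -> green, moderate -> yellow, large -> red.
    """
    good = tri(error, -0.5, 0.0, 0.5)   # peaks at zero error  -> green
    fair = tri(error, 0.2, 0.5, 0.8)    # moderate error       -> yellow
    poor = tri(error, 0.5, 1.0, 1.5)    # large error          -> red
    total = good + fair + poor or 1.0   # normalize degrees to sum to 1
    return {"green": good / total, "yellow": fair / total, "red": poor / total}

print(motion_quality_color(0.1))  # mostly green for a small error
```

In a full system the degrees would blend the avatar's tint continuously rather than pick a single color, so the patient sees quality degrade smoothly as the error grows.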
How to Run a Successful Game-Based AI Competition
J. Togelius
IEEE Transactions on Computational Intelligence and AI in Games, vol. 8, pp. 95-100. Published 2016-03-01. DOI: 10.1109/TCIAIG.2014.2365470
Game-based competitions are commonly used within the Computational Intelligence (CI) and Artificial Intelligence (AI) in games community to benchmark algorithms and to attract new researchers. While many competitions have been organized based on different games, the success of these competitions is highly varied. This short paper is a self-help paper for competition organizers and aspiring competition organizers. After analyzing the fate of a number of recent competitions, some factors likely to contribute to the success or failure of a competition are laid out, and a set of concrete recommendations is offered. There is also a discussion of how to write up game-based AI competitions and what we can ultimately learn from them.

Online Adaptable Learning Rates for the Game Connect-4
Samineh Bagheri, Markus Thill, P. Koch, W. Konen
IEEE Transactions on Computational Intelligence and AI in Games, vol. 8, pp. 33-42. Published 2016-03-01. DOI: 10.1109/TCIAIG.2014.2367105
Learning board games by self-play has a long tradition in computational intelligence for games. Following Tesauro's seminal success with TD-Gammon in 1994, many successful agents today use temporal difference learning. To succeed with temporal difference learning on game tasks, however, a careful selection of features and a large number of training games are often necessary. Even for board games of moderate complexity like Connect-4, we found in previous work that a very rich initial feature set and several million game plays are required. In this work we investigate whether online-adaptable learning-rate approaches, such as Incremental Delta-Bar-Delta (IDBD) and temporal coherence learning (TCL), have the potential to speed up learning for such a complex task. We propose a new variant of TCL with geometric step-size changes. We compare these algorithms with several other state-of-the-art learning-rate adaptation algorithms and perform a case study of their sensitivity to their meta-parameters. We show that, within this set of learning algorithms, those with geometric step-size changes outperform those with constant step-size changes. Algorithms with nonlinear output functions are slightly better than linear ones. Algorithms with geometric step-size changes learn faster by a factor of 4 compared to previously published results on Connect-4.

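For context, standard TCL adapts one step size per weight from the coherence of past updates; the variant proposed here makes the step-size changes geometric instead. The sketch below shows the standard scheme (the `alpha0` scale and the toy TD errors are assumptions), where the per-weight factor |N|/A stays near 1 while updates agree in sign and decays once they start to oscillate.

```python
import numpy as np

def tcl_update(w, x, delta, N, A, alpha0=0.1):
    """One temporal coherence learning (TCL) step (illustrative sketch).

    Per-weight counters accumulate the signed sum N and the absolute
    sum A of past updates. Their ratio |N|/A is near 1 while updates
    agree in sign (step size stays large) and decays toward 0 once
    they oscillate (step size anneals). The paper's proposed variant
    changes step sizes geometrically; this is the standard scheme.
    """
    update = delta * x                  # per-weight TD update direction
    N += update
    A += np.abs(update)
    alpha = np.where(A > 0, np.abs(N) / A, 1.0)
    w += alpha0 * alpha * update
    return w, N, A

w, N, A = np.zeros(3), np.zeros(3), np.zeros(3)
for delta in [0.5, 0.4, -0.4]:          # third TD error flips sign
    w, N, A = tcl_update(w, np.ones(3), delta, N, A)
print(np.abs(N) / A)                    # step-size factor shrinks after the flip
```

On a task like Connect-4 the same bookkeeping runs over millions of n-tuple feature weights, each with its own annealing schedule.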
Predicting Dominance Rankings for Score-Based Games
Spyridon Samothrakis, Diego Perez Liebana, S. Lucas, Philipp Rohlfshagen
IEEE Transactions on Computational Intelligence and AI in Games, vol. 8, pp. 1-12. Published 2016-03-01. DOI: 10.1109/TCIAIG.2014.2346242
Game competitions may involve different player roles and be score-based rather than win/loss-based. This raises the issue of how best to draw opponents for matches in ongoing competitions, and how best to rank the players in each role. An example is the Ms Pac-Man versus Ghosts Competition, which requires competitors to develop software controllers to take charge of the game's protagonists: participants may develop controllers for Ms Pac-Man, for the team of four ghosts, or for both. In this paper, we compare two ranking schemes for win/loss games, Bayes Elo and Glicko. We convert the game into a win/loss ("dominance") game by matching controllers of identical type against the same opponent in a series of pairwise comparisons. This implicitly creates a "solution concept" of what constitutes a good player. We analyze how many games are needed under the two ranking algorithms before one can infer the strength of the players, according to our proposed solution concept, without performing an exhaustive evaluation. We show that Glicko should be the method of choice for online score-based game competitions.

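Glicko, the scheme the paper recommends, maintains both a rating and a rating deviation (RD) that shrinks as evidence accumulates, which is what lets it infer strength from fewer games. Below is a compact single-period Glicko-1 update following Glickman's published formulas; the example match results are Glickman's standard worked example, not data from this paper.

```python
import math

def glicko_update(r, rd, opponents):
    """One Glicko-1 rating-period update.

    r, rd: the player's current rating and rating deviation.
    opponents: list of (r_j, rd_j, score) with score 1, 0.5, or 0.
    Returns the updated (rating, rating deviation).
    """
    q = math.log(10) / 400.0
    g = lambda rd_j: 1.0 / math.sqrt(1.0 + 3.0 * (q * rd_j / math.pi) ** 2)
    e = lambda r_j, rd_j: 1.0 / (1.0 + 10.0 ** (-g(rd_j) * (r - r_j) / 400.0))
    # 1/d^2 aggregates how informative this period's games were.
    d2_inv = q * q * sum(g(rd_j) ** 2 * e(r_j, rd_j) * (1.0 - e(r_j, rd_j))
                         for r_j, rd_j, _ in opponents)
    denom = 1.0 / rd ** 2 + d2_inv
    r_new = r + (q / denom) * sum(g(rd_j) * (score - e(r_j, rd_j))
                                  for r_j, rd_j, score in opponents)
    return r_new, math.sqrt(1.0 / denom)

# A 1500-rated player (RD 200) beats a 1400/30 opponent and loses
# to 1550/100 and 1700/300 opponents.
r_new, rd_new = glicko_update(1500, 200, [(1400, 30, 1), (1550, 100, 0), (1700, 300, 0)])
print(round(r_new, 1), round(rd_new, 1))  # ~1464.1 151.4
```

Note how the win against the low-RD (well-established) opponent moves the rating less than the losses do, and how RD drops from 200 to about 151 after only three games.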
Reinforcement Learning in Video Games Using Nearest Neighbor Interpolation and Metric Learning
Matthew S. Emigh, E. Kriminger, A. Brockmeier, J. Príncipe, P. Pardalos
IEEE Transactions on Computational Intelligence and AI in Games, vol. 8, pp. 56-66. Published 2016-03-01. DOI: 10.1109/TCIAIG.2014.2369345
Reinforcement learning (RL) has had mixed success when applied to games. Large state spaces and the curse of dimensionality have limited the ability of RL techniques to learn to play complex games in a reasonable length of time. We discuss a modification of Q-learning that uses nearest-neighbor states to exploit previous experience in the early stages of learning. A weighting on the state features is learned using metric learning techniques, such that neighboring states represent similar game situations. Our method is tested on the arcade game Frogger, and we show that some of the effects of the curse of dimensionality can be mitigated.

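The nearest-neighbor idea can be sketched as follows: stored experiences vote on Q(s, a) with inverse-distance weights under a per-feature metric. This is a simplified stand-in for the paper's method; in particular, the `metric_w` vector would come from the metric-learning step, whereas here it is fixed by hand, and the toy memory contents are invented.

```python
import math

def knn_q_value(state, action, memory, metric_w, k=3):
    """Estimate Q(state, action) from the k nearest stored experiences.

    memory: list of (state, action, q) tuples gathered during learning.
    metric_w: per-feature weights, so that states close under the
        weighted distance correspond to similar game situations.
    Falls back to 0.0 when no experience exists for the action.
    """
    dist = lambda s1, s2: math.sqrt(sum(w * (u - v) ** 2
                                        for w, u, v in zip(metric_w, s1, s2)))
    candidates = [(dist(state, s), q) for s, a, q in memory if a == action]
    if not candidates:
        return 0.0
    candidates.sort(key=lambda t: t[0])
    nearest = candidates[:k]
    # Inverse-distance weighting; the epsilon avoids division by zero
    # when the query state was seen before.
    weights = [1.0 / (d + 1e-6) for d, _ in nearest]
    return sum(w * q for w, (_, q) in zip(weights, nearest)) / sum(weights)

memory = [((0.0, 0.0), "up", 1.0), ((0.1, 0.0), "up", 0.8), ((5.0, 5.0), "up", -1.0)]
print(round(knn_q_value((0.05, 0.0), "up", memory, metric_w=(1.0, 1.0), k=2), 2))  # 0.9
```

Early in learning this lets an unvisited state borrow value estimates from its neighbors instead of starting from scratch, which is where the speedup over tabular Q-learning comes from.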
Competitive Algorithms for Coevolving Both Game Content and AI. A Case Study: Planet Wars
M. Nogueira, C. Cotta, Antonio J. Fernández
IEEE Transactions on Computational Intelligence and AI in Games, vol. 8, pp. 325-337. Published 2016-01-01. DOI: 10.1109/TCIAIG.2015.2499281
The classical approach of competitive coevolution (CC) applied to games tries to exploit an arms race between coevolving populations that belong to the same species, or at least to the same biotic niche (e.g., strategies, rules, or racing tracks). This paper proposes the coevolution of entities belonging to different realms (namely, biotic and abiotic) via a competitive approach. More precisely, we aim to coevolutionarily optimize both virtual players and game content. From a general perspective, our proposal can be viewed as a method of procedural content generation combined with a technique for generating game Artificial Intelligence (AI). This approach can not only help game designers in game creation but also generate content personalized to both specific player profiles and the game designer's objectives (e.g., content that favors novice players over skillful players). As a case study we use Planet Wars, the real-time strategy (RTS) game associated with the 2010 Google AI Challenge, and demonstrate the validity of our approach through an empirical study.

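The competitive loop can be caricatured in a few lines: two populations, one of players and one of content, each evaluated against the other, with content selected toward a designer objective rather than raw strength. Everything below (single-float genomes, the toy `play` function, the selection and mutation settings) is an invented stand-in for the paper's setup, which evolves full virtual players and Planet Wars maps.

```python
import random

def mutate(parents, rng, sigma=0.3):
    """Keep the parents and add one Gaussian-perturbed child per parent."""
    return parents + [p + rng.gauss(0, sigma) for p in parents]

def coevolve(play, designer_target, gens=30, pop=10, seed=0):
    """Minimal competitive-coevolution loop over two realms (sketch).

    play(p, c) returns the player's score on the content. Players are
    selected for high average score; content is selected so that the
    average score it induces lands near designer_target, a stand-in
    for a designer objective such as favoring novice players.
    """
    rng = random.Random(seed)
    players = [rng.uniform(0, 10) for _ in range(pop)]
    contents = [rng.uniform(0, 10) for _ in range(pop)]
    for _ in range(gens):
        p_fit = {p: sum(play(p, c) for c in contents) / pop for p in players}
        c_fit = {c: -abs(sum(play(p, c) for p in players) / pop - designer_target)
                 for c in contents}
        players = mutate(sorted(players, key=p_fit.get, reverse=True)[:pop // 2], rng)
        contents = mutate(sorted(contents, key=c_fit.get, reverse=True)[:pop // 2], rng)
    return players, contents

# Toy game: a player scores highest on content matching its genome.
score = lambda p, c: 1.0 / (1.0 + abs(p - c))
players, contents = coevolve(score, designer_target=0.7)
```

The asymmetric fitness functions are the point: the player realm chases strength while the content realm chases a target experience, so the usual same-species arms race becomes a cross-realm negotiation.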
Clustering Game Behavior Data
C. Bauckhage, Anders Drachen, R. Sifa
IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, pp. 266-278. Published 2015-12-04. DOI: 10.1109/TCIAIG.2014.2376982
Recent years have seen a deluge of behavioral data from players hitting the game industry. The reasons for this data surge are many and include the introduction of new business models, technical innovations, the popularity of online games, and the increasing persistence of games. Irrespective of the causes, the proliferation of behavioral data poses the problem of how to derive insights from it. Behavioral data sets can be large, time-dependent, and high-dimensional. Clustering offers a way to explore such data and to discover patterns that can reduce its overall complexity. Clustering and other techniques for player profiling and play-style analysis have therefore become popular in the nascent field of game analytics. However, the proper use of clustering techniques requires expertise, and an understanding of games is essential to evaluate results. With this paper, we address game data scientists and present a review and tutorial focusing on the application of clustering techniques to mine behavioral game data. Several algorithms are reviewed and examples of their application shown. Key topics such as feature normalization are discussed, and open problems in the context of game analytics are pointed out.

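As a minimal example of the workflow such a tutorial covers, the sketch below z-score-normalizes a toy behavioral table and clusters it with plain Lloyd's k-means. The features and data are invented, and the naive center initialization is chosen for determinism rather than quality; without the normalization step, the playtime column would dominate the distance computation.

```python
def zscore(rows):
    """Column-wise z-score normalization, the feature-scaling step that
    keeps large-magnitude features from dominating the distances."""
    cols = list(zip(*rows))
    mu = [sum(c) / len(c) for c in cols]
    sd = [max((sum((x - m) ** 2 for x in c) / len(c)) ** 0.5, 1e-9)
          for c, m in zip(cols, mu)]
    return [[(x - m) / s for x, m, s in zip(r, mu, sd)] for r in rows]

def kmeans(rows, k, iters=25):
    """Plain Lloyd's k-means returning cluster labels (sketch; the
    first k rows seed the centers, for determinism only)."""
    centers = [list(r) for r in rows[:k]]
    labels = [0] * len(rows)
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(r, centers[j])))
                  for r in rows]
        for j in range(k):
            members = [r for r, l in zip(rows, labels) if l == j]
            if members:
                centers[j] = [sum(col) / len(col) for col in zip(*members)]
    return labels

# Toy behavioral table: (playtime hours, session count, purchases).
data = [[200, 90, 30], [180, 85, 25], [5, 2, 0], [8, 3, 0]]
labels = kmeans(zscore(data), k=2)
print(labels)  # the two heavy players and the two casual players separate
```

Real game-analytics pipelines replace the toy table with millions of telemetry rows and often prefer soft or archetypal clustering, but the normalize-then-cluster skeleton is the same.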
Temporal Game Challenge Tailoring
Alexander Zook, Mark O. Riedl
IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, pp. 336-346. Published 2015-12-01. DOI: 10.1109/TCIAIG.2014.2342934
Digital games often center on a series of challenges designed to vary in difficulty over the course of the game. Designers, however, lack ways to ensure challenges are suitably tailored to the abilities of each player, often resulting in boredom or frustration. Challenge tailoring refers to the general problem of matching designer-intended challenges to player abilities. We present an approach that predicts temporal player performance and selects appropriate content to solve the challenge tailoring problem. Our temporal collaborative filtering approach, tensor factorization, captures similarities among players and the challenges they face to predict player performance on unseen, future challenges. Tensor factorization accounts for varying player abilities over time and is a generic approach capable of modeling many kinds of players. We use constraint solving to optimize content selection to match player skills to a designer-specified level of performance, and we introduce performance curves, a model that lets designers specify desired, temporally changing player behavior. We evaluate our approach in a role-playing game through two empirical studies with humans and one study using simulated agents. Our studies show that tensor factorization scales in multiple game-relevant data dimensions, can be used for modestly effective game adaptation, and can predict divergent player learning trends.

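The tensor-factorization idea can be sketched with a tiny CP (CANDECOMP/PARAFAC) model fitted by stochastic gradient descent: each player, challenge, and time step gets a latent vector, and predicted performance is the sum of their elementwise products. This is a generic illustration rather than the authors' model; the synthetic performance tensor and all hyperparameters below are assumptions.

```python
import random

def cp_sgd(observations, shape, rank=2, lr=0.05, epochs=3000, seed=0):
    """Rank-R CP tensor factorization fitted by plain SGD (sketch).

    observations maps (player, challenge, time) indices to observed
    performance. Latent factors P, C, T are learned so performance is
    approximated by sum_r P[i][r] * C[j][r] * T[k][r]; unseen cells
    can then be predicted from the factors, as in temporal
    collaborative filtering.
    """
    rng = random.Random(seed)
    P, C, T = [[[rng.gauss(0, 0.5) for _ in range(rank)] for _ in range(n)]
               for n in shape]
    for _ in range(epochs):
        for (i, j, k), y in observations.items():
            pred = sum(P[i][r] * C[j][r] * T[k][r] for r in range(rank))
            err = y - pred
            for r in range(rank):
                p, c, t = P[i][r], C[j][r], T[k][r]
                P[i][r] += lr * err * c * t
                C[j][r] += lr * err * p * t
                T[k][r] += lr * err * p * c
    return lambda i, j, k: sum(P[i][r] * C[j][r] * T[k][r] for r in range(rank))

# Synthetic rank-1 performance tensor with one cell held out: the model
# fits the seven observed (player, challenge, time) cells.
s, d, t = [1.0, 2.0], [1.0, 0.5], [1.0, 1.5]
obs = {(i, j, k): s[i] * d[j] * t[k]
       for i in range(2) for j in range(2) for k in range(2)
       if (i, j, k) != (1, 1, 1)}
predict = cp_sgd(obs, shape=(2, 2, 2))
```

The held-out cell `predict(1, 1, 1)` then stands in for "how will this player do on this future challenge," the quantity the content selector needs.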