Title: Player modeling using self-organization in Tomb Raider: Underworld
Authors: Anders Drachen, Alessandro Canossa, Georgios N. Yannakakis
Venue: 2009 IEEE Symposium on Computational Intelligence and Games | Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286500
Abstract: We present a study focused on constructing models of players for the major commercial title Tomb Raider: Underworld (TRU). Emergent self-organizing maps are trained on high-level playing-behavior data obtained from 1365 players who completed the TRU game. The unsupervised learning approach reveals four types of players, which are analyzed within the context of the game. The proposed approach partly automates the traditional user- and play-testing procedures followed in the game industry, since it can inform game developers, in detail, whether players play the game as intended by its design. Player models can subsequently assist the real-time tailoring of game mechanics to the needs of the identified player type.
Title: Evolving coordinated spatial tactics for autonomous entities using influence maps
Authors: P. Avery, S. Louis, B. Avery
Venue: 2009 IEEE Symposium on Computational Intelligence and Games | Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286457
Abstract: We evolve tactical control for entity groups in a naval real-time strategy game. Since tactical maneuvering involves spatial reasoning, our evolutionary algorithm evolves a set of influence maps that help specify an entity's spatial objectives. The entity then uses the A* route-finding algorithm to generate waypoints according to the influence map, and follows them to achieve its spatial objectives. Using this representation, our evolutionary algorithm quickly evolves progressively better capture-the-flag tactics on three increasingly difficult maps. These preliminary results indicate (1) the usefulness of our influence-map encoding for representing spatially resolved tactics and (2) the potential of co-evolution for generating increasingly complex and competent tactics in our game. More generally, this work represents another step in our ongoing effort to investigate the co-evolution of competent game players in a real-time, continuous environment that does not assume complete knowledge of the game state.
Title: Towards conscious-like behavior in computer game characters
Authors: R. Moreno, Agapito Ledezma, A. Sanchis
Venue: 2009 IEEE Symposium on Computational Intelligence and Games | Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286473
Abstract: The main sources of inspiration for the design of more engaging synthetic characters are existing psychological models of human cognition. Usually these models, and the associated Artificial Intelligence (AI) techniques, are based on partial aspects of the real complex systems involved in generating human-like behavior. Emotions, planning, learning, user modeling, set shifting, and attention mechanisms are notable examples of features typically considered in isolation within classical AI control models. Artificial cognitive architectures aim to integrate many of these aspects into effective control systems. However, designing such architectures is not straightforward. In this paper, we argue that current research efforts in the young field of Machine Consciousness (MC) could help tackle this complexity and provide a useful framework for the design of more appealing synthetic characters. This hypothesis is illustrated by applying a novel consciousness-based cognitive architecture to the development of a first-person shooter video game character.
Title: Learning a context-aware weapon selection policy for Unreal Tournament III
Authors: Luca Galli, D. Loiacono, P. Lanzi
Venue: 2009 IEEE Symposium on Computational Intelligence and Games | Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286461
Abstract: Modern computer games are becoming increasingly complex, and only experienced players can fully master the game controls. Accordingly, many commercial games now provide aids to simplify player interaction. These aids are based on simple heuristic rules and can adapt neither to the current game situation nor to the player's style. In this paper, we suggest that supervised methods can be applied effectively to improve the quality of such game aids. In particular, we focus on the problem of developing an automatic weapon-selection aid for Unreal Tournament III, a recent and very popular first-person shooter (FPS). We propose a framework to (i) collect a dataset from game sessions, (ii) learn a policy to automatically select the weapon, and (iii) deploy the learned models in the game to replace the default weapon-switching aid provided in the game distribution. Our approach allows the development of weapon-switching policies that are aware of the current game context and can also imitate a particular playing style.
Title: On the evolution of artificial Tetris players
Authors: Amine M. Boumaza
Venue: 2009 IEEE Symposium on Computational Intelligence and Games | Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286451
Abstract: In this paper, we focus on the use of evolutionary algorithms to learn strategies for playing the game of Tetris. We describe the problem and discuss the nature of the search space. We present experiments that illustrate the learning process of our artificial player, and provide a new procedure to speed up learning. Our results are comparable to those of the best-known artificial player, and show how our evolutionary algorithm is able to rediscover previously published player strategies. Finally, we provide some ideas for improving the performance of artificial Tetris players.
Title: Evolving multi-modal behavior in NPCs
Authors: Jacob Schrum, R. Miikkulainen
Venue: 2009 IEEE Symposium on Computational Intelligence and Games | Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286459
Abstract: Evolution is often successful in generating complex behaviors, but evolving agents that exhibit distinctly different modes of behavior under different circumstances (multi-modal behavior) is both difficult and time consuming. This paper presents a method for encouraging the evolution of multi-modal behavior in agents controlled by artificial neural networks: A network mutation is introduced that adds enough output nodes to the network to create a new output mode. Each output mode completely defines the behavior of the network, but only one mode is chosen at any one time, based on the output values of preference nodes. With such structure, networks are able to produce appropriate outputs for several modes of behavior simultaneously, and arbitrate between them using preference nodes. This mutation makes it easier to discover interesting multi-modal behaviors in the course of neuroevolution.
Title: Iterated Prisoner's Dilemma for species
Authors: P. Hingston
Venue: 2009 IEEE Symposium on Computational Intelligence and Games | Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286498
Abstract: The Iterated Prisoner's Dilemma (IPD) is widely used to study the evolution of cooperation between self-interested agents. Existing work asks how genes that code for cooperation arise and spread through a single-species population of IPD playing agents. In this paper, we focus on competition between different species of agents. Making this distinction allows us to separate and examine macroevolutionary phenomena. We illustrate with some species-level simulation experiments with agents that use well-known strategies, and with species of agents that use team strategies.
Title: Artificial intelligence in racing games
Authors: S. Lecchi
Venue: 2009 IEEE Symposium on Computational Intelligence and Games | Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286512
Abstract: A key aspect in the development of computer games is the behavior of non-player characters. Each type of game poses different challenges for the development of a successful artificial intelligence. In racing games, this translates into programming an AI that can adapt to the driving style and capabilities of the human player so as to improve their gaming experience. In addition, the behavior of non-player characters in racing games should be plausible, challenging throughout the game, and adaptive, and it should also lead to realistic group behaviors.
Title: Modeling player experience in Super Mario Bros
Authors: Chris Pedersen, J. Togelius, Georgios N. Yannakakis
Venue: 2009 IEEE Symposium on Computational Intelligence and Games | Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286482
Abstract: This paper investigates the relationship between level design parameters of platform games, individual playing characteristics and player experience. The investigated design parameters relate to the placement and sizes of gaps in the level and the existence of direction changes; components of player experience include fun, frustration and challenge. A neural network model that maps between level design parameters, playing behavior characteristics and player reported emotions is trained using evolutionary preference learning and data from 480 platform game sessions. Results show that challenge and frustration can be predicted with a high accuracy (77.77% and 88.66% respectively) via a simple single-neuron model whereas model accuracy for fun (69.18%) suggests the use of more complex non-linear approximators for this emotion. The paper concludes with a discussion on how the obtained models can be utilized to automatically generate game levels which will enhance player experience.
Title: Measuring player experience on runtime dynamic difficulty scaling in an RTS game
Authors: Johan Hagelbäck, S. Johansson
Venue: 2009 IEEE Symposium on Computational Intelligence and Games | Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286494
Abstract: Do players find it more enjoyable to win than to play even matches? We studied what a number of players expressed after playing against different kinds of computer opponents in an RTS game. There were two static computer opponents, one easily beaten and one hard to beat, and three dynamic ones that adapted their strength to that of the player. One of the three dynamic opponents intentionally drops its performance at the end of the game to make it easy for the player to win. Our results indicate that players found it more enjoyable to play an even game against an opponent that adapts to their performance than against an opponent with static difficulty. The results also show that the computer player that dropped its performance to let the player win was the least enjoyable opponent of all.