Evolving robust strategies for an abstract real-time strategy game
Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286453
David Keaveney, C. O'Riordan
This paper presents an analysis of evolved strategies for an abstract real-time strategy (RTS) game. The abstract RTS game used is a turn-based strategy game with properties such as parallel turns and imperfect spatial information. The automated player used to learn strategies employs a progressive refinement planning technique to plan its next immediate turn during the game. We describe two types of spatial tactical coordination which we posit are important in the game, and define measures for both. A set of ten strategies evolved in a single environment is compared to a second set of ten strategies evolved across a set of environments. The robustness of all evolved strategies is assessed by playing them against each other in each environment. The levels of coordination present in both sets of strategies are also measured and compared. We aim to show that evolving across multiple spatial environments is necessary to evolve robustness into our strategies.
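To make the central claim concrete, here is a toy sketch in Python (our own illustration, not the authors' system; the one-parameter strategy encoding and play_game are invented stand-ins). Fitness averaged over several environments selects for a compromise strategy, while a single-environment run overfits that one map:

```python
import random

random.seed(0)

ENVIRONMENTS = list(range(5))  # hypothetical set of spatial environments

def play_game(strategy, env):
    """Stand-in for a full game: the payoff peaks at a different
    strategy value in each environment."""
    return -abs(strategy - env * 0.2)

def fitness(strategy, envs):
    """Robust fitness: average payoff across all training environments."""
    return sum(play_game(strategy, e) for e in envs) / len(envs)

def evolve(envs, generations=50, pop_size=20):
    pop = [random.uniform(0, 1) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, envs), reverse=True)
        parents = pop[: pop_size // 2]             # truncation selection
        pop = parents + [p + random.gauss(0, 0.05) for p in parents]  # mutate
    return max(pop, key=lambda s: fitness(s, envs))

print(round(evolve([0]), 2))           # specialist: tuned to environment 0 only
print(round(evolve(ENVIRONMENTS), 2))  # generalist: compromise across all five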
{"title":"Evolving robust strategies for an abstract real-time strategy game","authors":"David Keaveney, C. O'Riordan","doi":"10.1109/CIG.2009.5286453","DOIUrl":"https://doi.org/10.1109/CIG.2009.5286453","url":null,"abstract":"This paper presents an analysis of evolved strategies for an abstract real-time strategy (RTS) game. The abstract RTS game used is a turn-based strategy game with properties such as parallel turns and imperfect spatial information. The automated player used to learn strategies uses a progressive refinement planning technique to plan its next immediate turn during the game. We describe two types of spatial tactical coordination which we posit are important in the game and define measures for both. A set of ten strategies evolved in a single environment are compared to a second set of ten strategies evolved across a set of environments. The robustness of all of evolved strategies are assessed when playing each other in each environment. Also, the levels of coordination present in both sets of strategies are measured and compared. We wish to show that evolving across multiple spatial environments is necessary to evolve robustness into our strategies.","PeriodicalId":358795,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence and Games","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126669902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A game-building environment for research in collaborative design
Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286489
S. Tanimoto, Tyler Robison, S. Fan
Collaborative design practices are evolving rapidly today as a result of improvements in telecommunications and human-computer interfaces. We present a suite of research tools that we have built in order to evaluate a particular methodology for design based on a theory of problem solving from the field of artificial intelligence. These tools are (a) a formal specification for a class of multimedia games, (b) a game-building tool called PRIME Designer, and (c) a game engine that brings games to life. The design of these tools addresses several issues: (1) support for a common language for the design process, deriving from state-space search, (2) visual interfaces for collaboration, (3) specifications for a class of games (called PRIME games) whose affordances represent a balance between simplicity and richness, (4) educating students to work in design teams that use advanced computational services, and (5) assessing the learning and contributions of each team member. We also report on a focus group study in which four undergraduate students used the tools. Our experience suggests that users without a computing background can learn how to employ state-space trees to organize the design process, and thereby gain the ability to coordinate their individual contributions to the design of a game.
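As a rough sketch of the state-space framing (our own illustration; DesignNode is a hypothetical name, not part of PRIME Designer's API), a design session can be recorded as a tree whose nodes are partial designs and whose edges are applied design operators, so team members can branch and compare alternatives:

```python
class DesignNode:
    """One node of a design state-space tree: a partial design plus the
    operator that produced it, so alternatives stay explorable side by side."""
    def __init__(self, state, parent=None, operator=None):
        self.state = state
        self.parent = parent
        self.operator = operator
        self.children = []

    def apply(self, operator, new_state):
        """Record a design move as a child node and return it."""
        child = DesignNode(new_state, parent=self, operator=operator)
        self.children.append(child)
        return child

root = DesignNode({"rooms": []})
maze = root.apply("add maze room", {"rooms": ["maze"]})
arena = root.apply("add arena room", {"rooms": ["arena"]})  # sibling alternative
print(len(root.children), "competing branches under the root design")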
{"title":"A game-building environment for research in collaborative design","authors":"S. Tanimoto, Tyler Robison, S. Fan","doi":"10.1109/CIG.2009.5286489","DOIUrl":"https://doi.org/10.1109/CIG.2009.5286489","url":null,"abstract":"Collaborative design practices are evolving rapidly today as a result of improvements in telecommunications and human-computer interfaces. We present a suite of research tools that we have built in order to evaluate a particular methodology for design based on a theory of problem solving from the field of artificial intelligence. These tools are (a) a formal specification for a class of multimedia games, (b) a game-building tool called PRIME Designer, and (c) a game engine that brings games to life. The design of these tools addresses several issues: (1) support for a common language for the design process, deriving from state-space search, (2) visual interfaces for collaboration, (3) specifications for a class of games (called PRIME games) whose affordances represent a balance between simplicity and richness, (4) educating students to work in design teams that use advanced computational services, and (5) assessing the learning and contributions of each team member. We also report on a focus group study in which four undergraduate students used the tools. Our experience suggests that users without a computing background can learn how to employ state-space trees to organize the design process, and thereby gain facilities to coordinate their individual contributions to the design of a game.","PeriodicalId":358795,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence and Games","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122149193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulated car racing
Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286504
D. Loiacono, J. Togelius, P. Lanzi
The simulated car racing competition of CIG-2009 is the final event of the 2009 Simulated Car Racing Championship, an event joining the three competitions held at CEC-2009, GECCO-2009, and CIG-2009.
{"title":"Simulated car racing","authors":"D. Loiacono, J. Togelius, P. Lanzi","doi":"10.1109/CIG.2009.5286504","DOIUrl":"https://doi.org/10.1109/CIG.2009.5286504","url":null,"abstract":"The simulated car racing competition of CIG-2009 is the final event of the 2009 Simulated Car Racing Championship, an event joining the three competitions held at CEC-2009, GECCO-2009, and CIG-2009.","PeriodicalId":358795,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence and Games","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133224366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realtime execution of automated plans using evolutionary robotics
Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286456
Thomas Thompson, J. Levine
Applying neural networks to generate robust agent controllers is now a seasoned practice, with time needed only to isolate the particulars of domain and execution. However, we are often constrained to local problems due to an agent's inability to reason in an abstract manner. While there are suitable approaches for abstract reasoning and search, issues often arise when using such offline processes in real-time situations. In this paper we explore the feasibility of a decentralised architecture that combines these approaches: a classical automated planner interfaces with a library of neural network actuators through a Prolog rule base. We explore the validity of solving a variety of goals, with and without additional hostile entities as well as added uncertainty in the world. The end result is a goal-driven agent that adapts to situations and reacts accordingly.
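A minimal sketch of this kind of layered execution, under our own assumptions (the grid world, goto_controller, and the dispatch table are invented stand-ins; in the paper the actuators are evolved neural networks selected via a Prolog rule base): each symbolic action from the plan is run to completion by its low-level controller.

```python
def goto_controller(state, target):
    """Stand-in for an evolved network: move one grid step toward target.
    Returns True once the abstract 'goto' action is complete."""
    step = lambda a, b: a + (1 if b > a else -1 if b < a else 0)
    x, y = state["pos"]
    state["pos"] = (step(x, target[0]), step(y, target[1]))
    return state["pos"] == target

ACTUATORS = {"goto": goto_controller}  # library of low-level behaviours

def execute_plan(plan, state):
    """Run each symbolic action to completion via its learned actuator."""
    for action, arg in plan:           # plan produced by a classical planner
        while not ACTUATORS[action](state, arg):
            pass                       # a real system makes one call per game tick
    return state

print(execute_plan([("goto", (2, 0)), ("goto", (2, 3))], {"pos": (0, 0)}))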
{"title":"Realtime execution of automated plans using evolutionary robotics","authors":"Thomas Thompson, J. Levine","doi":"10.1109/CIG.2009.5286456","DOIUrl":"https://doi.org/10.1109/CIG.2009.5286456","url":null,"abstract":"Applying neural networks to generate robust agent controllers is now a seasoned practice, with time needed only to isolate particulars of domain and execution. However we are often constrained to local problems due to an agents inability to reason in an abstract manner. While there are suitable approaches for abstract reasoning and search, there is often the issues that arise in using offline processes in real-time situations. In this paper we explore the feasibility of creating a decentralised architecture that combines these approaches. The approach in this paper explores utilising a classical automated planner that interfaces with a library of neural network actuators through the use of a Prolog rule base. We explore the validity of solving a variety of goals with and without additional hostile entities as well as added uncertainty in the the world. The end results providing a goal-driven agent that adapts to situations and reacts accordingly.","PeriodicalId":358795,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence and Games","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133320167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolutionary neural networks for Non-Player Characters in Quake III
Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286460
J. Westra, F. Dignum
Designing and implementing the decisions of Non-Player Characters in first-person shooter games becomes more difficult as the games get more complex. For every additional feature in a level, potentially all decisions have to be revisited and another check made for the new feature. This leads to an explosion in the number of cases that have to be checked, which in turn leads to situations where combinations of features are overlooked and Non-Player Characters act strangely in those particular circumstances. In this paper we show how evolutionary neural networks can be used to avoid these problems and lead to good, robust behavior.
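The following toy sketch, under our own assumptions (the two-feature scenarios and the single linear decision unit are stand-ins for the paper's networks and for actual Quake III play), shows the basic evolutionary loop: mutate weights and keep those whose decisions score best, instead of hand-coding a rule per feature combination.

```python
import random

random.seed(1)

def decide(weights, features):
    """One linear decision unit: engage if the weighted feature sum is positive."""
    return sum(w * f for w, f in zip(weights, features)) > 0

# hypothetical training scenarios: (level features, desired decision)
SCENARIOS = [((1.0, -0.5), True), ((-1.0, 0.8), False), ((0.2, 0.1), True)]

def fitness(weights):
    return sum(decide(weights, f) == want for f, want in SCENARIOS)

pop = [[random.gauss(0, 1) for _ in range(2)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                  # keep the best third
    pop = elite + [[w + random.gauss(0, 0.1) for w in p]
                   for p in elite for _ in range(2)]  # two mutants per parent
best = max(pop, key=fitness)
print("scenarios handled correctly:", fitness(best), "of", len(SCENARIOS))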
{"title":"Evolutionary neural networks for Non-Player Characters in Quake III","authors":"J. Westra, F. Dignum","doi":"10.1109/CIG.2009.5286460","DOIUrl":"https://doi.org/10.1109/CIG.2009.5286460","url":null,"abstract":"Designing and implementing the decisions of Non-Player Characters in first person shooter games becomes more difficult as the games get more complex. For every additional feature in a level potentially all decisions have to be revisited and another check made on this new feature. This leads to an explosion of the number of cases that have to be checked, which in its turn leads to situations where combinations of features are overlooked and Non-Player Characters act strange in those particular circumstances. In this paper we show how evolutionary neural networks can be used to avoid these problems and lead to good and robust behavior.","PeriodicalId":358795,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence and Games","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116013004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On the effects of locality in a permutation problem: The Sudoku Puzzle
Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286491
E. López, M. O’Neill
We present an analysis of an application of Evolutionary Computation to the Sudoku Puzzle. In particular, we are interested in understanding the locality of the search operators employed, and the difficulty of the problem landscape. Treating the Sudoku puzzle as a permutation problem, we analyse the locality of four permutation-based crossover operators, namely One Cycle Crossover, Multi-Cycle Crossover, Partially Matched Crossover (PMX), and Uniform Swap Crossover. These were analysed using different crossover rates. Experimental evidence is found to support the hypothesis that the PMX and Uniform Swap Crossover operators have better locality properties than the other operators examined, regardless of the crossover rates used. Fitness distance correlation, a well-known measure of hardness, is used to analyse problem difficulty, and the results are consistent with the difficulty levels associated with the benchmark Sudoku puzzles analysed.
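Of the four operators, PMX has the least obvious mechanics, so a compact reference implementation of the classic two-point PMX may help (our sketch, not the paper's code): a segment is copied from one parent, conflicts are repaired by following the mapping between the two segments, and the result is always a valid permutation.

```python
def pmx(p1, p2, a, b):
    """Classic Partially Matched Crossover between cut points a..b (exclusive)."""
    child = [None] * len(p1)
    child[a:b] = p1[a:b]                  # copy the matched segment from p1
    for i in range(a, b):                 # place the genes p2 had in that segment
        gene = p2[i]
        if gene in child[a:b]:
            continue                      # already present via p1's segment
        pos = i
        while a <= pos < b:               # follow the p1 -> p2 mapping chain
            pos = p2.index(p1[pos])
        child[pos] = gene
    for i in range(len(p1)):              # everything else comes straight from p2
        if child[i] is None:
            child[i] = p2[i]
    return child

p1 = [1, 2, 3, 4, 5, 6, 7, 8, 9]
p2 = [9, 3, 7, 8, 2, 6, 5, 1, 4]
print(pmx(p1, p2, 3, 7))  # [9, 3, 2, 4, 5, 6, 7, 1, 8], a valid permutation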
{"title":"On the effects of locality in a permutation problem: The Sudoku Puzzle","authors":"E. López, M. O’Neill","doi":"10.1109/CIG.2009.5286491","DOIUrl":"https://doi.org/10.1109/CIG.2009.5286491","url":null,"abstract":"We present an analysis of an application of Evolutionary Computation to the Sudoku Puzzle. In particular, we are interested in understanding the locality of the search operators employed, and the difficulty of the problem landscape. Treating the Sudoku puzzle as a permutation problem we analyse the locality of four permutation-based crossover operators, named One Cycle Crossover, Multi-Cycle Crossover, Partially Matched Crossover (PMX) and Uniform Swap Crossover. These were analysed using different crossover rates. Experimental evidence is found to support the hypothesis that PMX and Uniform Swap Crossover operators have better properties of locality relative to the other operators examined regardless of the crossover rates used. Fitness distance correlation, a well-known measure of hardness, is used to analyse problem difficulty and the results are consistent with the difficulty levels associated with the benchmark Sudoku puzzles analysed.","PeriodicalId":358795,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence and Games","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115601965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How a genetic algorithm learns to play Traveler's Dilemma by choosing dominated strategies to achieve greater payoffs
Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286474
M. Pace
In game theory, the Traveler's Dilemma (abbreviated TD) is a non-zero-sum game in which two players attempt to maximize their own payoff without deliberately trying to harm the opponent. In the classical formulation of the problem, game theory predicts that, if both players are purely rational, they will always choose the strategy corresponding to the game's Nash equilibrium. However, when played experimentally, most human players select much higher values (usually close to $100), deviating strongly from the Nash equilibrium and obtaining, on average, much higher rewards. In this paper we analyze the behaviour of a genetic algorithm that, by repeatedly playing the game, evolves its strategy in order to maximize payoffs. The population has no a priori knowledge about the game; the fitness function rewards the individuals who obtain high payoffs at the end of each game session. We demonstrate that, when it is possible to assign a probability measure to each strategy, the search for good strategies can be effectively translated into a problem of search in a measure space using, for example, genetic algorithms. Furthermore, encoding the genome as a probability distribution allows the analysis of common crossover and mutation operators in the uncommon case where the genome is a probability measure.
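A minimal sketch of this setup, under our own assumptions about the details (population size, mutation scheme, and round-robin fitness are our choices; the payoff function follows the classic $2..$100 formulation with a $2 bonus/penalty): each genome is a probability distribution over claims, mutation perturbs and renormalises it, and fitness is the payoff earned against the rest of the population.

```python
import random

random.seed(2)
CLAIMS = list(range(2, 101))  # the classic $2..$100 claim range

def payoff(a, b, bonus=2):
    """Traveler's Dilemma payoff for claiming a against an opponent claiming b."""
    if a == b:
        return a
    return min(a, b) + (bonus if a < b else -bonus)

def sample(genome):
    """A genome is a probability distribution over claims; play by sampling it."""
    return random.choices(CLAIMS, weights=genome)[0]

def mutate(genome, sigma=0.05):
    g = [max(1e-6, w + random.gauss(0, sigma)) for w in genome]
    s = sum(g)
    return [w / s for w in g]             # renormalise so it stays a distribution

pop = [[1.0 / len(CLAIMS)] * len(CLAIMS) for _ in range(20)]
for _ in range(100):
    claims = [sample(g) for g in pop]     # every individual plays the round
    fit = [sum(payoff(claims[i], claims[j]) for j in range(len(pop)) if j != i)
           for i in range(len(pop))]
    ranked = [g for _, g in sorted(zip(fit, pop), key=lambda t: t[0], reverse=True)]
    pop = ranked[:10] + [mutate(g) for g in ranked[:10]]

best = pop[0]
print("expected claim of the best genome:",
      round(sum(c * w for c, w in zip(CLAIMS, best)), 1))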
{"title":"How a genetic algorithm learns to play Traveler's Dilemma by choosing dominated strategies to achieve greater payoffs","authors":"M. Pace","doi":"10.1109/CIG.2009.5286474","DOIUrl":"https://doi.org/10.1109/CIG.2009.5286474","url":null,"abstract":"In game theory, the Traveler's Dilemma (abbreviated TD) is a non-zero-sum 1 game in which two players attempt to maximize their own payoff without deliberately willing to damage the opponent. In the classical formulation of this problem, game theory predicts that, if both players are purely rational, they will always choose the strategy corresponding to the Nash equilibrium for the game. However, when played experimentally, most human players select much higher values (usually close to $100), deviating strongly from the Nash equilibrium and obtaining, on average, much higher rewards. In this paper we analyze the behaviour of a genetic algorithm that, by repeatedly playing the game, evolves the strategy in order to maximize the payoffs. In the algorithm, the population has no a priori knowledge about the game. The fitness function rewards the individuals who obtain high payoffs at the end of each game session. We demonstrate that, when it is possible to assign to each strategy a probability measure, then the search for good strategies can be effectively translated into a problem of search in a measure space using, for example, genetic algorithms. Furthermore, the codification of the genome as a probability distribution allows the analysis of common crossover and mutation operators in the uncommon case where the genome is a probability measure.","PeriodicalId":358795,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence and Games","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130726181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical controller learning in a First-Person Shooter
Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286463
N. V. Hoorn, J. Togelius, J. Schmidhuber
We describe the architecture of a hierarchical learning-based controller for bots in the First-Person Shooter (FPS) game Unreal Tournament 2004. The controller is inspired by the subsumption architecture commonly used in behaviour-based robotics. A behaviour selector decides which of three sub-controllers gets to control the bot at each time step. Each sub-controller is implemented as a recurrent neural network, and trained with artificial evolution to perform combat, exploration, and path following, respectively. The behaviour selector is trained with a multiobjective evolutionary algorithm to achieve an effective balancing of the lower-level behaviours. We argue that FPS games provide good environments for studying the learning of complex behaviours, and that the methods proposed here can help develop interesting opponents for games.
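The control flow can be sketched as follows (our own illustration: the trivial sub-controllers and the hand-written selector stand in for the evolved recurrent networks and the evolved selector of the paper). At each tick the selector picks exactly one sub-controller to act:

```python
def combat(obs):      return "shoot"
def explore(obs):     return "wander"
def follow_path(obs): return "step-to-waypoint"

SUB_CONTROLLERS = [combat, explore, follow_path]

def behaviour_selector(obs):
    """Hand-written stand-in; the paper evolves this selection policy
    with a multiobjective evolutionary algorithm."""
    if obs["enemy_visible"]:
        return 0  # combat subsumes everything else
    if obs["has_waypoint"]:
        return 2  # path following
    return 1      # exploration as the default

def tick(obs):
    """One control step: the selector picks exactly one sub-controller."""
    return SUB_CONTROLLERS[behaviour_selector(obs)](obs)

print(tick({"enemy_visible": False, "has_waypoint": True}))  # step-to-waypoint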
{"title":"Hierarchical controller learning in a First-Person Shooter","authors":"N. V. Hoorn, J. Togelius, J. Schmidhuber","doi":"10.1109/CIG.2009.5286463","DOIUrl":"https://doi.org/10.1109/CIG.2009.5286463","url":null,"abstract":"We describe the architecture of a hierarchical learning-based controller for bots in the First-Person Shooter (FPS) game Unreal Tournament 2004. The controller is inspired by the subsumption architecture commonly used in behaviourbased robotics. A behaviour selector decides which of three sub-controllers gets to control the bot at each time step. Each controller is implemented as a recurrent neural network, and trained with artificial evolution to perform respectively combat, exploration and path following. The behaviour selector is trained with a multiobjective evolutionary algorithm to achieve an effective balancing of the lower-level behaviours. We argue that FPS games provide good environments for studying the learning of complex behaviours, and that the methods proposed here can help developing interesting opponents for games.","PeriodicalId":358795,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence and Games","volume":"50 Suppl 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131000852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural networks compete with expert human players in solving the Double Dummy Bridge Problem
Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286484
J. Mańdziuk, K. Mossakowski
Artificial neural networks, trained only on sample bridge deals, without being given any human knowledge or even the rules of the game, are applied to solving the Double Dummy Bridge Problem (DDBP). The problem, in its basic form, consists in estimating the number of tricks to be taken by one pair of bridge players.
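As a toy illustration of the learning setup only (invented features and data; the paper trains full neural networks on raw deal representations, not a linear unit on high-card points), one can regress from a deal description to a trick count:

```python
import random

random.seed(3)

# hypothetical training pairs: (high-card points held by one pair, tricks taken)
DEALS = [(20, 7), (25, 9), (28, 10), (15, 5), (32, 12)]

def predict(w, b, hcp):
    return w * hcp + b

w, b, lr = 0.0, 0.0, 0.0005
for _ in range(20000):                    # plain stochastic gradient descent
    x, y = random.choice(DEALS)
    err = predict(w, b, x) - y
    w -= lr * err * x
    b -= lr * err

print("predicted tricks with 23 HCP:", round(predict(w, b, 23), 1))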
{"title":"Neural networks compete with expert human players in solving the Double Dummy Bridge Problem","authors":"J. Mańdziuk, K. Mossakowski","doi":"10.1109/CIG.2009.5286484","DOIUrl":"https://doi.org/10.1109/CIG.2009.5286484","url":null,"abstract":"Artificial neural networks, trained only on sample bridge deals, without presentation of any human knowledge as well as the rules of the game, are applied to solving the Double Dummy Bridge Problem (DDBP). The problem, in its basic form, consist in estimation of the number of tricks to be taken by one pair of bridge players.","PeriodicalId":358795,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence and Games","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131039790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monte Carlo search applied to card selection in Magic: The Gathering
Pub Date: 2009-09-07 | DOI: 10.1109/CIG.2009.5286501
C. D. Ward, P. Cowling
We present the card game Magic: The Gathering as an interesting test bed for AI research. We believe that the complexity of the game offers new challenges in areas such as search in imperfect information domains and opponent modelling. Since there are thousands of possible cards, and many cards change the rules to some extent, building a successful AI for Magic: The Gathering ultimately requires a rather general form of game intelligence (although we consider only a small subset of these cards in this paper). We create a range of players based on stochastic, rule-based, and Monte Carlo approaches, and investigate Monte Carlo search with and without the use of a sophisticated rule-based approach to generate game rollouts. We also examine the effect of increasing numbers of Monte Carlo simulations on playing strength, and investigate whether Monte Carlo simulations can enable an otherwise weak player to overcome a stronger rule-based player. Overall, we show that Monte Carlo search is a promising avenue for generating a strong AI player for Magic: The Gathering.
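A minimal sketch of Monte Carlo card selection under our own assumptions (the effect model and win_bias state are invented; the paper's rollouts play out actual games, optionally guided by the rule-based player): evaluate each legal card by averaging rollout outcomes after playing it, then choose the best.

```python
import random

random.seed(4)

def rollout(state):
    """Stand-in for finishing the game with random (or rule-based) play;
    returns 1 for a win, 0 for a loss."""
    return 1 if random.random() < state["win_bias"] else 0

def play_card(card, state):
    """Hypothetical effect model: stronger cards raise our rollout win chance."""
    return {"win_bias": min(1.0, state["win_bias"] + card / 100.0)}

def choose_card(legal_cards, state, n_sims=200):
    """Monte Carlo card selection: average rollout results after each option."""
    def value(card):
        nxt = play_card(card, state)
        return sum(rollout(nxt) for _ in range(n_sims)) / n_sims
    return max(legal_cards, key=value)

print(choose_card([5, 20, 35], {"win_bias": 0.3}))  # usually selects 35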
{"title":"Monte Carlo search applied to card selection in Magic: The Gathering","authors":"C. D. Ward, P. Cowling","doi":"10.1109/CIG.2009.5286501","DOIUrl":"https://doi.org/10.1109/CIG.2009.5286501","url":null,"abstract":"We present the card game Magic: The Gathering as an interesting test bed for AI research. We believe that the complexity of the game offers new challenges in areas such as search in imperfect information domains and opponent modelling. Since there are a thousands of possible cards, and many cards change the rules to some extent, to successfully build AI for Magic: The Gathering ultimately requires a rather general form of game intelligence (although we only consider a small subset of these cards in this paper). We create a range of players based on stochastic, rule-based and Monte Carlo approaches and investigate Monte Carlo search with and without the use of a sophisticated rule-based approach to generate game rollouts. We also examine the effect of increasing numbers of Monte Carlo simulations on playing strength and investigate whether Monte Carlo simulations can enable an otherwise weak player to overcome a stronger rule-based player. Overall, we show that Monte Carlo search is a promising avenue for generating a strong AI player for Magic: The Gathering.","PeriodicalId":358795,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence and Games","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126931823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}