Particle systems are a representation, computation, and rendering method for special effects such as fire, smoke, explosions, electricity, water, and magic, among many other phenomena. This paper presents NEAT Particles, a new design, representation, and animation method for particle systems tailored to real-time effects in video games and simulations. In NEAT Particles, the NeuroEvolution of Augmenting Topologies (NEAT) method evolves artificial neural networks (ANNs) that control the appearance and motion of particles. NEAT Particles affords three primary advantages over traditional particle effect development methods. First, it decouples the creation of new particle effects from mathematics and programming, enabling users with little knowledge of either to produce complex effects. Second, it allows content designers to evolve a broader range of effects than typical development tools through a form of interactive evolutionary computation (IEC). Finally, it acts as a concept generator, allowing users to interactively explore the space of possible effects. In the future, such a system may allow content to be evolved within the game itself, as it is played.
E. Hastings, R. Guha, and Kenneth O. Stanley, "NEAT Particles: Design, Representation, and Animation of Particle System Effects," 2007 IEEE Symposium on Computational Intelligence and Games, April 2007. doi:10.1109/CIG.2007.368092
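To make the data flow in the abstract above concrete, here is a minimal sketch of the per-particle network query that such a system relies on: particle state goes in, velocity and colour come out. A real NEAT network has an evolved, arbitrary topology; this fixed two-layer net with hand-picked illustrative weights is only a stand-in, and the input/output layout is an assumption.

```python
import math

def query_particle_net(weights, px, py, pz, t):
    """Sketch of an ANN controlling one particle: position and age in,
    velocity and colour out. Fixed topology for illustration only; NEAT
    would evolve both the weights and the topology."""
    inputs = [px, py, pz, t]
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)))
              for row in weights["hidden"]]      # hidden activations
    outputs = [math.tanh(sum(w * h for w, h in zip(row, hidden)))
               for row in weights["out"]]        # e.g. vx, vy, vz, r, g, b
    return outputs

# Tiny hand-picked weight set, purely for illustration.
weights = {
    "hidden": [[0.5, -0.2, 0.1, 0.3], [0.1, 0.4, -0.3, 0.2]],
    "out": [[0.7, -0.1]] * 6,
}
velocity_and_colour = query_particle_net(weights, 1.0, 0.0, -1.0, 0.5)
```

Evaluating this network once per particle per frame is cheap enough for real-time use, which is what makes the approach viable for games.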
Evolutionary algorithms are commonly used to create high-performing strategies or agents for computer games. In this paper, we instead choose to evolve the racing tracks in a car racing game. An evolvable track representation is devised, and a multiobjective evolutionary algorithm maximises the entertainment value of the track relative to a particular human player. This requires a way to create accurate models of players' driving styles, as well as a tentative definition of when a racing track is fun, both of which are provided. We believe this approach opens up interesting new research questions and is potentially applicable to commercial racing games.
J. Togelius, R. D. Nardi, and S. Lucas, "Towards automatic personalised content creation for racing games," 2007 IEEE Symposium on Computational Intelligence and Games, April 2007. doi:10.1109/CIG.2007.368106
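The abstract above leaves the definition of "fun" tentative, so the following is a purely hypothetical sketch of what an entertainment objective of this kind could look like: simulate the modelled player on a candidate track and reward tracks that the player drives quickly but not uniformly easily (some lap-to-lap variation as a proxy for challenge). The features and weights here are invented, not the paper's.

```python
def track_entertainment(player_model_laps):
    """Hypothetical entertainment score for a candidate track, computed
    from the lap times a learned player model achieves on it: prefer
    fast laps (low mean) with some variability (non-trivial challenge)."""
    n = len(player_model_laps)
    mean = sum(player_model_laps) / n
    var = sum((t - mean) ** 2 for t in player_model_laps) / n
    return -mean + 0.5 * var ** 0.5   # faster laps, plus some variability
```

An evolutionary algorithm would then mutate the track representation and keep the candidates that score highest for the particular modelled player.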
The use of memory in coevolutionary systems is considered an important mechanism for countering the Red Queen effect. Our research incorporates a memory population against which the coevolving populations compete, yielding a fitness that is influenced by past generations. This long-term fitness allows continuous learning, rewarding individuals that do well against both the current populations and previous winners. By allowing continued learning, individuals increase their overall ability to play the game of TEMPO, rather than merely winning a single round against the current opposition.
P. Avery, Z. Michalewicz, and Martin Schmidt, "A Historical Population in a Coevolutionary System," 2007 IEEE Symposium on Computational Intelligence and Games, April 2007. doi:10.1109/CIG.2007.368085
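The long-term fitness described above can be sketched as a weighted blend of performance against the current opponents and against an archive of past winners. The mechanics below (the 50/50 weighting, the toy win/loss game) are assumptions for illustration, not the paper's exact scheme.

```python
def play(a, b):
    """Toy stand-in for one game: 1.0 for a win, 0.0 for a loss."""
    return 1.0 if a > b else 0.0

def long_term_fitness(individual, current_opponents, memory_population,
                      play, memory_weight=0.5):
    """Fitness with a historical memory population: score against both
    the current coevolving opponents and archived past winners, so that
    progress cannot be silently erased by cycling (the Red Queen effect)."""
    now = sum(play(individual, o) for o in current_opponents)
    past = sum(play(individual, m) for m in memory_population)
    n_now = max(len(current_opponents), 1)
    n_past = max(len(memory_population), 1)
    return (1 - memory_weight) * now / n_now + memory_weight * past / n_past
```

After each generation, the best current individuals would be copied into the memory population so the archive tracks the history of winning strategies.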
The UCB1 algorithm for the multi-armed bandit problem has been extended to the UCT algorithm, which works for minimax tree search. We have developed a Monte-Carlo program, MoGo, which is the first computer Go program using UCT. We explain our modifications of UCT for the Go application, as well as the sequence-like random simulation with patterns that has significantly improved the performance of MoGo. UCT combined with pruning techniques for large Go boards is discussed, as is the parallelization of UCT. MoGo is now a top-level computer Go program on the 9×9 board.
Yizao Wang and S. Gelly, "Modifications of UCT and sequence-like simulations for Monte-Carlo Go," 2007 IEEE Symposium on Computational Intelligence and Games, April 2007. doi:10.1109/CIG.2007.368095
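The UCB1 rule the abstract above builds on is compact enough to sketch directly: at each tree node, UCT picks the child maximising mean reward plus an exploration bonus that shrinks as the child is visited more often. This is the textbook formula, not MoGo's actual code.

```python
import math

def ucb1_choice(wins, visits, c=math.sqrt(2)):
    """Pick the index of the child maximising the UCB1 value
    w/n + c * sqrt(ln(N) / n).  Unvisited children are tried first."""
    total = sum(visits)
    best, best_value = None, -float("inf")
    for i, (w, n) in enumerate(zip(wins, visits)):
        if n == 0:
            return i                      # always expand unvisited moves
        value = w / n + c * math.sqrt(math.log(total) / n)
        if value > best_value:
            best, best_value = i, value
    return best
```

UCT applies this choice recursively from the root, plays a random (in MoGo's case, pattern-guided) simulation from the leaf, and backs the result up along the visited path.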
This paper describes the Snooker Machine, an intelligent robotic system that was built between late 1985 and early 1988. The project was documented by the BBC over the course of two years; "The Snooker Machine" was broadcast on the BBC's terrestrial channel in the UK in the one-hour Q.E.D. science programme of 16th March 1988. This paper summarizes the technical details of the system, which consisted of a vision system, a fuzzy expert system, and a robot manipulator. It outlines some of the difficulties that the Snooker Machine had to overcome in playing a game of snooker against a human player. Given the recent interest in developing robotic systems to play pool (Leckie and Greenspan, 2005; Greenspan, 2006; Ghan et al., 2002), this paper looks back at some of these issues. It also outlines some computational intelligence approaches that may lead to solving some of the remaining problems with today's technology.
Kenneth H. L. Ho, T. Martin, and J. Baldwin, "Snooker Robot Player - 20 Years on," 2007 IEEE Symposium on Computational Intelligence and Games, April 2007. doi:10.1109/CIG.2007.368072
Like robot soccer, robot hockey is a game played between two teams of robots. A robot hockey simulator has been created for the purpose of game-strategy testing and result visualization. One major modification in robot hockey is the addition of a puck-shooting mechanism to each robot; as a result, the mechanics of interaction between the robots and the hockey puck become a key design issue. This paper describes the simulator design considerations for robotic hockey games. A potential-field-based strategy planner is implemented and used to develop strategies for moving the robots autonomously. The results of the simulation study show both successful cooperation between robots (at the strategy level) and realistic interaction between the robots and the puck.
Wayne Y. Chen and S. Payandeh, "Micro Robot Hockey Simulator - Game Engine Design," 2007 IEEE Symposium on Computational Intelligence and Games, April 2007. doi:10.1109/CIG.2007.368073
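A potential-field planner of the kind mentioned above can be sketched in a few lines: the puck attracts the robot, other robots repel it, and the robot moves along the resulting force. The gains and the inverse-square repulsion profile here are hypothetical choices, not taken from the paper.

```python
def potential_field_step(robot, puck, obstacles, k_att=1.0, k_rep=0.5):
    """One planning step: sum an attractive force toward the puck and
    repulsive forces away from each obstacle (e.g. other robots).
    Positions are (x, y) tuples; returns the net force vector."""
    fx = k_att * (puck[0] - robot[0])
    fy = k_att * (puck[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d2 = dx * dx + dy * dy
        if d2 > 1e-9:                      # repulsion falls off with distance
            fx += k_rep * dx / d2
            fy += k_rep * dy / d2
    return fx, fy
```

Higher-level strategy (who chases, who defends) would then be expressed by switching which target each robot's attractive term points at.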
A combinatorial graph can be used to place a geography on a population of evolving agents. In this paper, agents are trained to play the prisoner's dilemma while situated on combinatorial graphs. A collection of thirteen different combinatorial graphs is used. The graph always limits which agents can mate during reproduction. Two sets of experiments are performed for each graph: one in which agents play the prisoner's dilemma only against their neighbors, and one in which fitness is evaluated by a round-robin tournament among all population members. Populations are evaluated on their level of cooperativeness, the type of play they engage in, and the type and diversity of strategies that are present. This latter analysis relies on the fingerprinting of players, a representation-independent method of identifying strategies. Changing the combinatorial graph on which a population lives is found to yield statistically significant changes in the character of the evolved populations for all the metrics used.
D. Ashlock, "Cooperation in Prisoner's Dilemma on Graphs," 2007 IEEE Symposium on Computational Intelligence and Games, April 2007. doi:10.1109/CIG.2007.368078
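The neighbour-play evaluation described above can be sketched with the standard prisoner's dilemma payoff table (T=5, R=3, P=1, S=0 is the conventional choice; the paper's exact values are not restated in the abstract): each agent plays only the agents adjacent to it on the graph.

```python
def ipd_payoff(a, b):
    """One-shot prisoner's dilemma payoff for player a, where moves are
    'C' (cooperate) or 'D' (defect): T=5, R=3, P=1, S=0."""
    table = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    return table[(a, b)]

def neighbour_fitness(graph, moves):
    """Neighbour-play evaluation on a combinatorial graph: each agent's
    fitness is its total payoff against its neighbours only.
    `graph` maps agent -> list of neighbours; `moves` maps agent -> move."""
    return {agent: sum(ipd_payoff(moves[agent], moves[n]) for n in graph[agent])
            for agent in graph}
```

The round-robin variant in the paper's second experiment is the same computation with every agent adjacent to every other.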
The notion of constructing a metric of the degree to which a player enjoys a given game has been presented previously. In this paper, we attempt to construct such metric models of children's 'fun' when playing the Bug Smasher game on the Playware platform. First, a set of numerical features derived from a child's interaction with the Playware hardware is presented. Then the sequential forward selection and n-best feature selection algorithms are employed, together with a function approximator based on an artificial neural network, to construct feature sets and a function that model the child's notion of 'fun' for this game. The performance of the model is evaluated by the degree to which the preferences it predicts match those expressed by the children in a survey experiment. The results show that an effective model can be constructed using these techniques and that sequential forward selection performs better in this task than n-best selection. The model reveals differing preferences for game parameters between children who react quickly to game events and those who react slowly. The limitations of the methodology, and its use as an effective adaptive mechanism for entertainment augmentation, are discussed.
Georgios N. Yannakakis and J. Hallam, "Game and Player Feature Selection for Entertainment Capture," 2007 IEEE Symposium on Computational Intelligence and Games, April 2007. doi:10.1109/CIG.2007.368105
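Sequential forward selection, the better-performing method in the study above, is a simple greedy loop. The sketch below uses a generic `evaluate` callback as a stand-in for the paper's ANN-based fun model; the stopping rule (stop when no single addition improves the score) is one common convention.

```python
def sequential_forward_selection(features, evaluate, max_size=None):
    """Greedy SFS: start from the empty set and repeatedly add the single
    feature that most improves `evaluate` on the growing set, stopping
    when no remaining feature helps (or when max_size is reached)."""
    selected, best_score = [], -float("inf")
    remaining = list(features)
    while remaining and (max_size is None or len(selected) < max_size):
        scored = [(evaluate(selected + [f]), f) for f in remaining]
        score, f = max(scored)
        if score <= best_score:
            break                          # no feature improves the model
        selected.append(f)
        remaining.remove(f)
        best_score = score
    return selected, best_score
```

With d candidate features, SFS needs O(d^2) model evaluations, versus the exponential cost of exhaustive subset search, which is why it suits expensive ANN-based evaluators.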
This paper examines the performance and adaptability of evolutionary, learning, and memetic strategies under different environment settings in the iterated prisoner's dilemma (IPD). A memetic adaptation framework is devised for IPD strategies to exploit the complementary features of evolution and learning. In this paradigm, learning serves as a form of directed search that guides evolutionary strategies toward good strategy traits, while evolution helps to minimize the disparity in performance between learning strategies. A cognitive double-loop incremental learning scheme (ILS) encompassing a perception component, probabilistic revision of strategies, and a feedback learning mechanism is also proposed and incorporated into evolution. Simulation results verify that the two techniques, when employed together, complement each other's strengths and compensate for each other's weaknesses, leading to the formation of good strategies that adapt and thrive in complex, dynamic environments.
H. Quek and C. Goh, "Adaptation of Iterated Prisoner's Dilemma Strategies by Evolution and Learning," 2007 IEEE Symposium on Computational Intelligence and Games, April 2007. doi:10.1109/CIG.2007.368077
This paper studies the effect of varying the depth of look-ahead for heuristic search in temporal difference (TD) learning and game playing. The acquisition of position evaluation functions for the game of Othello is studied. The paper provides important insights into the strengths and weaknesses of using different search depths during learning when ε-greedy exploration is applied. The main finding is that, contrary to popular belief, for Othello better playing strategies are found when TD learning is applied with lower look-ahead search depths.
T. Runarsson and Egill Orn Jonsson, "Effect of look-ahead search depth in learning position evaluation functions for Othello using ε-greedy exploration," 2007 IEEE Symposium on Computational Intelligence and Games, April 2007. doi:10.1109/CIG.2007.368100
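The two ingredients combined in the study above, ε-greedy move selection and the TD value update, are compact enough to sketch. This is the generic textbook form; how the `value` function folds in look-ahead search (depth-limited minimax over the learned evaluation) is the paper's experimental variable and is not shown here.

```python
import random

def epsilon_greedy_move(moves, value, epsilon=0.1, rng=random):
    """With probability epsilon, play a random legal move (exploration);
    otherwise play the move with the highest evaluation (exploitation)."""
    if rng.random() < epsilon:
        return rng.choice(moves)
    return max(moves, key=value)

def td0_update(v_current, v_next, alpha=0.01):
    """TD(0) step: nudge the current position's value toward the value
    of the position actually reached, with learning rate alpha."""
    return v_current + alpha * (v_next - v_current)
```

During learning, the evaluation function is updated with `td0_update` after every move chosen by `epsilon_greedy_move`; the paper's finding is that shallower look-ahead inside `value` during learning gave stronger final play for Othello.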