
2007 IEEE Symposium on Computational Intelligence and Games: Latest Publications

A Multi-Agent Architecture for Game Playing
Pub Date : 2007-04-01 DOI: 10.1109/CIG.2007.368109
Ziad Kobti, Shiven Sharma
General game playing, a relatively new field in game research, presents new frontiers in building intelligent game players. The traditional premise for building a good artificially intelligent player is that the game is known to the player and pre-programmed to play accordingly. General game players challenge game programmers by not identifying the game until the beginning of game play. In this paper we explore a new approach to intelligent general game playing employing a self-organizing, multiple-agent evolutionary learning strategy. In order to decide on an intelligent move, specialized agents interact with each other and evolve competitive solutions to decide on the best move, sharing the learnt experience and using it to train themselves in a social environment. In an experimental setup using a simple board game, the evolutionary agents employing a learning strategy by training themselves from their own experiences, and without prior knowledge of the game, demonstrate to be as effective as other strong dedicated heuristics. This approach provides a potential for new intelligent game playing program design in the absence of prior knowledge of the game at hand
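The abstract leaves the details of the evolutionary scheme to the full paper, so the following is only a minimal Python sketch of one plausible reading: a population of agents, each scoring positions with a weight vector, plays round-robin games, and the weaker half copies and mutates the weights of the stronger half. The toy play_game function, the feature count, and all parameters are hypothetical placeholders, not the authors' implementation.

```python
import random

# Hypothetical sketch, not the authors' code: each agent holds a weight
# vector used to score positions; after a round-robin the weaker half of
# the population copies and mutates the weights of the stronger half,
# standing in for the "shared experience" the abstract describes.

def play_game(weights_a, weights_b):
    # Placeholder for an actual game between two agents: the agent whose
    # weights score a random position higher is declared the winner.
    position = [random.random() for _ in range(len(weights_a))]
    score_a = sum(w * x for w, x in zip(weights_a, position))
    score_b = sum(w * x for w, x in zip(weights_b, position))
    return 0 if score_a >= score_b else 1

def evolve(pop_size=10, features=5, generations=50):
    population = [[random.uniform(-1, 1) for _ in range(features)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        wins = [0] * pop_size
        for i in range(pop_size):
            for j in range(i + 1, pop_size):
                winner = i if play_game(population[i], population[j]) == 0 else j
                wins[winner] += 1
        ranked = sorted(range(pop_size), key=lambda k: wins[k], reverse=True)
        # The bottom half learns socially: copy a strong agent, then mutate.
        for loser, winner in zip(ranked[pop_size // 2:], ranked[:pop_size // 2]):
            population[loser] = [w + random.gauss(0, 0.1)
                                 for w in population[winner]]
    return population

print(evolve()[0])
```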
Citations: 13
Adversarial Planning Through Strategy Simulation
Pub Date : 2007-04-01 DOI: 10.1109/CIG.2007.368082
Frantisek Sailer, M. Buro, Marc Lanctot
Adversarial planning in highly complex decision domains, such as modern video games, has not yet received much attention from AI researchers. In this paper, we present a planning framework that uses strategy simulation in conjunction with Nash-equilibrium strategy approximation. We apply this framework to an army deployment problem in a real-time strategy game setting and present experimental results that indicate a performance gain over the scripted strategies that the system is built on. This technique provides an automated way of increasing the decision quality of scripted AI systems and is therefore ideally suited for video games and combat simulators
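The loop the abstract describes, simulating the scripted strategies against each other and then approximating a Nash equilibrium of the resulting payoff matrix, can be illustrated with a small hedged sketch. The three strategy names, the payoff table, and the use of fictitious play as the approximation method are assumptions made for illustration and are not taken from the paper.

```python
# Illustrative sketch only: estimate a payoff matrix by simulating every
# pairing of scripted strategies, then approximate a mixed Nash equilibrium
# of that zero-sum matrix with fictitious play. Strategy names and payoffs
# are invented for the example.

def simulate(strategy_a, strategy_b):
    # Placeholder for rolling the game forward under two scripts; returns
    # the payoff to the row player. The cyclic table mimics rock-paper-scissors.
    table = {("rush", "rush"): 0, ("rush", "turtle"): -1, ("rush", "expand"): 1,
             ("turtle", "rush"): 1, ("turtle", "turtle"): 0, ("turtle", "expand"): -1,
             ("expand", "rush"): -1, ("expand", "turtle"): 1, ("expand", "expand"): 0}
    return table[(strategy_a, strategy_b)]

def fictitious_play(payoff, iterations=10000):
    n = len(payoff)
    counts_row, counts_col = [0] * n, [0] * n
    row, col = 0, 0
    for _ in range(iterations):
        counts_row[row] += 1
        counts_col[col] += 1
        # Each side best-responds to the opponent's empirical mixture so far.
        row = max(range(n), key=lambda i: sum(payoff[i][j] * counts_col[j] for j in range(n)))
        col = min(range(n), key=lambda j: sum(payoff[i][j] * counts_row[i] for i in range(n)))
    total = sum(counts_row)
    return [c / total for c in counts_row]

strategies = ["rush", "turtle", "expand"]
matrix = [[simulate(a, b) for b in strategies] for a in strategies]
print(dict(zip(strategies, fictitious_play(matrix))))  # roughly 1/3 each
```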
Citations: 69
Pareto Evolution and Co-Evolution in Cognitive Neural Agents Synthesis for Tic-Tac-Toe
Pub Date : 2007-04-01 DOI: 10.1109/CIG.2007.368113
Y. J. Yau, J. Teo, P. Anthony
Although a number of multi-objective evolutionary algorithms (MOEAs) have been proposed over the last two decades, very few studies have utilized MOEAs for game agent synthesis. Recently, we have suggested a co-evolutionary implementation using the Pareto evolutionary programming (PEP) algorithm. This paper describes a series of experiments using PEP for evolving artificial neural networks (ANNs) that act as game-playing agents. Three systems are compared: (i) a canonical PEP system, (ii) a co-evolving PEP system (PCEP) with 3 different setups, and (iii) a co-evolving PEP system that uses an archive (PCEP-A) with 3 different setups. The aim of this study is to provide insights on the effects of including co-evolutionary techniques on a MOEA by investigating and comparing these 3 different approaches in evolving intelligent agents as both first and second players in a deterministic zero-sum board game. The results indicate that the canonical PEP system outperformed both co-evolutionary PEP systems as it was able to evolve ANN agents with higher quality game-playing performance as both first and second game players. Hence, this study shows that a canonical MOEA without co-evolution is desirable for the synthesis of cognitive game AI agents
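As background for readers unfamiliar with MOEAs, the sketch below shows the Pareto-dominance test and non-dominated filtering that an algorithm such as PEP builds on. The two objectives, assumed here to be win rates as first and second player, are chosen for illustration; the paper's actual objective formulation may differ.

```python
# Background sketch: the Pareto-dominance test and non-dominated filtering
# underlying any MOEA such as PEP. The two objectives, assumed here to be
# win rates as first and second player, are both maximized.

def dominates(a, b):
    """True if solution a is at least as good as b on every objective
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population):
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

agents = [(0.60, 0.40), (0.55, 0.55), (0.70, 0.30), (0.50, 0.50), (0.65, 0.45)]
print(pareto_front(agents))  # -> [(0.55, 0.55), (0.7, 0.3), (0.65, 0.45)]
```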
Citations: 5
A Comparison of Genetic Programming and Look-up Table Learning for the Game of Spoof
Pub Date : 2007-04-01 DOI: 10.1109/CIG.2007.368080
M. Wittkamp, L. Barone, Lyndon While
Many games require opponent modeling for optimal performance. The implicit learning and adaptive nature of evolutionary computation techniques offer a natural way to develop and explore models of an opponent's strategy without significant overhead. In this paper, we compare two learning techniques for strategy development in the game of Spoof, a simple guessing game of imperfect information. We compare a genetic programming approach with a look-up table based approach, contrasting the performance of each in different scenarios of the game. Results show both approaches have their advantages, but that the genetic programming approach achieves better performance in scenarios with little public information. We also trial both approaches against opponents who vary their strategy; results showing that the genetic programming approach is better able to respond to strategy changes than the look-up table based approach
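A look-up-table learner of the kind compared here can be sketched as follows: the table maps an observed state, here simply the player's own hidden coin count, to accumulated rewards per guess, and the policy picks the guess with the best average so far. The state abstraction, reward scheme, and exploration rate are assumptions for illustration, not the paper's setup.

```python
import random
from collections import defaultdict

# Hypothetical look-up-table learner, not the paper's implementation: the
# state is abstracted to the player's own hidden coin count, the table
# accumulates reward per (state, guess), and play is epsilon-greedy.

COINS = range(4)      # each player conceals 0-3 coins
N_PLAYERS = 3

table = defaultdict(lambda: defaultdict(float))   # state -> guess -> total reward
counts = defaultdict(lambda: defaultdict(int))    # state -> guess -> times tried

def choose_guess(own, explore=0.1):
    options = range(own, own + 3 * (N_PLAYERS - 1) + 1)
    if random.random() < explore or not table[own]:
        return random.choice(options)
    return max(table[own], key=lambda g: table[own][g] / counts[own][g])

def play_round():
    hidden = [random.choice(COINS) for _ in range(N_PLAYERS)]
    guess = choose_guess(hidden[0])
    table[hidden[0]][guess] += 1.0 if guess == sum(hidden) else 0.0
    counts[hidden[0]][guess] += 1

for _ in range(20000):
    play_round()
# Best learned guess for each own-coin count (expected to be around own + 3).
print({own: max(table[own], key=lambda g: table[own][g] / counts[own][g])
       for own in COINS})
```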
Citations: 11
The Evolution of Multi-Layer Neural Networks for the Control of Xpilot Agents
Pub Date : 2007-04-01 DOI: 10.1109/CIG.2007.368103
M. Parker, G. Parker
Learning controllers for the space combat game Xpilot is a difficult problem. Using evolutionary computation to evolve the weights for a neural network could create an effective/adaptive controller that does not require extensive programmer input. Previous attempts have been successful in that the controlled agents were transformed from aimless wanderers into interactive agents, but these methods have not resulted in controllers that are competitive with those learned using other methods. In this paper, we present a neural network learning method that uses a genetic algorithm to select the network inputs and node thresholds, along with connection weights, to evolve competitive Xpilot agents
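The underlying technique, a genome that encodes the weights and node thresholds of a small feed-forward network together with a genetic algorithm that selects, recombines, and mutates genomes, can be sketched as below. The 3-4-2 topology, the stand-in fitness function, and all operator settings are assumptions; in the paper the fitness comes from flying the agent in Xpilot.

```python
import math
import random

# Hedged sketch of the general technique, not the authors' system: a genome
# encodes the weights and node thresholds of a small feed-forward network,
# and a genetic algorithm selects, recombines, and mutates genomes against a
# stand-in fitness function (the real fitness comes from flying in Xpilot).

N_IN, N_HID, N_OUT = 3, 4, 2
GENOME_LEN = N_HID * (N_IN + 1) + N_OUT * (N_HID + 1)   # +1 per node threshold

def forward(genome, inputs):
    idx, hidden, outputs = 0, [], []
    for _ in range(N_HID):
        w, theta = genome[idx:idx + N_IN], genome[idx + N_IN]
        idx += N_IN + 1
        hidden.append(math.tanh(sum(wi * xi for wi, xi in zip(w, inputs)) - theta))
    for _ in range(N_OUT):
        w, theta = genome[idx:idx + N_HID], genome[idx + N_HID]
        idx += N_HID + 1
        outputs.append(math.tanh(sum(wi * hi for wi, hi in zip(w, hidden)) - theta))
    return outputs

def fitness(genome):
    # Placeholder objective: steer the two outputs toward a fixed target.
    target = [0.5, -0.2]
    out = forward(genome, [1.0, 0.0, -1.0])
    return -sum((o - t) ** 2 for o, t in zip(out, target))

def ga(pop_size=30, generations=100):
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(GENOME_LEN)               # one-point crossover
            child = [g + random.gauss(0, 0.05) if random.random() < 0.1 else g
                     for g in a[:cut] + b[cut:]]             # light mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(fitness(ga()))   # should approach 0 as evolution progresses
```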
Citations: 20
Bridge Bidding with Imperfect Information
Pub Date : 2007-04-01 DOI: 10.1109/CIG.2007.368122
L. DeLooze, J. Downey
Multiplayer games with imperfect information, such as Bridge, are especially challenging for game theory researchers. Although several algorithmic techniques have been successfully applied to the card play phase of the game, bidding requires a much different approach. We have shown that a special form of a neural network, called a self-organizing map (SOM), can be used to effectively bid no trump hands. The characteristic boundary that forms between resulting neighboring nodes in a SOM is an ideal mechanism for modeling the imprecise and ambiguous nature of the game
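For readers unfamiliar with self-organizing maps, the sketch below shows the standard SOM update on assumed hand features (scaled high-card points and suit-length fractions): the best-matching node and its neighbours are pulled toward each sample, so hands with similar features cluster on nearby nodes. The subsequent step of assigning no-trump bids to trained nodes is omitted, and none of the constants here come from the paper.

```python
import math
import random

# Minimal SOM sketch under assumed inputs: a hand is summarized by scaled
# high-card points plus four suit-length fractions, and the standard update
# pulls the best-matching node and its neighbours toward each sample.
# Assigning no-trump bids to the trained nodes is a separate, omitted step.

GRID, DIM, STEPS = 5, 5, 5000

def random_hand():
    hcp = random.randint(0, 37) / 37.0                    # high-card points, scaled
    cuts = sorted(random.sample(range(14), 3))            # split 13 cards into 4 suits
    suits = [cuts[0], cuts[1] - cuts[0], cuts[2] - cuts[1], 13 - cuts[2]]
    return [hcp] + [s / 13.0 for s in suits]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

nodes = {(i, j): [random.random() for _ in range(DIM)]
         for i in range(GRID) for j in range(GRID)}

for t in range(STEPS):
    sample = random_hand()
    lr = 0.5 * (1 - t / STEPS)                            # decaying learning rate
    radius = 2.0 * (1 - t / STEPS) + 0.5                  # decaying neighbourhood
    bmu = min(nodes, key=lambda k: dist(nodes[k], sample))
    for (i, j), w in nodes.items():
        d = math.hypot(i - bmu[0], j - bmu[1])
        if d <= radius:
            influence = math.exp(-d * d / (2 * radius * radius))
            nodes[(i, j)] = [wi + lr * influence * (s - wi) for wi, s in zip(w, sample)]

print(nodes[(0, 0)])   # one trained prototype vector
```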
Citations: 17
Inferring the Past: A Computational Exploration of the Strategies that May Have Been Used in the Aztec Board Game of Patolli
Pub Date : 2007-04-01 DOI: 10.1109/CIG.2007.368112
A. Garza, C. Flores
In this paper we use computational techniques to explore the Aztec board game of Patolli. Rules for the game were documented by the Spanish explorers that ultimately destroyed the Aztec civilization, yet there is no guarantee that the few players of Patolli that still exist follow the same strategies as the Aztec originators of the game. We implemented the rules of the game in an agent-based system and designed a series of experiments to pit game-playing agents using different strategies against each other to try to infer what makes a good strategy (and therefore what kind of information would have been taken into account by expert Aztec players back in the days when Patolli was an extremely popular game). In this paper we describe the game, explain our implementation, and present our experimental setup, results and conclusions.
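The experimental setup amounts to a round-robin tournament between strategy functions. The harness below is purely illustrative: the Patolli rules are not reproduced, play_game is a placeholder race game, and the three strategies are hypothetical stand-ins for the kinds of policies that might be compared.

```python
import itertools
import random

# Purely illustrative tournament harness; the Patolli rules are not
# reproduced here. play_game is a placeholder race game, and the three
# strategies are hypothetical stand-ins for the policies being compared.

def aggressive(positions, moves):
    return max(moves)            # always take the largest advance offered

def cautious(positions, moves):
    return min(moves)            # always take the smallest advance offered

def random_play(positions, moves):
    return random.choice(moves)

def play_game(strat_a, strat_b, length=20):
    positions, strategies, turn = [0, 0], [strat_a, strat_b], 0
    while max(positions) < length:
        moves = [random.randint(1, 3) for _ in range(2)]   # stand-in for dice throws
        positions[turn] += strategies[turn](positions, moves)
        turn = 1 - turn
    return 0 if positions[0] >= length else 1

strategies = {"aggressive": aggressive, "cautious": cautious, "random": random_play}
for (name_a, a), (name_b, b) in itertools.combinations(strategies.items(), 2):
    wins = sum(play_game(a, b) == 0 for _ in range(2000))
    print(f"{name_a} vs {name_b}: {wins / 2000:.2f} win rate for {name_a}")
```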
Citations: 0
Toward a Competitive Pool Playing Robot: Is Computational Intelligence Needed to Play Robotic Pool?
Pub Date : 2007-04-01 DOI: 10.1109/CIG.2007.368124
M. Greenspan, Joseph Lam, W. Leckie, Marc Godard, Imran Zaidi, Ken Anderson, Donna C. Dupuis, Sam Jordan
This paper describes the development of Deep Green, an intelligent robotic system that is currently in development to play competitive pool against a proficient human opponent. The design philosophy and the main system components are presented, and the progress to date is summarized. We also address a common misconception about the game of pool, i.e. that it is purely a game of physical skill, requiring little or no intelligence or strategy. We explain some of the difficulties in developing a vision-based system with a high degree of positional accuracy. We further demonstrate that even if perfect accuracy were possible, it is still beneficial and necessary to play strategically.
Citations: 4
The Game of Synchronized Cutcake
Pub Date : 2007-04-01 DOI: 10.1109/CIG.2007.368123
A. Cincotti, H. Iida
In synchronized games the players make their moves simultaneously and, as a consequence, the concept of turn does not exist. Synchronized Cutcake is the synchronized version of Cutcake, a classical two-player combinatorial game. Even though to determine the solution of Cutcake is trivial, solving Synchronized Cutcake is challenging because of the calculation of the game's value. We present the solution for small board size and some general results for a board of arbitrary size
Citations: 6
Using a Genetic Algorithm to Explore A*-like Pathfinding Algorithms
Pub Date : 2007-04-01 DOI: 10.1109/CIG.2007.368081
Ryan E. Leigh, S. Louis, C. Miles
We use a genetic algorithm to explore the space of pathfinding algorithms in Lagoon, a 3D naval real-time strategy game and training simulation. To aid in training, Lagoon tries to provide a rich environment with many agents (boats) that maneuver realistically. A*, the traditional pathfinding algorithm in games is computationally expensive when run for many agents and A* paths quickly lose validity as agents move. Although there is a large literature targeted at making A* implementations faster, we want believability and optimal paths may not be believable. In this paper we use a genetic algorithm to search the space of network search algorithms like A* to find new pathfinding algorithms that are near-optimal, fast, and believable. Our results indicate that the genetic algorithm can explore this space well and that novel pathfinding algorithms (found by our genetic algorithm) quickly find near-optimal, more-believable paths in Lagoon
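One simple way to picture the search space is an A*-like best-first search whose ranking function f(n) = wg*g(n) + wh*h(n) is parameterized, with an evolutionary loop tuning the weights against a blend of path cost and nodes expanded. The sketch below follows that assumption; the paper itself evolves richer algorithm variants than a two-weight family.

```python
import heapq
import random

# Assumed framing, not the paper's exact setup: a best-first search whose
# ranking function f(n) = wg*g(n) + wh*h(n) is parameterized, plus a tiny
# evolutionary loop tuning (wg, wh) for a blend of path cost and nodes
# expanded. With wg = wh = 1 this reduces to plain A*.

GRID = 20
OBSTACLES = {(x, 10) for x in range(3, 17)}      # a wall with gaps at both ends

def neighbours(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        q = (x + dx, y + dy)
        if 0 <= q[0] < GRID and 0 <= q[1] < GRID and q not in OBSTACLES:
            yield q

def search(start, goal, wg, wh):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])     # Manhattan heuristic
    frontier = [(wh * h(start), 0, start)]
    best_g, expanded = {start: 0}, 0
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if g > best_g.get(node, float("inf")):
            continue                                            # stale entry
        expanded += 1
        if node == goal:
            return g, expanded
        for q in neighbours(node):
            if g + 1 < best_g.get(q, float("inf")):
                best_g[q] = g + 1
                heapq.heappush(frontier, (wg * (g + 1) + wh * h(q), g + 1, q))
    return float("inf"), expanded

def fitness(weights):
    cost, expanded = search((0, 0), (19, 19), *weights)
    return -(cost + 0.01 * expanded)             # prefer short paths found cheaply

population = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(12)]
for _ in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:6]
    children = [(max(0.0, a + random.gauss(0, 0.1)), max(0.0, b + random.gauss(0, 0.1)))
                for a, b in random.choices(parents, k=6)]
    population = parents + children
print(sorted(population, key=fitness, reverse=True)[0])   # best (wg, wh) found
```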
Citations: 41