
Latest publications from the 2013 IEEE Conference on Computational Intelligence in Games (CIG)

Reactive strategy choice in StarCraft by means of Fuzzy Control
Pub Date: 2013-10-17 DOI: 10.1109/CIG.2013.6633627
M. Preuss, Daniel Kozakowski, Johan Hagelbäck, H. Trautmann
Current StarCraft bots are not very flexible in their strategy choice; most of them just follow a single manually optimized strategy, usually a rush. We suggest a method of augmenting existing bots via Fuzzy Control in order to make them react to the current game situation. According to the available information, the best-matching strategy from a pool of strategies is chosen. While the method is very general and can easily be applied to many bots, we implement it for the existing BTHAI bot and show experimentally how the modification affects its gameplay and how it improves on the original version.
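As a rough illustration of the idea, the sketch below (a minimal, hypothetical example, not the authors' BTHAI implementation) fuzzifies a few invented game-state features, fires a small rule base, and picks the best-matching strategy from a pool:

```python
# Hypothetical fuzzy strategy selection; the features, membership functions,
# and rule base are invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(state):
    """Map crisp observations to fuzzy memberships in [0, 1]."""
    return {
        "enemy_army_small": tri(state["enemy_army"], -1, 0, 40),
        "enemy_army_large": tri(state["enemy_army"], 20, 60, 1000),
        "own_economy_weak": tri(state["workers"], -1, 0, 30),
        "own_economy_strong": tri(state["workers"], 15, 40, 1000),
    }

# Each rule: antecedent memberships (ANDed via min) -> strategy it supports.
RULES = [
    (("enemy_army_small", "own_economy_weak"), "rush"),
    (("enemy_army_large", "own_economy_weak"), "defend"),
    (("enemy_army_large", "own_economy_strong"), "macro_push"),
    (("enemy_army_small", "own_economy_strong"), "expand"),
]

def choose_strategy(state):
    mu = fuzzify(state)
    scores = {}
    for antecedents, strategy in RULES:
        strength = min(mu[a] for a in antecedents)               # fuzzy AND
        scores[strategy] = max(scores.get(strategy, 0.0), strength)  # fuzzy OR
    return max(scores, key=scores.get)

print(choose_strategy({"enemy_army": 55, "workers": 35}))  # -> macro_push
```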
Citations: 7
Adaptive game level creation through rank-based interactive evolution
Pub Date: 2013-10-17 DOI: 10.1109/CIG.2013.6633651
Antonios Liapis, H. P. Martínez, J. Togelius, Georgios N. Yannakakis
This paper introduces Rank-based Interactive Evolution (RIE), an alternative to interactive evolution driven by computational models of user preferences to generate personalized content. In RIE, the computational models are adapted to the preferences of users and, in turn, are used as fitness functions for the optimization of the generated content. The preference models are built via ranking-based preference learning, while the content is generated via evolutionary search. The proposed method is evaluated on the creation of strategy game maps, and its performance is tested using artificial agents. Results suggest that RIE is both faster and more robust than standard interactive evolution and outperforms other state-of-the-art interactive evolution approaches.
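The toy sketch below illustrates the RIE loop under stated assumptions: a linear preference model (an assumption; the paper uses ranking-based preference learning with richer models) is updated from simulated user rankings and then serves as the fitness function for a simple evolutionary step:

```python
# Toy RIE loop: learn a preference model from rankings, use it as fitness.
# The linear model, features, and mutation scheme are illustrative
# assumptions, not the paper's exact algorithm.
import random

random.seed(1)
DIM = 4              # hypothetical feature vector describing a generated map
w = [0.0] * DIM      # preference model weights

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def update_preference_model(preferred, other, lr=0.1):
    """Perceptron-style update from one user ranking: preferred > other."""
    global w
    if score(preferred) <= score(other):
        w = [wi + lr * (p - o) for wi, p, o in zip(w, preferred, other)]

def mutate(x, sigma=0.2):
    return [xi + random.gauss(0, sigma) for xi in x]

population = [[random.random() for _ in range(DIM)] for _ in range(10)]
for generation in range(20):
    # Simulated user: secretly prefers high values of feature 0.
    a, b = random.sample(population, 2)
    preferred, other = (a, b) if a[0] > b[0] else (b, a)
    update_preference_model(preferred, other)
    # Model-driven evolution: keep the best half under the learned fitness.
    population.sort(key=score, reverse=True)
    population = population[:5] + [mutate(p) for p in population[:5]]

print("learned weights:", [round(wi, 2) for wi in w])
```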
Citations: 30
Using plan-based reward shaping to learn strategies in StarCraft: Broodwar
Pub Date: 2013-10-17 DOI: 10.1109/CIG.2013.6633622
Kyriakos Efthymiadis, D. Kudenko
StarCraft: Broodwar (SC:BW) is a very popular commercial real-time strategy (RTS) game which has been extensively used in AI research. Despite its popularity as a test-bed, reinforcement learning (RL) has not been evaluated on it extensively. A successful attempt was made to show the use of RL in a small-scale combat scenario involving an overpowered agent battling against multiple enemy units [1]. However, the chosen scenario was very small and not representative of the complexity of the game in its entirety. In order to build an RL agent that can manage the complexity of the full game, more efficient approaches must be used to tackle the state-space explosion. In this paper, we demonstrate how plan-based reward shaping can help an agent scale up to larger, more complex scenarios and significantly speed up the learning process, and how high-level planning can be combined with learning, focusing on the StarCraft strategy Battlecruiser Rush. We empirically show that the agent with plan-based reward shaping is significantly better, both in terms of the learnt policy and convergence speed, than baseline approaches, which fail to reach a good enough policy within a practical amount of time.
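A minimal sketch of plan-based reward shaping as potential-based shaping, assuming the potential Phi(s) counts completed plan steps (the plan steps and scaling below are invented, not the authors' SC:BW setup):

```python
# Potential-based, plan-based reward shaping: the shaping potential is how
# far along a high-level plan the agent has progressed. Plan steps and the
# scale factor are invented for illustration.

PLAN = ["build_barracks", "build_starport", "build_battlecruiser", "attack"]

def potential(state_flags, scale=10.0):
    """Phi(s): number of consecutive plan steps already achieved in s."""
    steps_done = 0
    for step in PLAN:
        if not state_flags.get(step, False):
            break
        steps_done += 1
    return scale * steps_done

def shaped_reward(r, s, s_next, gamma=0.99):
    """r' = r + gamma * Phi(s') - Phi(s); preserves the optimal policy."""
    return r + gamma * potential(s_next) - potential(s)

s = {"build_barracks": True}
s_next = {"build_barracks": True, "build_starport": True}
print(shaped_reward(0.0, s, s_next))  # extra reward for plan progress
```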
Citations: 24
Creating large numbers of game AIs by learning behavior for cooperating units
Pub Date: 2013-10-17 DOI: 10.1109/CIG.2013.6633608
Stephen Wiens, J. Denzinger, Sanjeev Paskaradevan
We present two improvements to the hybrid learning method for the shout-ahead architecture for units in the game Battle for Wesnoth. The shout-ahead architecture lets units perform decision making in two stages: first determining an action without knowledge of the intentions of other units, then, after communicating the intended action and likewise receiving the intentions of the other units, taking these intentions into account for the final decision on the next action. The decision making uses two rule sets; reinforcement learning is used to learn rule weights (which influence decision making), while evolutionary learning is used to evolve good rule sets. Our improvements add knowledge about terrain to the learning and also evaluate unit behaviors on several scenario maps to learn more general rules. The use of terrain knowledge improved the win percentage of evolved teams by 3 to 14 percentage points, depending on the map, while learning from several maps yielded win percentages on unseen maps nearly matching those on the maps learned from.
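An illustrative sketch of the two-stage shout-ahead cycle, with toy rule sets and hand-set weights standing in for the RL-learned and evolved rule sets described above:

```python
# Two-stage shout-ahead decision cycle: each unit announces an intended
# action, then decides for real once it knows the other units' intentions.
# Rules and weights are toy stand-ins, not the paper's learned rule sets.

def pick(rules, context):
    """Weighted vote over all rules whose condition matches the context."""
    votes = {}
    for condition, action, weight in rules:
        if condition(context):
            votes[action] = votes.get(action, 0.0) + weight
    return max(votes, key=votes.get)

# Stage 1 rules: use only the unit's own observations.
STAGE1 = [
    (lambda c: c["enemy_near"], "attack", 0.9),
    (lambda c: not c["enemy_near"], "advance", 0.6),
]
# Stage 2 rules: may also consult the shouted intentions of the other units.
STAGE2 = [
    (lambda c: c["intentions"].count("attack") >= 2, "attack", 1.0),
    (lambda c: c["intentions"].count("attack") < 2, "retreat", 0.8),
]

units = [{"enemy_near": True}, {"enemy_near": True}, {"enemy_near": False}]
intentions = [pick(STAGE1, u) for u in units]                   # shout phase
final = [pick(STAGE2, {**u, "intentions": intentions}) for u in units]
print(intentions, final)
# ['attack', 'attack', 'advance'] ['attack', 'attack', 'attack']
```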
Citations: 5
LGOAP: Adaptive layered planning for real-time videogames
Pub Date: 2013-10-17 DOI: 10.1109/CIG.2013.6633624
G. Maggiore, Carlos Santos, D. Dini, Frank Peters, Ha Bouwknegt, P. Spronck
One of the main aims of game AI research is the building of challenging and believable artificial opponents that act as if capable of strategic thinking. In this paper we describe a novel mechanism that successfully endows NPCs in real-time games with strategic planning capabilities. Our approach creates adaptive behaviours that take into account long-term and short-term consequences. Our approach is unique in that: (i) it is sufficiently fast to be used for hundreds of agents in real time; (ii) it is flexible, requiring no previous knowledge of the playing field; and (iii) it allows customization of the agents in order to generate differentiated behaviours that derive from virtual personalities.
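A compact sketch of the layered-planning idea under simplifying assumptions: a strategic layer expands a goal into abstract steps, and a tactical layer refines each step into primitive actions (the domains below are invented for illustration, not the paper's LGOAP planner):

```python
# Layered goal-oriented action planning, reduced to two invented layers:
# a strategic layer produces abstract steps, and a tactical layer refines
# each step into primitive, executable actions.

STRATEGIC = {  # goal -> abstract steps
    "win_skirmish": ["gather_squad", "engage"],
}
TACTICAL = {   # abstract step -> primitive actions
    "gather_squad": ["move_to_rally", "wait_for_units"],
    "engage": ["focus_fire_nearest", "retreat_if_low_hp"],
}

def plan(goal):
    """Expand the goal layer by layer until only primitive actions remain."""
    steps = STRATEGIC.get(goal, [goal])
    primitive = []
    for step in steps:
        primitive.extend(TACTICAL.get(step, [step]))
    return primitive

print(plan("win_skirmish"))
# ['move_to_rally', 'wait_for_units', 'focus_fire_nearest', 'retreat_if_low_hp']
```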
Citations: 3
Automatic generation and analysis of physics-based puzzle games
Pub Date: 2013-10-17 DOI: 10.1109/CIG.2013.6633633
M. Shaker, Mhd Hasan Sarhan, Ola Al Naameh, Noor Shaker, J. Togelius
In this paper we present a method for the automatic generation of content for the physics-based puzzle game Cut The Rope. An evolutionary game generator is implemented which evolves level designs based on a context-free grammar. We present various measures for analyzing the expressivity of the generator and visualizing the space of content covered. We further perform an experiment on evolving playable content for the game and present and analyze the results obtained.
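A small sketch of grammar-based level generation, assuming an invented context-free grammar (the paper's grammar encodes Cut The Rope components such as ropes and bubbles):

```python
# Derivation-based level generation: a context-free grammar describes valid
# layouts, and a random derivation yields one candidate level. The grammar
# below is invented for illustration.
import random

GRAMMAR = {
    "<level>": [["<component>", "<level>"], ["<component>"]],
    "<component>": [["rope(", "<x>", ")"], ["bubble(", "<x>", ")"],
                    ["bumper(", "<x>", ")"]],
    "<x>": [["1"], ["2"], ["3"]],
}

def derive(symbol, rng):
    """Recursively expand a symbol using randomly chosen productions."""
    if symbol not in GRAMMAR:  # terminal token
        return [symbol]
    out = []
    for s in rng.choice(GRAMMAR[symbol]):
        out.extend(derive(s, rng))
    return out

rng = random.Random(42)
print("".join(derive("<level>", rng)))  # e.g. "bubble(2)rope(1)"
```

In an evolutionary setting, the random choices made during derivation would form the genotype, so mutation and crossover operate on derivation decisions rather than raw level geometry.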
Citations: 49
Analytics-driven dynamic game adaption for player retention in Scrabble
Pub Date: 2013-10-17 DOI: 10.1109/CIG.2013.6633632
Brent E. Harrison, D. Roberts
This paper shows how game analytics can be used in conjunction with an adaptive system in order to increase player retention at the level of individual game sessions in Scrabblesque, a Flash game based on the popular board game Scrabble. We use game analytic knowledge to create a simplified search space (called the game analytic space) of board states. We then target a distribution of game analytic states that is predictive of players playing a complete game session of Scrabblesque in order to increase player retention. Our adaptive system then has a computer-controlled AI opponent take moves that help realize this distribution of game analytic states, with the ultimate goal of reducing the quitting rate. We test this system with a user study comparing how many people quit the adaptive version of Scrabblesque early against how many quit a non-adaptive version early. We also compare how well the adaptive version of Scrabblesque was able to influence player behavior as described by game analytics. Our results show that our adaptive system produces a significant reduction in the quitting rate (p = 0.03) compared to the non-adaptive version. In addition, the adaptive version of Scrabblesque is able to better fit a target distribution of game analytic states than the non-adaptive version.
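An illustrative sketch of the adaptation loop: the AI picks the move whose predicted game-analytic state steers the observed state distribution toward a target distribution. The state space, target, predictor, and move candidates below are invented placeholders, not the paper's model:

```python
# Move selection that steers the session's game-analytic state distribution
# toward a target associated with players finishing the session. All states,
# scores, and the target distribution are invented for illustration.

TARGET = {"close_game": 0.6, "player_ahead": 0.3, "ai_ahead": 0.1}

def predicted_state(score_diff_after_move):
    """Map a crisp score difference to a simplified game-analytic state."""
    if abs(score_diff_after_move) <= 10:
        return "close_game"
    return "player_ahead" if score_diff_after_move > 0 else "ai_ahead"

def choose_move(candidate_moves, history):
    """Pick the move minimizing distance to TARGET over observed states."""
    best_move, best_err = None, float("inf")
    for move, score_diff in candidate_moves:
        states = history + [predicted_state(score_diff)]
        freq = {s: states.count(s) / len(states) for s in TARGET}
        err = sum((freq[s] - TARGET[s]) ** 2 for s in TARGET)
        if err < best_err:
            best_move, best_err = move, err
    return best_move

history = ["ai_ahead", "ai_ahead", "close_game"]
moves = [("play_QI_for_62", -25), ("play_CAT_for_12", 4), ("pass", -8)]
print(choose_move(moves, history))  # -> play_CAT_for_12 (keeps the game close)
```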
Citations: 19
Give me a reason to dig Minecraft and psychology of motivation
Pub Date: 2013-10-17 DOI: 10.1109/CIG.2013.6633612
Alessandro Canossa, Josep B. Martinez, J. Togelius
Recently, both game industry professionals and academic researchers have started focusing on player-generated behavioral data as a means to gather insights on player psychology through data mining. Although some research has already shown solid correlations between in-game behavior and personality, most techniques focus on extracting knowledge from in-game behavior data alone. This paper posits that triangulating exclusively behavioral datasets with established theoretical frameworks, serving as hermeneutic grids, may help extract additional meaning and information. The hermeneutic grid selected for this study is the Reiss Motivation Profiler, and it is applied to behavioral data gathered from Minecraft players.
Citations: 45
Portfolio greedy search and simulation for large-scale combat in StarCraft
Pub Date: 2013-10-17 DOI: 10.1109/CIG.2013.6633643
David Churchill, M. Buro
Real-time strategy video games have proven to be a very challenging area for applications of artificial intelligence research. With their vast state and action spaces and real-time constraints, existing AI solutions have been shown to be too slow, or only applicable to small problem sets, while human players still dominate RTS AI systems. This paper makes three contributions to advancing the state of AI for popular commercial RTS game combat, which can consist of battles of dozens of units. First, we present an efficient system for modelling abstract RTS combat called SparCraft, which can perform millions of unit actions per second and visualize them. We then present a modification of the UCT algorithm capable of performing search in games with simultaneous and durative actions. Finally, a novel greedy search algorithm called Portfolio Greedy Search is presented which uses hill climbing and accurate playout-based evaluations to efficiently search even the largest combat scenarios. We demonstrate that Portfolio Greedy Search outperforms state-of-the-art Alpha-Beta and UCT search methods for large StarCraft combat scenarios of up to 50 vs. 50 units under real-time search constraints of 40 ms per search episode.
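A simplified sketch of the Portfolio Greedy Search idea, with a trivial stand-in evaluation in place of SparCraft playouts: each unit is assigned a script from a small portfolio, and a greedy pass improves one unit's script at a time:

```python
# Portfolio greedy search, reduced to its core loop. The evaluation function
# is a toy stand-in; the real system evaluates candidate assignments with
# fast simulated combat playouts.
import random

PORTFOLIO = ["attack_closest", "attack_weakest", "kite"]

def playout_value(assignment, rng):
    """Stand-in evaluation; invented script values plus a little noise."""
    bonus = {"attack_closest": 1.0, "attack_weakest": 1.5, "kite": 1.2}
    return sum(bonus[s] for s in assignment) + rng.gauss(0, 0.1)

def portfolio_greedy_search(n_units, iterations=3, seed=0):
    rng = random.Random(seed)
    assignment = ["attack_closest"] * n_units   # seed with a default script
    for _ in range(iterations):
        for u in range(n_units):                # improve one unit at a time
            best = max(PORTFOLIO,
                       key=lambda s: playout_value(
                           assignment[:u] + [s] + assignment[u + 1:], rng))
            assignment[u] = best
    return assignment

print(portfolio_greedy_search(4))
```

The key property is that evaluation cost grows with the number of units times the portfolio size per pass, rather than exponentially in the joint action space, which is what makes the approach feasible for 50 vs. 50 battles.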
Citations: 124
Spatial game analytics and visualization
Pub Date: 2013-10-17 DOI: 10.1109/CIG.2013.6633629
Anders Drachen, Matthias Schubert
The recently emerged field of game analytics, and the development and adaptation of business intelligence techniques to support game design and development, has given data-driven techniques a direct role in game development. Given that all digital games contain some sort of spatial operation, techniques for spatial analysis have had their share in these developments. However, the methods for analyzing and visualizing spatial and spatio-temporal patterns in player behavior used by the game industry are not as diverse as the range of techniques utilized in game research, leaving room for continued development. This paper presents a review of current work on spatial and spatio-temporal game analytics across industry and research, describing and defining the key terminology and outlining current techniques and their application. We summarize the current problems and challenges in the field, and present four key areas of spatial and spatio-temporal analytics: spatial outlier detection, spatial clustering, spatial predictive models, and spatial pattern and rule mining. All key areas are well established outside the context of games and hold the potential to reshape the research roadmap in game analytics.
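As a taste of one of the four key areas, spatial clustering, the sketch below runs a naive DBSCAN over invented player-death coordinates to find hotspots (a production pipeline would read positions from telemetry logs):

```python
# Naive DBSCAN grouping player death positions into spatial hotspots.
# Coordinates are made up for illustration.

def dbscan(points, eps=2.0, min_pts=3):
    """Return a cluster label per point; -1 marks noise."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if (points[i][0] - q[0]) ** 2
                 + (points[i][1] - q[1]) ** 2 <= eps ** 2]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:      # not a core point: mark as noise for now
            labels[i] = -1
            continue
        labels[i] = cluster
        queue = list(nbrs)
        while queue:                 # expand the cluster from core points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:
                queue.extend(jn)
        cluster += 1
    return labels

deaths = [(1, 1), (1.5, 1.2), (2, 1), (10, 10), (10.5, 9.8), (11, 10), (30, 5)]
print(dbscan(deaths))  # [0, 0, 0, 1, 1, 1, -1]: two hotspots, one noise point
```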
Citations: 25