
Latest publications from the 2016 IEEE Conference on Computational Intelligence and Games (CIG)

Transfer learning for cross-game prediction of player experience
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860415
Noor Shaker, Mohamed Abou-Zleikha
Several studies on cross-domain users' behaviour have revealed generic personality traits and behavioural patterns. This paper proposes quantitative approaches that use knowledge of player behaviour in one game to seed the process of building player experience models in another. We investigate two settings: in the supervised feature mapping method, we use labelled datasets about players' behaviour in two games. The goal is to establish a mapping between the features so that models built on one dataset can be used on the other by simple feature replacement. For the unsupervised transfer learning scenario, our goal is to find a shared space of correlated features based on unlabelled data. The features in the shared space are then used to construct models for one game that work directly on the transferred features of the other game. We implemented and analysed the two approaches, and we show that transferring knowledge of player experience between domains is indeed possible and ultimately useful when studying players' behaviour and when designing user studies.
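The unsupervised setting described above — finding a shared space of correlated features from unlabelled data — can be illustrated with canonical correlation analysis (CCA). The sketch below is not the authors' implementation: the feature dimensions, the use of scikit-learn's CCA, and the assumption of paired observations (e.g. the same players logged in both games) are all illustrative choices.

```python
# Minimal sketch of cross-game transfer through a shared correlated space.
# Data shapes and the downstream model are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_game_a = rng.normal(size=(200, 12))    # behaviour features logged in game A
X_game_b = rng.normal(size=(200, 9))     # behaviour features logged in game B (same players)
y_game_a = rng.integers(0, 2, size=200)  # experience labels available for game A only

# 1) Learn a shared space from paired, unlabelled feature matrices.
cca = CCA(n_components=4)
cca.fit(X_game_a, X_game_b)
Za, Zb = cca.transform(X_game_a, X_game_b)

# 2) Train an experience model on game A's shared-space features ...
model = LogisticRegression().fit(Za, y_game_a)

# 3) ... and apply it directly to game B's projection into the same space.
print(model.predict(Zb)[:10])
```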
Citations: 10
Evolutionary deckbuilding in Hearthstone
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860426
P. García-Sánchez, A. Tonda, Giovanni Squillero, A. García, J. J. M. Guervós
One of the most notable features of collectible card games is deckbuilding, that is, defining a personalized deck before the real game. Deckbuilding is a challenging task: it involves a large and rugged search space, where simple card changes can lead to different and unpredictable behaviour, and hidden information adds further uncertainty. In this paper, we explore the possibility of automated deckbuilding: a genetic algorithm is applied to the task, with the evaluation delegated to a game simulator that tests every potential deck against a varied and representative range of human-made decks. In these preliminary experiments, the approach has proven able to create quite effective decks, a promising result showing that, even in this challenging environment, evolutionary algorithms can find good solutions.
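As a rough illustration of the evolutionary loop sketched in the abstract, the snippet below evolves fixed-size decks of card IDs with tournament selection, one-point crossover, and random-card mutation. The deck size, card pool, and the `simulate_win_rate` stub standing in for the game simulator are assumptions, not details of the paper's setup.

```python
# Minimal genetic-algorithm sketch for deckbuilding. The fitness function is a
# stub; in the paper, fitness comes from a simulator playing the candidate deck
# against a range of human-made decks.
import random

CARD_POOL = list(range(200))   # assumed card IDs
DECK_SIZE = 30

def simulate_win_rate(deck):
    # Placeholder: a real implementation would run game simulations here.
    return sum(deck) / (DECK_SIZE * len(CARD_POOL))

def random_deck():
    return [random.choice(CARD_POOL) for _ in range(DECK_SIZE)]

def tournament(population, k=3):
    return max(random.sample(population, k), key=simulate_win_rate)

def crossover(a, b):
    cut = random.randrange(1, DECK_SIZE)
    return a[:cut] + b[cut:]

def mutate(deck, rate=0.05):
    return [random.choice(CARD_POOL) if random.random() < rate else c for c in deck]

population = [random_deck() for _ in range(50)]
for generation in range(20):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(len(population))]
print(sorted(max(population, key=simulate_win_rate)))
```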
Citations: 42
Three types of forward pruning techniques to apply the alpha beta algorithm to turn-based strategy games
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860427
Naoyuki Sato, Kokolo Ikeda
Turn-based strategy games are interesting testbeds for developing artificial players because their rules present developers with several challenges. Currently, Monte-Carlo tree search variants are often used to address these challenges. However, we consider it worthwhile to introduce minimax search variants with pruning techniques, because turn-based strategy games are in some respects similar to chess and Shogi, where minimax variants are known to be effective. We therefore introduce three forward-pruning techniques that make it possible to apply alpha-beta search (a minimax search variant) to turn-based strategy games: fixing unit action orders, generating unit actions selectively, and limiting the number of moving units in a search. We applied the proposed pruning methods by implementing an alpha-beta-based artificial player in the Turn-based strategy Academic Package (TUBSTAP), the open platform of our institute. This player competed against first- and second-rank players from the 2016 TUBSTAP AI competition and won against the other players on five different maps, with an average winning ratio exceeding 70%.
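One of the three ideas, selective action generation, can be shown slotting into a plain alpha-beta search: the move generator orders candidate actions with a cheap heuristic and only expands the best few. The state interface (`legal_actions`, `heuristic`, `apply`, `evaluate`, `is_terminal`) below is a hypothetical abstraction for illustration, not the TUBSTAP API.

```python
# Alpha-beta with forward pruning by selective action generation: only the
# heuristically best `width` actions are expanded at each node.

def selective_actions(state, width=5):
    actions = sorted(state.legal_actions(), key=state.heuristic, reverse=True)
    return actions[:width]                      # forward pruning

def alpha_beta(state, depth, alpha, beta, maximizing, width=5):
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    if maximizing:
        value = float("-inf")
        for action in selective_actions(state, width):
            value = max(value, alpha_beta(state.apply(action), depth - 1,
                                          alpha, beta, False, width))
            alpha = max(alpha, value)
            if alpha >= beta:                   # beta cutoff
                break
        return value
    value = float("inf")
    for action in selective_actions(state, width):
        value = min(value, alpha_beta(state.apply(action), depth - 1,
                                      alpha, beta, True, width))
        beta = min(beta, value)
        if alpha >= beta:                       # alpha cutoff
            break
    return value
```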
Citations: 5
Deep Q-learning using redundant outputs in visual doom
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860387
Hyun-Soo Park, Kyung-Joong Kim
Recently, there has been growing interest in applying deep learning to the game AI domain. Among these approaches, deep reinforcement learning is the most prominent in game AI communities. In this paper, we propose the use of redundant outputs in order to adapt training progress in deep reinforcement learning. We compare our method with the standard ε-greedy strategy on the ViZDoom platform. Since the AI player must select actions based only on visual input on this platform, it is well suited to deep reinforcement learning research. Experimental results show that our proposed method achieves performance competitive with ε-greedy without parameter tuning.
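The two-page abstract leaves the mechanism brief, so the sketch below is only one possible reading: a Q-network that keeps several redundant output copies per action and averages them when selecting an action. The architecture, sizes, and the use of PyTorch are assumptions, not the paper's network or training rule.

```python
# Hypothetical illustration: a Q-network with k redundant outputs per action,
# collapsed by averaging at action-selection time.
import torch
import torch.nn as nn

class RedundantQNet(nn.Module):
    def __init__(self, n_inputs, n_actions, k=4):
        super().__init__()
        self.n_actions, self.k = n_actions, k
        self.body = nn.Sequential(nn.Linear(n_inputs, 128), nn.ReLU(),
                                  nn.Linear(128, n_actions * k))

    def forward(self, x):
        q = self.body(x).view(-1, self.n_actions, self.k)
        return q.mean(dim=2)       # one Q-value per action

net = RedundantQNet(n_inputs=64, n_actions=3)
state = torch.randn(1, 64)         # a stand-in for an encoded visual observation
print(net(state).argmax(dim=1).item())
```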
Citations: 2
Evaluating real-time strategy game states using convolutional neural networks
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860439
Marius Stanescu, Nicolas A. Barriga, Andy Hess, M. Buro
Real-time strategy (RTS) games, such as Blizzard's StarCraft, are fast-paced war simulation games in which players have to manage economies, control many dozens of units, and deal with uncertainty about opposing unit locations in real time. Even in perfect information settings, constructing strong AI systems has been difficult due to enormous state and action spaces and the lack of good state evaluation functions and high-level action abstractions. To this day, good human players still handily defeat the best RTS game AI systems, but this may change in the near future given the recent success of deep convolutional neural networks (CNNs) in computer Go, which demonstrated how networks can be used to evaluate complex game states accurately and to focus look-ahead search. In this paper we present a CNN for RTS game state evaluation that goes beyond commonly used material-based evaluations by also taking spatial relations between units into account. We evaluate the CNN's performance by comparing it with various other evaluation functions by means of tournaments played by several state-of-the-art search algorithms. We find that, despite its much slower evaluation speed, on average the CNN-based search performs significantly better than simpler but faster evaluations. These promising initial results, together with recent advances in hierarchical search, suggest that dominating human players in RTS games may not be far off.
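As a minimal sketch of this kind of evaluator, the snippet below encodes unit positions as per-player presence planes on the map grid and feeds them to a small convolutional network that outputs a win-probability-style score. The input encoding, map size, and architecture are illustrative assumptions, not the paper's network.

```python
# Sketch of a CNN state evaluator for an RTS map: the input planes encode unit
# presence and health per player, the output is a single score in (0, 1).
import torch
import torch.nn as nn

class StateEvaluator(nn.Module):
    def __init__(self, n_planes=4, map_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_planes, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * map_size * map_size, 1),
            nn.Sigmoid(),                        # probability that the player to move wins
        )

    def forward(self, planes):
        return self.head(self.features(planes))

evaluator = StateEvaluator()
batch = torch.zeros(1, 4, 32, 32)                # planes: own units, own HP, enemy units, enemy HP
print(evaluator(batch).item())
```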
Citations: 46
Design influence on player retention: A method based on time varying survival analysis
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860421
Thibault Allart, G. Levieux, M. Pierfitte, Agathe Guilloux, S. Natkin
This paper proposes a method to help understand the influence of a game's design on player retention. Using Far Cry® 4 data, we illustrate how playtime measures can be used to identify time periods in which players are more likely to stop playing. First, we show that a benchmark can easily be performed for every game available on Steam using publicly available data. Then, we show how survival analysis can help to model the influence of game variables on player retention. The game environment and player characteristics change over time, and tracking systems already store those changes, but existing models that handle time-varying covariates cannot scale to the huge datasets produced by video game monitoring. We therefore propose a model that both deals with time-varying covariates and is well suited to big datasets. As a given game variable can have a changing effect over time, we also include time-varying coefficients in our model. We used this survival analysis model to quantify the effect of Far Cry 4 weapon usage on player retention.
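To make the modelling step concrete, the sketch below fits a Cox-style survival model on a long-format table in which each row is one observation interval per player, so covariates such as weapon usage can change over time. The column names, the synthetic data, and the use of the `lifelines` library are assumptions for illustration; the paper's own estimator additionally handles time-varying coefficients and much larger datasets.

```python
# Sketch: player retention as survival analysis with time-varying covariates.
# Each row covers one interval (start, stop] for one player; `churned` marks
# whether the player stopped playing during that interval.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)
rows = []
for player_id in range(80):
    weeks_observed = int(rng.integers(2, 10))
    for week in range(weeks_observed):
        rows.append({
            "player_id": player_id,
            "start": week,
            "stop": week + 1,
            "weapon_uses": int(rng.poisson(3)),          # covariate that varies over time
            "churned": int(week == weeks_observed - 1 and rng.random() < 0.7),
        })
sessions = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(sessions, id_col="player_id", event_col="churned",
        start_col="start", stop_col="stop")
ctv.print_summary()                                      # hazard ratio for weapon_uses
```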
Citations: 9
Breeding a diversity of Super Mario behaviors through interactive evolution
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860436
Patrikk D. Sørensen, Jeppeh M. Olsen, S. Risi
Creating controllers for NPCs in video games is traditionally a challenging and time-consuming task. While automated learning methods such as neuroevolution (i.e. evolving artificial neural networks) have shown promise in this context, they often still require carefully designed fitness functions. In this paper, we show how casual users can create controllers for Super Mario Bros. through an interactive evolutionary computation (IEC) approach, without prior domain or programming knowledge. By iteratively selecting Super Mario behaviors from a set of candidates, users are able to guide evolution towards behaviors they prefer. The results of a user test show that participants are able to evolve controllers with very diverse behaviors, which would be difficult to obtain through automated approaches. Additionally, the user-evolved controllers perform as well as controllers evolved with a traditional fitness-based approach in terms of distance traveled. The results suggest that IEC is a viable alternative for designing diverse controllers for video games, one that could be extended to other games in the future.
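The core interactive loop can be sketched in a few lines: genomes (here, flat weight vectors standing in for controller networks) are shown to the user as behaviours, the user picks favourites, and the next generation is bred from those picks with mutation. The genome encoding and the `show_and_collect_picks` stub standing in for the game-playing user interface are assumptions.

```python
# Minimal interactive-evolution loop: the user, not a fitness function,
# selects the parents of each generation.
import numpy as np

rng = np.random.default_rng(0)
GENOME_SIZE = 120    # e.g. flattened weights of a small controller network (assumed)
POP_SIZE = 9         # small enough to show every candidate's behaviour to the user

def show_and_collect_picks(population):
    # Placeholder for the UI: play each controller, display the behaviours,
    # and return the indices the user selected.
    return [0, 3]    # stub selection

def mutate(genome, sigma=0.1):
    return genome + rng.normal(0.0, sigma, size=genome.shape)

population = [rng.normal(size=GENOME_SIZE) for _ in range(POP_SIZE)]
for generation in range(10):
    picks = show_and_collect_picks(population)
    parents = [population[i] for i in picks]
    # Breed the next generation only from the user's picks.
    population = [mutate(parents[i % len(parents)]) for i in range(POP_SIZE)]
```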
Citations: 9
Recovering visibility and dodging obstacles in pursuit-evasion games
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860419
Ahmed Abdelkader
Pursuit-evasion games encompass a wide range of planning problems with a variety of constraints on the motion of agents. We study the visibility-based variant where a pursuer is required to keep an evader in sight, while the evader is assumed to attempt to hide as soon as possible. This is particularly relevant in the context of video games where non-player characters of varying skill levels frequently chase after and attack the player. In this paper, we show that a simple dual formulation of the problem can be integrated into the traditional model to derive optimal strategies that tolerate interruptions in visibility resulting from motion among obstacles. Furthermore, using the enhanced model we propose a competitive procedure to maintain the optimal strategies in a dynamic environment where obstacles can change both shape and location. We prove the correctness of our algorithms and present results for different maps.
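Keeping the evader in sight is the central constraint in this setting; the sketch below is a generic grid line-of-sight test (a Bresenham-style ray walk over an obstacle grid) of the kind a pursuer controller could use to detect when visibility is lost. It is a building block chosen for illustration, not the paper's formulation, which derives optimal strategies rather than a grid heuristic.

```python
# Generic grid line-of-sight test between two cells. `grid[y][x]` is True where
# a cell is blocked by an obstacle. Illustrative building block only.

def line_of_sight(grid, x0, y0, x1, y1):
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        if grid[y][x] and (x, y) != (x0, y0):
            return False                       # an obstacle blocks the ray
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return not grid[y1][x1]

grid = [[False] * 6 for _ in range(4)]
grid[1][3] = True                              # a single obstacle cell
print(line_of_sight(grid, 0, 1, 5, 1))         # False: the obstacle breaks visibility
```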
Citations: 0
Position-based reinforcement learning biased MCTS for General Video Game Playing
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860449
C. Chu, Suguru Ito, Tomohiro Harada, R. Thawonmas
This paper proposes an application of reinforcement learning and position-based features to rollout-bias training in Monte-Carlo Tree Search (MCTS) for General Video Game Playing (GVGP). As an improvement on the Knowledge-based Fast-Evo MCTS proposed by Perez et al., the proposed method is designed both for the GVG-AI Competition and to improve the learning mechanism of the original method. The performance of the proposed method is evaluated empirically, using all games from the six training sets available in the GVG-AI Framework, and it achieves better overall scores than five other existing MCTS-based methods.
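The general idea of biasing rollouts can be sketched as a softmax policy over linear scores of position-based action features; in the paper these feature weights are additionally trained with reinforcement learning during play. The `extract_features` interface and the dummy usage below are assumptions for illustration, not the GVG-AI Framework API.

```python
# Sketch of a rollout policy biased by learned weights over position-based
# features (e.g. distances from the avatar to nearby sprites).
import math
import random

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def biased_rollout_action(state, actions, weights, extract_features):
    scores = [sum(w * f for w, f in zip(weights, extract_features(state, action)))
              for action in actions]
    return random.choices(actions, weights=softmax(scores), k=1)[0]

# Dummy usage with stand-in features:
actions = ["left", "right", "use"]
weights = [0.5, -0.2]
dummy_features = lambda state, action: [len(action), action.count("t")]
print(biased_rollout_action(None, actions, weights, dummy_features))
```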
Citations: 5
Discovering playing patterns: Time series clustering of free-to-play game data
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860442
A. Saas, Anna Guitart, Á. Periáñez
The classification of time series data is a challenge common to all data-driven fields. However, there is no agreement about which techniques are most efficient for grouping unlabeled time-ordered data, because a successful classification of time series patterns depends on the goal and the domain of interest, i.e. it is application-dependent. In this article, we study free-to-play game data. In this domain, clustering similar time series is increasingly important due to the large amount of data collected by current mobile and web applications. We evaluate which methods accurately cluster time series from mobile games, focusing on player behavior data. We identify and validate several aspects of the clustering: the similarity measures and the representation techniques used to reduce the high dimensionality of time series. As a robustness test, we compare several temporal datasets of player activity from two free-to-play video games. With these techniques we extract temporal patterns of player behavior that are relevant for the evaluation of game events and game-business diagnosis. Our experiments provide intuitive visualizations to validate the results of the clustering and to determine the optimal number of clusters. Additionally, we assess the common characteristics of players belonging to the same group. This study allows us to improve the understanding of player dynamics and churn behavior.
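A small sketch of the clustering step: per-player daily playtime series are z-normalised so that clusters reflect the shape of activity rather than its volume, and then grouped with k-means. The series length, the number of clusters, and the use of plain Euclidean k-means from scikit-learn are assumptions for illustration; the paper compares several similarity measures and representation techniques beyond this.

```python
# Sketch: cluster per-player playtime series. Plain k-means on z-normalised
# series stands in for the richer set of measures compared in the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
playtime = rng.gamma(shape=2.0, scale=10.0, size=(500, 30))   # 500 players x 30 days (synthetic)

# z-normalise each player's series so clusters capture temporal shape
z = (playtime - playtime.mean(axis=1, keepdims=True)) / (playtime.std(axis=1, keepdims=True) + 1e-8)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(z)
for label in range(4):
    members = z[kmeans.labels_ == label]
    print(f"cluster {label}: {len(members)} players, mean day-1 activity {members[:, 0].mean():.2f}")
```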
Citations: 33