
Latest publications from the 2016 IEEE Conference on Computational Intelligence and Games (CIG)

Ms. Pac-Man Versus Ghost Team CIG 2016 competition
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860446
P. R. Williams, Diego Perez Liebana, S. Lucas
This paper introduces the revival of the popular Ms. Pac-Man Versus Ghost Team competition. We present an updated game engine with Partial Observability constraints, a new Multi-Agent Systems approach to developing Ghost agents, and several sample controllers to ease the development of entries. A restricted communication protocol is provided for the Ghosts, providing a more challenging environment than before. The competition will debut at the IEEE Computational Intelligence and Games Conference 2016. Some preliminary results showing the effects of Partial Observability and the benefits of simple communication are also presented.
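The restricted Ghost communication protocol under partial observability lends itself to a simple illustration. The sketch below is not the competition's actual API; the class names, the Manhattan sight radius, and the shared message list are assumptions made purely to show how a ghost might broadcast the last tile where Ms. Pac-Man was seen and fall back to patrolling when no sighting is available.

```python
import random
from dataclasses import dataclass

@dataclass
class Sighting:
    tile: tuple      # (x, y) where Ms. Pac-Man was last seen
    tick: int        # game tick of the observation

class Ghost:
    """Hypothetical ghost controller: acts only on its own partial view
    plus short messages shared by teammates (all names are illustrative)."""

    SIGHT_RADIUS = 5

    def __init__(self, name):
        self.name = name

    def observe(self, my_tile, pacman_tile, tick, inbox):
        dx = abs(my_tile[0] - pacman_tile[0])
        dy = abs(my_tile[1] - pacman_tile[1])
        if dx + dy <= self.SIGHT_RADIUS:            # Pac-Man is visible
            return Sighting(pacman_tile, tick)       # broadcast this sighting
        return max(inbox, key=lambda s: s.tick, default=None)

    def act(self, my_tile, sighting):
        if sighting is None:
            return random.choice(["UP", "DOWN", "LEFT", "RIGHT"])  # patrol
        # greedily step toward the most recent reported sighting
        tx, ty = sighting.tile
        x, y = my_tile
        if abs(tx - x) >= abs(ty - y):
            return "RIGHT" if tx > x else "LEFT"
        return "DOWN" if ty > y else "UP"

# Minimal usage: two ghosts share one message board for a single tick.
board = []
ghosts = [Ghost("Blinky"), Ghost("Pinky")]
positions = {"Blinky": (3, 4), "Pinky": (12, 1)}
pacman = (5, 5)
for g in ghosts:
    s = g.observe(positions[g.name], pacman, tick=0, inbox=board)
    if s is not None:
        board.append(s)
    print(g.name, g.act(positions[g.name], s))
```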
Citations: 28
Monte Carlo Tree Search with options for general video game playing
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860383
M. D. Waard, Diederik M. Roijers, S. Bakkes
General video game playing is a challenging research area in which the goal is to find one algorithm that can play many games successfully. “Monte Carlo Tree Search” (MCTS) is a popular algorithm that has often been used for this purpose. It incrementally builds a search tree based on observed states after applying actions. However, the MCTS algorithm always plans over actions and does not incorporate any higher level planning, as one would expect from a human player. Furthermore, although many games have similar game dynamics, often no prior knowledge is available to general video game playing algorithms. In this paper, we introduce a new algorithm called “Option Monte Carlo Tree Search” (O-MCTS). It offers general video game knowledge and high level planning in the form of “options”, which are action sequences aimed at achieving a specific subgoal. Additionally, we introduce “Option Learning MCTS” (OL-MCTS), which applies a progressive widening technique to the expected returns of options in order to focus exploration on fruitful parts of the search tree. Our new algorithms are compared to MCTS on a diverse set of twenty-eight games from the general video game AI competition. Our results indicate that by using MCTS's efficient tree searching technique on options, O-MCTS outperforms MCTS on most of the games, especially those in which a certain subgoal has to be reached before the game can be won. Lastly, we show that OL-MCTS improves its performance on specific games by learning expected values for options and moving a bias to higher valued options.
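The key idea in O-MCTS, searching over options (action sequences aimed at subgoals) rather than single actions, can be shown in miniature with a bandit-style selection over a fixed option set. This is only a schematic sketch: the option table, the `simulate_option` stand-in for the game's forward model, and the reward values are invented, and the full tree construction as well as OL-MCTS's progressive widening are omitted.

```python
import math
import random

# Each "option" is a named macro-action: a sequence of primitive actions
# intended to reach some subgoal (definitions here are placeholders).
OPTIONS = {
    "go_to_exit":  ["LEFT", "LEFT", "UP"],
    "grab_key":    ["RIGHT", "DOWN", "USE"],
    "avoid_enemy": ["UP", "UP"],
}

def simulate_option(name):
    """Stand-in for rolling the option forward in the game's forward model;
    returns a noisy return in [0, 1]."""
    base = {"go_to_exit": 0.6, "grab_key": 0.8, "avoid_enemy": 0.4}[name]
    return max(0.0, min(1.0, random.gauss(base, 0.2)))

def ucb_select(stats, total_visits, c=1.4):
    """UCB1 over options: exploit mean return, explore rarely tried options."""
    def score(name):
        n, total = stats[name]
        if n == 0:
            return float("inf")
        return total / n + c * math.sqrt(math.log(total_visits) / n)
    return max(stats, key=score)

stats = {name: (0, 0.0) for name in OPTIONS}   # option -> (visits, sum of returns)
for t in range(1, 201):
    chosen = ucb_select(stats, t)
    r = simulate_option(chosen)
    n, total = stats[chosen]
    stats[chosen] = (n + 1, total + r)

for name, (n, total) in stats.items():
    print(f"{name}: visits={n}, mean return={total / n:.2f}")
```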
Citations: 13
Monte-Carlo simulation balancing revisited
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860411
Tobias Graf, M. Platzner
Simulation Balancing is an optimization algorithm to automatically tune the parameters of a playout policy used inside a Monte Carlo Tree Search. The algorithm fits a policy so that the expected result of a policy matches given target values of the training set. Up to now it has been successfully applied to Computer Go on small 9 × 9 boards but failed for larger board sizes like 19 × 19. On these large boards apprenticeship learning, which fits a policy so that it closely follows an expert, continues to be the algorithm of choice. In this paper we introduce several improvements to the original simulation balancing algorithm and test their effectiveness in Computer Go. The proposed additions remove the necessity to generate target values by deep searches, optimize faster and make the algorithm less prone to overfitting. The experiments show that simulation balancing improves the playing strength of a Go program using apprenticeship learning by more than 200 ELO on the large board size 19 × 19.
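At its core, simulation balancing adjusts playout-policy parameters so that the expected playout result matches a target value. The sketch below shows one plausible form of this: a softmax policy over two toy moves, updated with a REINFORCE-style gradient of the squared error between the mean playout result and the target. The features, outcome model, and learning rate are assumptions; this is not the authors' exact algorithm.

```python
import math
import random

import numpy as np

# Toy playout policy: softmax over two "moves", each described by a feature vector.
FEATURES = {"safe_move": np.array([1.0, 0.0]),
            "risky_move": np.array([0.0, 1.0])}

def softmax_policy(theta):
    scores = {m: math.exp(theta @ f) for m, f in FEATURES.items()}
    z = sum(scores.values())
    return {m: s / z for m, s in scores.items()}

def playout(theta):
    """One playout: sample a move, return its (noisy) game result in [0, 1]."""
    probs = softmax_policy(theta)
    move = random.choices(list(probs), weights=probs.values())[0]
    result = 0.9 if move == "safe_move" else random.random()   # toy outcome model
    return move, result

def balance_step(theta, target, n_playouts=200, lr=0.5):
    """One simulation-balancing update: push the mean playout result toward
    `target` by nudging theta along a REINFORCE-style gradient estimate."""
    probs = softmax_policy(theta)
    mean_feat = sum(p * FEATURES[m] for m, p in probs.items())
    grad = np.zeros_like(theta)
    mean_result = 0.0
    for _ in range(n_playouts):
        move, result = playout(theta)
        grad += result * (FEATURES[move] - mean_feat)   # estimate of d E[result] / d theta
        mean_result += result
    grad /= n_playouts
    mean_result /= n_playouts
    # gradient descent on 0.5 * (target - E[result])^2 with respect to theta
    return theta + lr * (target - mean_result) * grad, mean_result

theta = np.zeros(2)
for step in range(50):
    theta, mean_result = balance_step(theta, target=0.6)
print("theta:", theta, "mean playout result:", round(mean_result, 2))
```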
Citations: 4
Pruning and preprocessing methods for inventory-aware pathfinding
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860417
D. Aversa, Sebastian Sardiña, S. Vassos
Inventory-Aware Pathfinding is concerned with finding paths while taking into account that picking up items, e.g., keys, allows the character to unlock blocked pathways, e.g., locked doors. In this work we present a pruning method and a preprocessing method that can significantly improve the scalability of such approaches. We apply our methods to the recent approach of Inventory-Driven Jump-Point Search (InvJPS). First, we introduce InvJPS+, which prunes large parts of the search space by favoring short detours to pick up items, offering a trade-off between efficiency and optimality. Second, we propose a preprocessing step that decides at runtime which items, e.g., keys, are worth using, thus pruning potentially unnecessary items before the search starts. We show results for combinations of the pruning and preprocessing methods, illustrating the best choices over various scenarios.
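The underlying search space in inventory-aware pathfinding pairs a map position with the set of items collected so far, so a detour that grabs a key can later traverse a locked edge. The snippet below is a generic Dijkstra over that product space on a made-up four-node map; it illustrates the state representation only, not the jump-point-search machinery, the InvJPS+ pruning, or the preprocessing step from the paper.

```python
import heapq

# Tiny map: edges may require an item, and some nodes grant an item on arrival.
EDGES = {                       # node -> [(neighbor, cost, required_item)]
    "start":     [("hall", 1, None), ("side_room", 2, None)],
    "hall":      [("exit", 1, "key"), ("side_room", 1, None)],
    "side_room": [("hall", 1, None)],
    "exit":      [],
}
ITEM_AT = {"side_room": "key"}   # the key in the side room unlocks the exit door

def pickup(node, inventory):
    item = ITEM_AT.get(node)
    return inventory | {item} if item else inventory

def inventory_dijkstra(start, goal):
    """Dijkstra over (position, inventory) states: a detour that collects an
    item reaches a new state, so it is explored rather than discarded."""
    start_state = (start, pickup(start, frozenset()))
    frontier = [(0, start_state)]
    best = {start_state: 0}
    while frontier:
        cost, (node, inv) = heapq.heappop(frontier)
        if node == goal:
            return cost
        if cost > best[(node, inv)]:
            continue
        for nxt, step, needed in EDGES[node]:
            if needed is not None and needed not in inv:
                continue                                  # locked edge, item missing
            state = (nxt, pickup(nxt, inv))
            new_cost = cost + step
            if new_cost < best.get(state, float("inf")):
                best[state] = new_cost
                heapq.heappush(frontier, (new_cost, state))
    return None

print(inventory_dijkstra("start", "exit"))   # 4: detour via side_room to grab the key
```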
Citations: 0
Heuristics for sleep and heal in combat
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860401
Shuo Xu, Clark Verbrugge
Basic attack and defense actions in games are often extended by more powerful actions, including the ability to temporarily incapacitate an enemy through sleep or stun, the ability to restore health through healing, and others. Use of these abilities can have a dramatic impact on combat outcome, and so is typically strongly limited. This implies a non-trivial decision process, and for an AI to effectively use these actions it must consider the potential benefit, opportunity cost, and the complexity of choosing an appropriate target. In this work we develop a formal model to explore optimized use of sleep and heal in small-scale combat scenarios. We consider different heuristics that can guide the use of such actions; experimental work based on Pokémon combats shows that significant improvements are possible over the basic, greedy strategies commonly employed by AI agents. Our work allows for better performance by companion and enemy AIs, and also gives guidance to game designers looking to incorporate advanced combat actions without overly unbalancing combat.
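As a toy illustration of why a greedy "always attack" rule can be beaten, the helper below scores attack, heal, and sleep by the swing each creates in the hit-point difference. All numbers and scoring formulas are invented and are not the paper's formal model.

```python
from dataclasses import dataclass

@dataclass
class Fighter:
    hp: int
    max_hp: int
    attack: int        # expected damage per hit

def choose_action(me, enemy, heals_left, sleeps_left,
                  heal_amount=40, sleep_turns=2):
    """Toy heuristic (invented numbers): score each available action by the
    swing it creates in (my effective HP) minus (enemy effective HP)."""
    scores = {"attack": me.attack}                         # damage dealt right now
    if heals_left > 0:
        missing = me.max_hp - me.hp
        # healing is worth the HP actually restored, minus the hit taken meanwhile
        scores["heal"] = min(heal_amount, missing) - enemy.attack
    if sleeps_left > 0:
        # sleep denies the enemy `sleep_turns` attacks while we keep attacking,
        # at the cost of the attack we skip this turn
        scores["sleep"] = sleep_turns * (enemy.attack + me.attack) - me.attack
    return max(scores, key=scores.get)

# Example: a badly hurt player facing a hard hitter prefers sleep, then heal.
me = Fighter(hp=25, max_hp=100, attack=15)
enemy = Fighter(hp=80, max_hp=80, attack=20)
print(choose_action(me, enemy, heals_left=1, sleeps_left=1))   # -> 'sleep'
print(choose_action(me, enemy, heals_left=1, sleeps_left=0))   # -> 'heal'
```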
Citations: 1
Personalised track design in car racing games
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860435
Theodosis Georgiou, Y. Demiris
Real-time adaptation of a computer game's content to the user's skills and abilities can enhance the player's engagement and immersion. Understanding the user's potential while playing is of high importance for the successful procedural generation of user-tailored content. We investigate how player models can be created in car racing games. Our user model combines data from unobtrusive sensors collected while the user is playing a car racing simulator. It extracts features through machine learning techniques, which are then used to comprehend the user's gameplay by utilising the educational theoretical frameworks of the Concept of Flow and the Zone of Proximal Development. The end result is to provide, at a next stage, a new track that fits the user's needs, which aids both the training of the driver and their engagement in the game. To validate that the system designs personalised tracks, we associated the average performance of 41 users that played the game with the difficulty factor of the generated track. In addition, the variation in paths of the implemented tracks between users provides a good indicator of the suitability of the system.
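To make the validation step concrete (relating players' average performance to the difficulty factor of the track generated for them), here is a minimal sketch on made-up numbers; it also shows one naive way to pick the next track's difficulty just above a player's estimated skill, as a crude stand-in for targeting the flow / zone-of-proximal-development band. Nothing here reflects the authors' sensor pipeline or models, and `statistics.correlation` requires Python 3.10 or newer.

```python
import statistics

# Made-up validation data: (player's average performance in [0, 1],
# difficulty factor of the track generated for that player).
samples = [(0.82, 0.35), (0.64, 0.55), (0.47, 0.72), (0.30, 0.88), (0.55, 0.60)]

perf = [p for p, _ in samples]
diff = [d for _, d in samples]
print("performance vs. generated difficulty: r =",
      round(statistics.correlation(perf, diff), 2))

def next_track_difficulty(skill_estimate, band=0.15):
    """Pitch the next track slightly above the player's demonstrated skill,
    clamped to [0, 1]; a crude stand-in for keeping the player in flow."""
    return max(0.0, min(1.0, skill_estimate + band))

print(round(next_track_difficulty(0.47), 2))   # -> 0.62
```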
Citations: 4
An integrated process for game balancing
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860425
Marlene Beyer, Aleksandr Agureikin, Alexander Anokhin, Christoph Laenger, Felix Nolte, Jonas Winterberg, Marcel Renka, Martin Rieger, Nicolas Pflanzl, M. Preuss, Vanessa Volz
Game balancing is a recurring problem that currently requires a lot of manual work, usually following a game designer's intuition or rules-of-thumb. To what extent can or should the balancing process be automated? We establish a process model that integrates both manual and automated balancing approaches. Artificial agents are employed to automatically assess the desirability of a game. We demonstrate the feasibility of implementing the model and analyze the resulting solutions from its application to a simple video game.
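The automated half of such a process can be pictured as a loop: let artificial agents play under the current parameters, score how desirable the outcome is, and adjust. The hill-climbing sketch below, with its made-up win-rate simulator and 50% win-rate target, is only one illustrative instantiation, not the process model proposed in the paper.

```python
import random

def simulate_matches(unit_a_power, unit_b_power, n=500):
    """Stand-in for letting artificial agents play the game: returns the
    win rate of side A under the current balance parameters."""
    wins = 0
    for _ in range(n):
        a = unit_a_power * random.random()
        b = unit_b_power * random.random()
        wins += a > b
    return wins / n

def desirability(win_rate, target=0.5):
    """A balanced matchup is one whose win rate sits near the target."""
    return -abs(win_rate - target)

def auto_balance(unit_a_power, unit_b_power, steps=20, delta=0.5):
    """Greedy hill climbing on one tunable parameter (unit A's power)."""
    best = unit_a_power
    best_score = desirability(simulate_matches(best, unit_b_power))
    for _ in range(steps):
        candidate = best + random.uniform(-delta, delta)
        score = desirability(simulate_matches(candidate, unit_b_power))
        if score > best_score:
            best, best_score = candidate, score
    return best

print("tuned power for unit A:", round(auto_balance(14.0, 10.0), 2))
```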
Citations: 12
Biometrics and classifier fusion to predict the fun-factor in video gaming
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860418
Andrea Clerico, Cindy Chamberland, Mark Parent, P. Michon, S. Tremblay, T. Falk, Jean-Christophe Gagnon, P. Jackson
The key to the development of adaptive gameplay is the capability to monitor and predict in real time the player's experience (or, herein, fun factor). To achieve this goal, we rely on biometrics and machine learning algorithms to capture a physiological signature that reflects the player's affective state during the game. In this paper, we report research and development efforts on the real-time monitoring of the player's level of fun during a commercially available video game session using physiological signals. The use of a triple-classifier system allows the transformation of players' physiological responses and their fluctuations into a single yet multifaceted measure of fun, using non-linear gameplay. Our results suggest that cardiac and respiratory activities provide the best predictive power. Moreover, the level of performance reached when classifying the level of fun (70% accuracy) shows that the use of machine learning approaches with physiological measures can contribute to predicting player experience in an objective manner.
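A triple-classifier fusion of physiological channels can be illustrated with generic late fusion: one classifier per signal, with predicted probabilities averaged. The synthetic features, the logistic-regression models, and the averaging rule below are placeholders rather than the study's actual pipeline (scikit-learn is assumed to be installed).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic sessions: one feature per physiological channel plus a binary
# "high fun" label; higher labels shift each channel's signal upward.
n = 300
labels = rng.integers(0, 2, n)
channels = {
    "cardiac":       rng.normal(labels * 0.8, 1.0).reshape(-1, 1),
    "respiratory":   rng.normal(labels * 0.6, 1.0).reshape(-1, 1),
    "electrodermal": rng.normal(labels * 0.3, 1.0).reshape(-1, 1),
}

# Late fusion: train one classifier per channel on the first 200 sessions,
# then average the predicted probabilities of "high fun" on the rest.
train, test = slice(0, 200), slice(200, None)
models = {name: LogisticRegression().fit(X[train], labels[train])
          for name, X in channels.items()}

fused = np.mean([models[name].predict_proba(X[test])[:, 1]
                 for name, X in channels.items()], axis=0)
accuracy = np.mean((fused > 0.5) == labels[test])
print(f"fused accuracy on held-out sessions: {accuracy:.2f}")
```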
Citations: 20
Hierarchical Task Network Plan Reuse for video games
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860395
Dennis J. N. J. Soemers, M. Winands
Hierarchical Task Network Planning is an Automated Planning technique used, among other domains, in Artificial Intelligence for video games. Generated plans cannot always be fully executed, for example due to nondeterminism or imperfect information. In such cases, it is often desirable to re-plan. This is typically done completely from scratch, or done using techniques that require conditions and effects of tasks to be defined in a specific format (typically based on First-Order Logic). In this paper, an approach for Plan Reuse is proposed that uses a similarity function to manipulate the order in which the search tree is traversed. It is tested in the SimpleFPS domain, which simulates a First-Person Shooter game, and shown to be capable of finding (optimal) plans with less search effort on average when re-planning for variations of previously solved problems.
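The reuse mechanism, reordering which decomposition an HTN planner tries first according to similarity with a previously executed plan, can be shown without any planning library. The toy method table, the overlap-based similarity function, and the backtracking-free expansion below are all invented for illustration and are far simpler than the SimpleFPS domain used in the paper.

```python
# Method table: each compound task can be decomposed in several ways,
# listed here as flat sequences of primitive tasks (a toy domain).
METHODS = {
    "clear_room": [
        ["open_door", "throw_grenade", "enter"],
        ["open_door", "enter", "shoot"],
    ],
    "get_ready": [
        ["reload", "take_cover"],
        ["heal", "take_cover"],
    ],
}

def similarity(decomposition, previous_plan):
    """Fraction of a candidate decomposition's primitives that also appear
    in the previously executed plan."""
    prev = set(previous_plan)
    return sum(task in prev for task in decomposition) / len(decomposition)

def plan(tasks, previous_plan):
    """Depth-first HTN-style expansion that tries the decomposition most
    similar to the previous plan first (the reordering idea, in toy form)."""
    result = []
    for task in tasks:
        if task not in METHODS:              # primitive task: keep as-is
            result.append(task)
            continue
        ranked = sorted(METHODS[task],
                        key=lambda d: similarity(d, previous_plan),
                        reverse=True)
        result.extend(plan(ranked[0], previous_plan))
    return result

old_plan = ["reload", "take_cover", "open_door", "enter", "shoot"]
print(plan(["get_ready", "clear_room"], old_plan))
# -> ['reload', 'take_cover', 'open_door', 'enter', 'shoot']
```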
Citations: 5
Altruistic punishment can help resolve tragedy of the commons social dilemmas
Pub Date : 2016-09-01 DOI: 10.1109/CIG.2016.7860402
G. Greenwood
Social dilemmas force individuals to choose between cooperation, which benefits a group, and defection which benefits the individual. The unfortunate outcome in most social dilemmas is mutual defection where nobody benefits. Researchers frequently use mathematical games such as public goods games to help identify circumstances that might improve cooperation levels within a population. Altruistic punishment has shown promise in these games. Many real-world social dilemmas are expressed via a tragedy of the commons metaphor. This paper describes an investigation designed to see if altruistic punishment might work in tragedy of the commons social dilemmas. Simulation results indicate not only does it help resolve a tragedy of the commons but it also effectively deals with the associated first-order and second-order free rider problems.
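A public goods game with altruistic punishment takes only a few lines to state. The endowment, multiplier, fine, and punishment cost below are generic textbook-style numbers, not the parameters used in the paper's simulations.

```python
def public_goods_round(contributes, punishes, endowment=10, multiplier=1.6,
                       fine=6, punish_cost=1):
    """One round: cooperators pay `endowment` into a pot that is multiplied and
    shared equally; punishers then pay `punish_cost` per defector to fine them."""
    n = len(contributes)
    pot = multiplier * endowment * sum(contributes)
    share = pot / n
    defectors = [i for i, c in enumerate(contributes) if not c]
    payoffs = []
    for i in range(n):
        pay = share + (0 if contributes[i] else endowment)  # defectors keep their endowment
        if punishes[i]:
            pay -= punish_cost * len(defectors)             # punishing others is costly
        if not contributes[i]:
            pay -= fine * sum(punishes)                     # each punisher fines each defector
        payoffs.append(round(pay, 2))
    return payoffs

# Four players: two cooperating punishers, one cooperating non-punisher
# (a second-order free rider), and one defector.
print(public_goods_round(contributes=[1, 1, 1, 0],
                         punishes=[1, 1, 0, 0]))
# -> [11.0, 11.0, 12.0, 10.0]
```

With these numbers the defector ends up worst off, while the cooperator who never punishes does best, which is exactly the second-order free-rider problem the abstract refers to.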
Citations: 3