Pub Date: 2015-06-01 | DOI: 10.1109/TCIAIG.2014.2337889
Jorge A. Baier, A. Botea, Daniel Damir Harabor, Carlos Hernández
In moving target search, the objective is to guide a hunter agent to catch a moving prey. Even though in game applications maps are always available at development time, current approaches to moving target search do not exploit preprocessing to improve search performance. In this paper, we propose MtsCopa, an algorithm that exploits precomputed information in the form of compressed path databases (CPDs) and that is able to guide a hunter agent in both known and partially known terrain. CPDs have previously been used in standard, fixed-target pathfinding, but had not been used in the context of moving target search. We evaluated MtsCopa over standard game maps. Our speed results are orders of magnitude better than the current state of the art. The time per individual move is improved, which is important in real-time search scenarios, where the time available to make a move is limited. Compared to the state of the art, the number of hunter moves is often better and otherwise comparable, since CPDs provide optimal moves along shortest paths. Compared to previous successful methods, such as I-ARA*, our method is simple to understand and implement. In addition, we prove that MtsCopa always guides the agent to catch the prey when possible.
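The CPD idea the abstract relies on can be illustrated with a toy, uncompressed first-move table on a 4-connected grid. This is a sketch of the concept only, not the authors' MtsCopa implementation: a real CPD compresses the table heavily, and the grid, prey script, and helper names below are invented for illustration.

```python
from collections import deque

def first_move_table(grid):
    """For every (source, target) pair of free cells, record the first move on
    a shortest path. A real CPD compresses this table; here it is kept raw."""
    free = {(r, c) for r, row in enumerate(grid) for c, v in enumerate(row) if v == 0}
    table = {}
    for target in free:
        dist = {target: 0}      # BFS outward from the target labels every
        q = deque([target])     # cell with its best next step toward it
        while q:
            cur = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if nxt in free and nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    table[(nxt, target)] = cur
                    q.append(nxt)
    return table

def chase(table, hunter, prey_path):
    """Alternate turns: one table lookup per prey position, no per-move search."""
    for step, prey in enumerate(prey_path):
        if hunter == prey:
            return step
        hunter = table[(hunter, prey)]
        if hunter == prey:
            return step + 1
    return None
```

Because every lookup returns an optimal first move toward the prey's current cell, the hunter needs no search at move time at all, which is the intuition behind the reported per-move speedups.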
"Fast Algorithm for Catching a Prey Quickly in Known and Partially Known Game Maps." IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 1, pp. 193-199.
Pub Date: 2015-06-01 | DOI: 10.1109/TCIAIG.2014.2363042
F. Glavin, M. G. Madden
In current state-of-the-art commercial first person shooter games, computer-controlled bots, also known as nonplayer characters, can often be easily distinguished from those controlled by humans. Tell-tale signs such as failed navigation, "sixth sense" knowledge of human players' whereabouts, and deterministic, scripted behaviors are some of the causes of this. We propose, however, that one of the biggest indicators of nonhumanlike behavior in these games can be found in the weapon-shooting capability of the bot. Consistently perfect accuracy and "locking on" to opponents in their visual field from any distance are indicative capabilities of bots that are not found in human players. Traditionally, the bot is handicapped in some way, with either a timed reaction delay or a random perturbation of its aim, neither of which adapts or improves the bot's technique over time. We hypothesize that enabling the bot to learn the skill of shooting through trial and error, in the same way a human player learns, will lead to greater variation in gameplay and produce less predictable nonplayer characters. This paper describes a reinforcement learning shooting mechanism that adapts shooting over time based on a dynamic reward signal derived from the amount of damage caused to opponents.
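A heavily simplified sketch of the damage-as-reward idea follows. The states, actions, and toy environment are invented for illustration, and the bandit-style update is far simpler than the paper's mechanism; only the core loop — shoot, observe damage, reinforce the aim choice — is the same.

```python
import random

ACTIONS = ["lead_left", "aim_direct", "lead_right"]
STATES = ("left", "still", "right")   # hypothetical encoding of target movement

def shoot(state, action):
    """Toy environment: the reward is the damage dealt, mirroring the paper's
    dynamic reward signal; the correct aim depends on how the target moves."""
    best = {"left": "lead_left", "still": "aim_direct", "right": "lead_right"}
    return 10.0 if action == best[state] else 0.0

def train(episodes=2000, alpha=0.1, eps=0.2, seed=0):
    """Tabular one-step update with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        if rng.random() < eps:
            a = rng.choice(ACTIONS)                        # explore
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])  # exploit
        r = shoot(s, a)
        q[(s, a)] += alpha * (r - q[(s, a)])  # bandit-style update, no successor
    return q
```

After training, the greedy policy leads moving targets and aims directly at stationary ones — learned purely from the damage signal, with no scripted aim handicap.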
"Adaptive Shooting for Bots in First Person Shooter Games Using Reinforcement Learning." IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 1, pp. 180-192.
Pub Date: 2015-06-01 | DOI: 10.1109/TCIAIG.2014.2352255
I. Parberry
While the 15-puzzle has a long and interesting history dating back to the 1870s, it continues to appear as an app on mobile devices and as a minigame inside larger video games. We demonstrate a method for solving the 15-puzzle using only 4.7 MB of tables that, on a million random instances, found solutions of 65.21 moves on average and 95 moves in the worst case, in under a tenth of a millisecond per solution on current desktop computing hardware. These numbers compare favorably to the worst-case upper bound of 80 moves and to the greedy algorithm published in 1995, which required 118 moves on average and 195 moves in the worst case.
"A Memory-Efficient Method for Fast Computation of Short 15-Puzzle Solutions." IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 1, pp. 200-203.
Pub Date: 2015-06-01 | DOI: 10.1109/TCIAIG.2014.2341665
W. Raffe, Fabio Zambetta, Xiaodong Li, Kenneth O. Stanley
In this paper, we propose the strategy of integrating multiple evolutionary processes for personalized procedural content generation (PCG). In this vein, we provide a concrete solution that personalizes game maps in a top-down action-shooter game to suit an individual player's preferences. The need for personalized PCG is steadily growing as the player market diversifies, making it more difficult to design a game that will accommodate a broad range of preferences and skills. In the solution presented here, the geometry of the map and the density of content within that geometry are represented and generated in distinct evolutionary processes, with the player's preferences captured and utilized through a combination of interactive evolution and a player model formulated as a recommender system. All of these components were implemented in a test-bed game and evaluated through an unsupervised public experiment. The solution is examined against a plausible random baseline comparable to the random map generators implemented by independent game developers. Results indicate that the system as a whole received better ratings, that the geometry and content evolutionary processes explored more of the solution space, and that the mean prediction accuracy of the player preference models is equivalent to that reported in the recommender-systems literature. Furthermore, we discuss how each of the individual solutions can be used with other game genres and content types.
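The "distinct evolutionary processes" structure can be sketched as two independent runs of a minimal (1+1) evolutionary loop. In the paper, fitness comes from interactive evolution and a learned player-preference model; the fixed stand-in fitness functions below are illustrative assumptions only.

```python
import random

def evolve(genome_len, fitness, generations=500, rng=None):
    """Minimal (1+1) evolutionary loop over a bitstring genome. The paper runs
    one such process for map geometry and a separate one for content density."""
    rng = rng or random.Random(0)
    best = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(generations):
        # Flip each bit with probability 1/genome_len; accept ties (elitist).
        child = [b ^ (rng.random() < 1 / genome_len) for b in best]
        if fitness(child) >= fitness(best):
            best = child
    return best

# Stand-in fitness functions (illustrative assumptions, not from the paper):
geometry = evolve(32, lambda g: sum(g))            # geometry: maximize open cells
content = evolve(32, lambda g: -abs(sum(g) - 8))   # content: hit a density target
```

Keeping the two genomes and fitness signals separate is what lets each process explore its own part of the solution space, as the abstract's results describe.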
"Integrated Approach to Personalized Procedural Map Generation Using Evolutionary Algorithms." IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 1, pp. 139-155.
Pub Date: 2015-06-01 | DOI: 10.1109/TCIAIG.2014.2336702
D. Wang, A. Tan
Games are good test beds for evaluating AI methodologies. In recent years, there has been a vast amount of research on real-time computer games, beyond the traditional board and card games. This paper illustrates how we create agents by employing FALCON, a self-organizing neural network that performs reinforcement learning, to play the well-known first-person shooter Unreal Tournament. Rewards used for learning are either obtained from the game environment or estimated using the temporal difference learning scheme. In this way, the agents are able to acquire proper strategies and discover the effectiveness of different weapons without any guidance or intervention. The experimental results show that our agents learn effectively and appropriately from scratch while playing the game in real time. Moreover, with previously learned knowledge retained, our agent is able to adapt to a different opponent on a different map within a relatively short period of time.
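The temporal difference estimation mentioned in the abstract can be sketched as a plain TD(0) value update. FALCON itself is an ART-based self-organizing network, which this sketch does not attempt to reproduce; it only shows how delayed rewards propagate back to earlier states, and the episode format is an assumption for illustration.

```python
def td0(episodes, alpha=0.1, gamma=0.9):
    """TD(0) value estimation over recorded episodes. Each episode is a list of
    (state, reward) pairs; intermediate states bootstrap from their successor,
    which is how a delayed reward (e.g., a frag) reaches earlier decisions."""
    v = {}
    for episode in episodes:
        for (s, r), (s2, _) in zip(episode, episode[1:]):
            v.setdefault(s, 0.0)
            v.setdefault(s2, 0.0)
            v[s] += alpha * (r + gamma * v[s2] - v[s])
        s, r = episode[-1]                 # terminal step: no successor state
        v.setdefault(s, 0.0)
        v[s] += alpha * (r - v[s])
    return v
```

Replaying the same two-state episode repeatedly, the terminal reward first raises the value of the final state and then, discounted by gamma, the value of the state that preceded it.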
"Creating Autonomous Adaptive Agents in a Real-Time First-Person Shooter Computer Game." IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 1, pp. 123-138.
Pub Date: 2015-06-01 | DOI: 10.1109/TCIAIG.2014.2366555
Hendrik Baier, M. Winands
Monte Carlo tree search (MCTS) is a sampling-based search algorithm that is state of the art in a variety of games. In many domains, its Monte Carlo rollouts of entire games give it a strategic advantage over traditional depth-limited minimax search with αβ pruning. These rollouts can often detect long-term consequences of moves, freeing the programmer from having to capture these consequences in a heuristic evaluation function. But due to its highly selective tree, MCTS runs a higher risk than full-width minimax search of missing individual moves and falling into traps in tactical situations. This paper proposes MCTS-minimax hybrids that integrate shallow minimax searches into the MCTS framework. Three approaches are outlined, using minimax in the selection/expansion phase, the rollout phase, and the backpropagation phase of MCTS. Without assuming domain knowledge in the form of evaluation functions, these hybrid algorithms are a first step towards combining the strategic strength of MCTS and the tactical strength of minimax. We investigate their effectiveness in the test domains of Connect-4, Breakthrough, Othello, and Catch the Lion, and relate this performance to the tacticality of the domains.
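The rollout-phase hybrid can be sketched generically: before defaulting to a random move, the rollout policy does a one- and two-ply tactical check. The game interface below (`moves`, `result`, `winner` callables) is an assumption for illustration, not the paper's API, and real minimax-informed rollouts may search deeper.

```python
import random

def informed_rollout_move(state, moves, result, winner, player, rng):
    """One rollout step with a shallow tactical check, in the spirit of
    minimax-informed rollouts: take an immediate win, avoid moves that hand
    the opponent one, and otherwise play randomly."""
    legal = moves(state)
    # Depth 1: play a winning move if one exists.
    for m in legal:
        if winner(result(state, m)) == player:
            return m
    # Depth 2: keep only moves after which no reply wins for the opponent.
    safe = []
    for m in legal:
        after = result(state, m)
        if winner(after) is None and all(
                winner(result(after, m2)) in (None, player)
                for m2 in moves(after)):
            safe.append(m)
    return rng.choice(safe or legal)
```

Plugged into the rollout phase of MCTS, this check is what keeps simulations from blundering into the shallow tactical traps the abstract describes, at the cost of a small constant factor per rollout step.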
"MCTS-Minimax Hybrids." IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 1, pp. 167-179.
Pub Date: 2015-06-01 | DOI: 10.1109/TCIAIG.2014.2345398
Jakub Pawlewicz, R. Hayward, Philip Henderson, B. Arneson
For connection games such as Hex or Y or Havannah, finding guaranteed cell-to-cell connection strategies can be a computational bottleneck. In automated players and solvers, sets of such virtual connections are often found with Anshelevich's H-search algorithm: initialize trivial connections, and then repeatedly apply an AND-rule (for combining connections in series) and an OR-rule (for combining connections in parallel). We present FastVC Search, a new algorithm for finding such connections. FastVC Search is more effective than H-search when finding a representative set of connections quickly is more important than finding a larger set of connections slowly. We tested FastVC Search in an alpha-beta player Wolve, a Monte Carlo tree search player MoHex, and a proof number search implementation called Solver. It does not strengthen Wolve, but it significantly strengthens MoHex and Solver.
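The AND/OR combining rules can be sketched with a connection represented as an (endpoint, endpoint, carrier) triple. This is a simplification: real H-search also distinguishes semi-connections from full connections and tracks key cells, which the sketch below glosses over.

```python
def and_rule(c1, c2):
    """Series (AND) rule: connections x-m and m-y with disjoint carriers
    combine into a connection x-y; the shared endpoint m joins the carrier."""
    x, m, car1 = c1
    m2, y, car2 = c2
    if m != m2 or x == y or (car1 & car2) or m in car1 or m in car2:
        return None     # rule does not apply
    return (x, y, car1 | car2 | {m})

def or_rule(semis):
    """Parallel (OR) rule: semi-connections between the same endpoints whose
    carriers have an empty common intersection form a full connection (the
    opponent cannot block all of them with one move)."""
    x, y, _ = semis[0]
    if any(s[0] != x or s[1] != y for s in semis):
        return None
    if set.intersection(*(set(s[2]) for s in semis)):
        return None     # a single shared cell would let one move kill them all
    return (x, y, set().union(*(set(s[2]) for s in semis)))
```

H-search repeatedly applies these two rules to a growing set of connections until a fixpoint; FastVC Search, per the abstract, trades some of that completeness for speed.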
"Stronger Virtual Connections in Hex." IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 1, pp. 156-166.
Pub Date: 2015-06-01 | DOI: 10.1109/TCIAIG.2014.2317832
Jr-Chang Chen, Ting-Yu Lin, Bo-Nian Chen, T. Hsu
Chinese Dark Chess, a nondeterministic two-player game, has not been studied thoroughly. State-of-the-art programs focus on using search algorithms to explore the probabilistic behavior of flipping unrevealed pieces in the opening and midgame phases. There has been comparatively little research on opening books and endgame databases, especially endgames with nondeterministic flips. In this paper, we propose an equivalence relation that classifies the complex piece relations between the material combinations of each player, and we derive a partition of all such material combinations. The technique can be applied to endgame database compression to reduce the number of endgames that need to be constructed. As a result, the computation time and the size of endgame databases can be reduced substantially. Furthermore, understanding the piece relations facilitates the development of a well-designed evaluation function and enhances search efficiency. In Chinese Dark Chess, the number of nontrivial material combinations composed of only revealed pieces is 8,497,176, and the number that contain at least one unrevealed piece is 239,980,775,397. Under the proposed method, the compression rates for these material combinations reach 28.93% and 42.52%, respectively; applied to endgames of three to eight pieces, the compression rates reach 5.82% and 5.98%, respectively.
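The compression argument rests on building one endgame database per equivalence class rather than per material combination. A generic sketch of that bookkeeping follows; the toy piece sets and the equivalence key (matching piece-count multisets) are purely illustrative and are not the paper's relation.

```python
from itertools import product

def partition(items, key):
    """Group items into equivalence classes under `key`; only one endgame
    database per class then needs to be constructed."""
    classes = {}
    for it in items:
        classes.setdefault(key(it), []).append(it)
    return classes

# Hypothetical toy: a material combination is a pair of per-player piece
# tuples; assume (for illustration only) that combinations are equivalent
# whenever their per-side piece counts match as a multiset.
combos = list(product([("R",), ("R", "R"), ("C",)], repeat=2))
classes = partition(combos, key=lambda c: tuple(sorted(len(side) for side in c)))
print(len(combos), "combinations in", len(classes), "classes")
```

The ratio of classes to combinations is the compression rate the abstract reports: the finer the real equivalence relation, the closer that rate gets to figures like 5.82%.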
"Equivalence Classes in Chinese Dark Chess Endgames." IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 1, pp. 109-122.
Pub Date: 2015-04-21 | DOI: 10.1109/TCIAIG.2015.2424932
Yun-Gyung Cheong, A. K. Jensen, Elin Rut Gudnadottir, Byung-Chull Bae, J. Togelius
While games are a popular social medium for children, there is a real risk that these children are exposed to potential sexual assault. A number of studies have already addressed this issue; however, the data used in previous research did not properly represent the real chats found in multiplayer online games. To address this, we obtained real chat data from MovieStarPlanet, a massively multiplayer online game for children. The research described in this paper aims to detect predatory behaviors in the chats using machine learning methods. Achieving high accuracy on this task required extensive preprocessing. We describe three different strategies for data selection and preprocessing, and we extensively compare the performance of different learning algorithms on the resulting data sets and features.
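As a minimal stand-in for the classifiers compared in the paper, here is a bag-of-words multinomial Naive Bayes over chat lines, written with only the standard library. The training phrases are invented toy data; the paper's preprocessing strategies and feature sets are far more elaborate than lowercasing and whitespace splitting.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace smoothing over bag-of-words lines."""

    def fit(self, lines, labels):
        self.word_counts = defaultdict(Counter)   # per-class word frequencies
        self.class_counts = Counter(labels)
        self.vocab = set()
        for line, y in zip(lines, labels):
            words = line.lower().split()
            self.word_counts[y].update(words)
            self.vocab.update(words)
        return self

    def predict(self, line):
        def log_score(y):
            total = sum(self.word_counts[y].values())
            s = math.log(self.class_counts[y])    # class prior (unnormalized)
            for w in line.lower().split():
                # Add-one smoothing so unseen words don't zero out the class.
                s += math.log((self.word_counts[y][w] + 1)
                              / (total + len(self.vocab)))
            return s
        return max(self.class_counts, key=log_score)
```

In practice, class imbalance (predatory lines are rare) is the hard part of this task, which is why the abstract emphasizes data selection as much as the choice of learner.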
"Detecting Predatory Behavior in Game Chats." IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 1, pp. 220-232.
Pub Date: 2015-03-05 | DOI: 10.1109/TCIAIG.2015.2410757
Brent E. Harrison, D. Roberts
This paper shows how game analytics can be used to dynamically adapt casual game environments in order to increase session-level retention. Our technique uses game analytics to create an abstracted game-analytic space that makes the problem tractable. We then model player retention in this space and use these models to make guided changes to game analytics in order to bring about a targeted distribution of game states that will, in turn, influence player behavior. Experiments showed that the adaptive versions of two different casual games, Scrabblesque and Sidequest: The Game, better fit a target distribution of game states while also significantly reducing the quitting rate compared to the nonadaptive versions. We showed that these gains did not come at the cost of player experience by performing a psychometric evaluation in which we measured players' intrinsic motivation and engagement with the game environments. In both cases, players playing the adaptive version of the games reported higher intrinsic motivation and engagement scores than players playing the nonadaptive version.
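The "guided changes" step can be sketched as reweighting the probabilities that induce game states toward a target distribution. The multiplicative update below is an assumption for illustration, not the paper's retention model; it only shows the shape of nudging an observed state distribution toward a targeted one.

```python
def adapt_weights(weights, observed, target, rate=0.5):
    """Nudge state-inducing weights so that the observed distribution of game
    states drifts toward a target distribution. `rate` in (0, 1] damps the
    correction so the game does not change abruptly between sessions."""
    new = {}
    for state, w in weights.items():
        obs = max(observed.get(state, 0.0), 1e-6)   # avoid division by zero
        new[state] = w * (target.get(state, 0.0) / obs) ** rate
    total = sum(new.values())
    return {s: w / total for s, w in new.items()}
```

States that are over-represented relative to the target get their weights pushed down, and under-represented ones pushed up, so repeated application drives the induced distribution toward the target.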
"An Analytic and Psychometric Evaluation of Dynamic Game Adaption for Increasing Session-Level Retention in Casual Games." IEEE Transactions on Computational Intelligence and AI in Games, vol. 7, no. 1, pp. 207-219.