Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860446
P. R. Williams, Diego Perez Liebana, S. Lucas
This paper introduces the revival of the popular Ms. Pac-Man Versus Ghost Team competition. We present an updated game engine with Partial Observability constraints, a new Multi-Agent Systems approach to developing Ghost agents, and several sample controllers to ease the development of entries. A restricted communication protocol is provided for the Ghosts, creating a more challenging environment than before. The competition will debut at the IEEE Computational Intelligence and Games Conference 2016. Preliminary results showing the effects of Partial Observability and the benefits of simple communication are also presented.
Title: Ms. Pac-Man Versus Ghost Team CIG 2016 competition
Published in: 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
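The restricted Ghost communication mentioned above can be pictured as a minimal message-passing sketch under partial observability, assuming a sight radius and a position-plus-tick message format; the names (`Ghost`, `Message`, `SIGHT_RADIUS`) are illustrative, not the competition API:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

SIGHT_RADIUS = 3  # assumed: ghosts only observe tiles within this Manhattan distance


@dataclass
class Message:
    sender: int
    pacman_pos: Tuple[int, int]  # position at which the sender saw Ms. Pac-Man
    tick: int                    # game tick of the sighting


class Ghost:
    def __init__(self, gid: int, pos: Tuple[int, int]):
        self.gid = gid
        self.pos = pos
        self.belief: Optional[Message] = None  # most recent sighting known

    def observe(self, pacman_pos: Tuple[int, int], tick: int) -> Optional[Message]:
        """Produce a message only if Ms. Pac-Man is inside the sight radius."""
        dist = abs(self.pos[0] - pacman_pos[0]) + abs(self.pos[1] - pacman_pos[1])
        if dist <= SIGHT_RADIUS:
            msg = Message(self.gid, pacman_pos, tick)
            self.belief = msg
            return msg
        return None

    def receive(self, msg: Message) -> None:
        """Restricted protocol: only position + tick; keep the freshest sighting."""
        if self.belief is None or msg.tick > self.belief.tick:
            self.belief = msg


def broadcast(ghosts: List[Ghost], pacman_pos: Tuple[int, int], tick: int) -> None:
    """Each ghost that sees Ms. Pac-Man shares the sighting with its teammates."""
    outbox = [m for g in ghosts if (m := g.observe(pacman_pos, tick))]
    for msg in outbox:
        for g in ghosts:
            if g.gid != msg.sender:
                g.receive(msg)
```

Even this minimal position-plus-tick channel lets a ghost with no line of sight act on a teammate's sighting, which is the benefit the preliminary results allude to.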
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860383
M. D. Waard, Diederik M. Roijers, S. Bakkes
General video game playing is a challenging research area in which the goal is to find one algorithm that can play many games successfully. “Monte Carlo Tree Search” (MCTS) is a popular algorithm that has often been used for this purpose. It incrementally builds a search tree based on observed states after applying actions. However, the MCTS algorithm always plans over actions and does not incorporate any higher-level planning, as one would expect from a human player. Furthermore, although many games have similar game dynamics, often no prior knowledge is available to general video game playing algorithms. In this paper, we introduce a new algorithm called “Option Monte Carlo Tree Search” (O-MCTS). It offers general video game knowledge and high-level planning in the form of “options”, which are action sequences aimed at achieving a specific subgoal. Additionally, we introduce “Option Learning MCTS” (OL-MCTS), which applies a progressive widening technique to the expected returns of options in order to focus exploration on fruitful parts of the search tree. Our new algorithms are compared to MCTS on a diverse set of twenty-eight games from the General Video Game AI competition. Our results indicate that, by using MCTS's efficient tree-searching technique on options, O-MCTS outperforms MCTS on most of the games, especially those in which a certain subgoal must be reached before the game can be won. Lastly, we show that OL-MCTS improves its performance on specific games by learning expected values for options and shifting its bias toward higher-valued options.
Title: Monte Carlo Tree Search with options for general video game playing
Published in: 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
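The “options” abstraction can be sketched as a subgoal-directed action sequence with its own termination test, selected at tree nodes by a standard UCB1 rule; the grid world, the greedy subgoal policy, and the helper names below are illustrative assumptions, not the O-MCTS source:

```python
import math

class Option:
    """An option: a policy aimed at a specific subgoal, with a termination test."""

    def __init__(self, name, subgoal):
        self.name = name
        self.subgoal = subgoal  # target (x, y) cell

    def next_action(self, state):
        """Greedy step toward the subgoal; one primitive action per call."""
        x, y = state
        gx, gy = self.subgoal
        if x != gx:
            return (1 if gx > x else -1, 0)
        return (0, 1 if gy > y else -1)

    def terminated(self, state):
        return state == self.subgoal


def run_option(state, option, max_steps=50):
    """Execute the option's policy until its termination condition holds."""
    steps = 0
    while not option.terminated(state) and steps < max_steps:
        dx, dy = option.next_action(state)
        state = (state[0] + dx, state[1] + dy)
        steps += 1
    return state, steps


def ucb1(value, visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score used to select among options at a tree node."""
    if visits == 0:
        return float("inf")
    return value / visits + c * math.sqrt(math.log(parent_visits) / visits)
```

Planning over such multi-step options instead of primitive actions is what lets the search reach a distant subgoal (e.g., a key) within the same budget.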
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860411
Tobias Graf, M. Platzner
Simulation Balancing is an optimization algorithm for automatically tuning the parameters of a playout policy used inside Monte Carlo Tree Search. The algorithm fits a policy so that its expected playout result matches given target values on a training set. Up to now it has been successfully applied to Computer Go on small 9 × 9 boards but has failed on larger board sizes such as 19 × 19. On these large boards, apprenticeship learning, which fits a policy so that it closely follows an expert, continues to be the algorithm of choice. In this paper we introduce several improvements to the original simulation balancing algorithm and test their effectiveness in Computer Go. The proposed additions remove the need to generate target values by deep searches, optimize faster, and make the algorithm less prone to overfitting. The experiments show that simulation balancing improves the playing strength of a Go program using apprenticeship learning by more than 200 Elo on the large 19 × 19 board.
Title: Monte-Carlo simulation balancing revisited
Published in: 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-7
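The core fitting step — matching a policy's expected result to a target rather than imitating expert moves — can be illustrated with a one-parameter softmax policy tuned by gradient descent; the toy features, rewards, target, and learning rate are assumptions, not the paper's setup:

```python
import math

def softmax(scores):
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def expected_result(theta, features, rewards):
    """Expected playout result of the softmax policy with parameter theta."""
    probs = softmax([theta * f for f in features])
    return sum(p * r for p, r in zip(probs, rewards))

def balance(theta, features, rewards, target, lr=0.5, iters=200):
    """Gradient descent on (E[result] - target)^2 w.r.t. the policy parameter."""
    for _ in range(iters):
        probs = softmax([theta * f for f in features])
        v = sum(p * r for p, r in zip(probs, rewards))
        mean_f = sum(p * f for p, f in zip(probs, features))
        # d E[result] / d theta in policy-gradient form
        grad_v = sum(p * r * (f - mean_f) for p, r, f in zip(probs, rewards, features))
        theta -= lr * 2.0 * (v - target) * grad_v
    return theta

features = [1.0, -1.0]  # toy per-move features
rewards = [1.0, 0.0]    # toy outcomes: move 0 wins, move 1 loses
theta = balance(0.0, features, rewards, target=0.8)
```

After fitting, the policy's expected result sits at the target (0.8) rather than at either extreme, which is the balancing property apprenticeship learning does not provide.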
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860417
D. Aversa, Sebastian Sardiña, S. Vassos
Inventory-Aware Pathfinding is concerned with finding paths while taking into account that picking up items, e.g., keys, allows the character to unlock blocked pathways, e.g., locked doors. In this work we present a pruning method and a preprocessing method that can significantly improve the scalability of such approaches. We apply our methods to the recent approach of Inventory-Driven Jump-Point Search (InvJPS). First, we introduce InvJPS+, which prunes large parts of the search space by favoring short detours to pick up items, offering a trade-off between efficiency and optimality. Second, we propose a preprocessing step that makes it possible to decide at runtime which items, e.g., keys, are worth using, thus pruning potentially unnecessary items before the search starts. We show results for combinations of the pruning and preprocessing methods, illustrating the best choices over various scenarios.
Title: Pruning and preprocessing methods for inventory-aware pathfinding
Published in: 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
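The state representation that makes inventory-aware pathfinding harder than plain pathfinding can be sketched as a breadth-first search over (cell, inventory) pairs, so the planner can detour for a key that unlocks a door; the grid encoding below is an assumption for illustration ('.' floor, '#' wall, 'k' key, 'D' door requiring the key, 'S' start, 'G' goal):

```python
from collections import deque

def shortest_path(grid):
    """BFS over (position, inventory) states; returns step count or None."""
    rows = [list(r) for r in grid]
    start = goal = None
    for y, row in enumerate(rows):
        for x, c in enumerate(row):
            if c == 'S':
                start = (x, y)
            if c == 'G':
                goal = (x, y)
    frontier = deque([(start, frozenset(), 0)])
    seen = {(start, frozenset())}
    while frontier:
        (x, y), inv, dist = frontier.popleft()
        if (x, y) == goal:
            return dist
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if not (0 <= ny < len(rows) and 0 <= nx < len(rows[ny])):
                continue
            c = rows[ny][nx]
            if c == '#' or (c == 'D' and 'k' not in inv):
                continue  # wall, or locked door without the key
            ninv = inv | {'k'} if c == 'k' else inv  # pick up the key in passing
            if ((nx, ny), ninv) not in seen:
                seen.add(((nx, ny), ninv))
                frontier.append(((nx, ny), ninv, dist + 1))
    return None
```

Because the inventory multiplies the state space, pruning which items are worth considering — the paper's contribution — matters far more here than in plain grid search.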
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860401
Shuo Xu, Clark Verbrugge
Basic attack and defense actions in games are often extended by more powerful actions, including the ability to temporarily incapacitate an enemy through sleep or stun, the ability to restore health through healing, and others. Use of these abilities can have a dramatic impact on combat outcome, and so is typically strongly limited. This implies a non-trivial decision process, and for an AI to effectively use these actions it must consider the potential benefit, opportunity cost, and the complexity of choosing an appropriate target. In this work we develop a formal model to explore optimized use of sleep and heal in small-scale combat scenarios. We consider different heuristics that can guide the use of such actions; experimental work based on Pokémon combats shows that significant improvements are possible over the basic, greedy strategies commonly employed by AI agents. Our work allows for better performance by companion and enemy AIs, and also gives guidance to game designers looking to incorporate advanced combat actions without overly unbalancing combat.
Title: Heuristics for sleep and heal in combat
Published in: 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
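A heuristic of the kind described — weighing a limited-use action's benefit against the always-available attack — might be sketched as a simple scoring rule; the weights and state variables below are illustrative assumptions, not the authors' formal model:

```python
def choose_action(me_hp, me_max_hp, enemy_hp, enemy_asleep, heals_left, sleeps_left):
    """Score each currently available action and pick the best; higher is better."""
    scores = {"attack": 1.0}  # baseline: chip damage is always available
    if heals_left > 0:
        # heal is worth more the lower our own health is
        scores["heal"] = 2.0 * (1.0 - me_hp / me_max_hp)
    if sleeps_left > 0 and not enemy_asleep:
        # sleep buys free turns; assumed more valuable against a healthy enemy
        scores["sleep"] = 1.5 * (enemy_hp / 100.0)
    return max(scores, key=scores.get)
```

Even this crude rule already beats the fully greedy "always attack" policy in the situations the paper targets: it heals when near death and sleeps a healthy enemy instead of trading blows.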
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860435
Theodosis Georgiou, Y. Demiris
Real-time adaptation of a computer game's content to the user's skills and abilities can enhance the player's engagement and immersion. Understanding the user's potential while playing is essential for the successful procedural generation of user-tailored content. We investigate how player models can be created in car racing games. Our user model combines data from unobtrusive sensors collected while the user plays a car racing simulator. It extracts features through machine learning techniques, which are then used to understand the user's gameplay by drawing on the educational frameworks of the Concept of Flow and the Zone of Proximal Development. The end result is a newly generated track that fits the user's needs, aiding both the driver's training and their engagement in the game. To validate that the system designs personalised tracks, we associated the average performance of 41 users who played the game with the difficulty factor of the generated track. In addition, the variation in the paths of the implemented tracks between users provides a good indicator of the system's suitability.
Title: Personalised track design in car racing games
Published in: 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
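The flow-channel reasoning such a generator relies on can be sketched as nudging the next track's difficulty toward the player's estimated skill, keeping the player between boredom and anxiety; the [0, 1] scales, threshold, and step size are assumptions:

```python
def next_track_difficulty(skill, current_difficulty, step=0.1):
    """Nudge difficulty toward the player's skill estimate (both in [0, 1])."""
    if current_difficulty < skill - step:   # too easy -> boredom, ramp up
        return min(1.0, current_difficulty + step)
    if current_difficulty > skill + step:   # too hard -> anxiety, ease off
        return max(0.0, current_difficulty - step)
    return current_difficulty               # inside the flow channel, hold
```

The paper's validation — correlating average performance of 41 players with generated-track difficulty — is essentially a check that a mapping like this is being applied consistently.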
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860425
Marlene Beyer, Aleksandr Agureikin, Alexander Anokhin, Christoph Laenger, Felix Nolte, Jonas Winterberg, Marcel Renka, Martin Rieger, Nicolas Pflanzl, M. Preuss, Vanessa Volz
Game balancing is a recurring problem that currently requires a lot of manual work, usually following a game designer's intuition or rules-of-thumb. To what extent can or should the balancing process be automated? We establish a process model that integrates both manual and automated balancing approaches. Artificial agents are employed to automatically assess the desirability of a game. We demonstrate the feasibility of implementing the model and analyze the resulting solutions from its application to a simple video game.
Title: An integrated process for game balancing
Published in: 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
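The automated half of such a process model — artificial agents assessing desirability — can be sketched as a loop that plays many games and nudges a balance parameter toward a 50% win rate; the toy game, learning rate, and round counts are assumptions, standing in for real agent playtests:

```python
import random

def play(p1_power, rng):
    """Toy game: player 1 wins with probability p / (p + 1)."""
    return 1 if rng.random() < p1_power / (p1_power + 1.0) else 2

def balance_power(p1_power, rounds=30, games=2000, lr=2.0, seed=0):
    """Repeatedly measure player 1's win rate and nudge its power toward 50%."""
    rng = random.Random(seed)
    for _ in range(rounds):
        wins = sum(1 for _ in range(games) if play(p1_power, rng) == 1)
        win_rate = wins / games
        p1_power -= lr * (win_rate - 0.5)  # over 50% -> weaken, under -> strengthen
    return p1_power
```

In the integrated process the paper proposes, a designer would then review this automatically tuned value rather than search for it by intuition alone.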
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860418
Andrea Clerico, Cindy Chamberland, Mark Parent, P. Michon, S. Tremblay, T. Falk, Jean-Christophe Gagnon, P. Jackson
The key to developing adaptive gameplay is the capability to monitor and predict, in real time, the player's experience (herein, the fun factor). To achieve this goal, we rely on biometrics and machine learning algorithms to capture a physiological signature that reflects the player's affective state during the game. In this paper, we report research and development efforts toward the real-time monitoring of the player's level of fun during a commercially available video game session using physiological signals. The use of a triple-classifier system allows the transformation of players' physiological responses and their fluctuations into a single yet multifaceted measure of fun, using non-linear gameplay. Our results suggest that cardiac and respiratory activities provide the best predictive power. Moreover, the level of performance reached when classifying the level of fun (70% accuracy) shows that machine learning approaches combined with physiological measures can contribute to predicting the player's experience in an objective manner.
Title: Biometrics and classifier fusion to predict the fun-factor in video gaming
Published in: 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
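The fusion step of a triple-classifier system can be sketched as a soft vote over the three per-signal classifiers' probability outputs; the averaging rule and threshold are illustrative assumptions, not necessarily the paper's fusion scheme:

```python
def fuse(probabilities, threshold=0.5):
    """Soft-vote fusion: average per-classifier P(high fun), then threshold.

    `probabilities` holds one probability per classifier, e.g. one each for
    cardiac, respiratory, and a third physiological channel (assumed names).
    """
    avg = sum(probabilities) / len(probabilities)
    label = "high_fun" if avg >= threshold else "low_fun"
    return label, avg
```

Averaging rather than majority-voting keeps each channel's confidence in play, so a single very confident classifier (e.g. cardiac, the strongest predictor in the results) can tip an otherwise ambiguous decision.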
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860395
Dennis J. N. J. Soemers, M. Winands
Hierarchical Task Network Planning is an Automated Planning technique used, among other domains, in Artificial Intelligence for video games. Generated plans cannot always be fully executed, for example due to nondeterminism or imperfect information. In such cases, it is often desirable to re-plan. This is typically done completely from scratch, or using techniques that require the conditions and effects of tasks to be defined in a specific format (typically based on First-Order Logic). In this paper, an approach for Plan Reuse is proposed that manipulates the order in which the search tree is traversed by using a similarity function. It is tested in the SimpleFPS domain, which simulates a First-Person Shooter game, and shown to be capable of finding (optimal) plans with less search effort on average when re-planning for variations of previously solved problems.
Title: Hierarchical Task Network Plan Reuse for video games
Published in: 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8
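The similarity-guided traversal can be sketched as sorting each task's candidate decompositions so the ones most like the previously executed plan are tried first; the shared-action overlap measure is an illustrative assumption, not the paper's exact similarity function:

```python
def similarity(candidate, previous_plan):
    """Fraction of the candidate's actions that also appear in the old plan."""
    if not candidate:
        return 0.0
    prev = set(previous_plan)
    return sum(1 for a in candidate if a in prev) / len(candidate)

def order_decompositions(candidates, previous_plan):
    """Most-similar-first traversal order for a task's decompositions."""
    return sorted(candidates, key=lambda c: similarity(c, previous_plan), reverse=True)
```

Because only the traversal order changes, completeness and optimality of the underlying HTN search are untouched; re-planning just tends to find a valid plan down a familiar branch sooner.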
Pub Date: 2016-09-01 | DOI: 10.1109/CIG.2016.7860402
G. Greenwood
Social dilemmas force individuals to choose between cooperation, which benefits the group, and defection, which benefits the individual. The unfortunate outcome in most social dilemmas is mutual defection, where nobody benefits. Researchers frequently use mathematical games such as public goods games to help identify circumstances that might improve cooperation levels within a population. Altruistic punishment has shown promise in these games. Many real-world social dilemmas are expressed via a tragedy-of-the-commons metaphor. This paper describes an investigation designed to determine whether altruistic punishment can help resolve tragedy-of-the-commons social dilemmas. Simulation results indicate that it not only helps resolve a tragedy of the commons but also effectively deals with the associated first-order and second-order free-rider problems.
Title: Altruistic punishment can help resolve tragedy of the commons social dilemmas
Published in: 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-7
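The mechanism under study can be illustrated with a toy public goods round in which punishing cooperators pay a personal cost to fine defectors, lowering defection's payoff; all parameter values are assumptions, not the paper's simulation settings:

```python
def public_goods_round(strategies, contribution=1.0, multiplier=3.0,
                       fine=1.0, punish_cost=0.3):
    """One round of a public goods game with altruistic punishment.

    strategies: list of 'C' (cooperator), 'D' (defector),
    or 'P' (punishing cooperator). Returns one payoff per player.
    """
    n = len(strategies)
    pot = sum(contribution for s in strategies if s in ("C", "P"))
    share = pot * multiplier / n  # everyone receives an equal share of the pot
    punishers = sum(1 for s in strategies if s == "P")
    defectors = sum(1 for s in strategies if s == "D")
    payoffs = []
    for s in strategies:
        pay = share
        if s in ("C", "P"):
            pay -= contribution          # cooperators paid into the pot
        if s == "D":
            pay -= fine * punishers      # each punisher fines each defector
        if s == "P":
            pay -= punish_cost * defectors  # punishing is itself costly (altruistic)
        payoffs.append(pay)
    return payoffs
```

The second-order free-rider problem is visible in the sketch too: plain cooperators ('C') out-earn punishers ('P') because they enjoy the deterrent without paying the punishment cost, which is the harder problem the paper's simulations address.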