{"title":"为动作平台电子游戏框架发展通用策略","authors":"Karine da Silva Miras de Araújo, F. O. França","doi":"10.1109/CEC.2016.7743938","DOIUrl":null,"url":null,"abstract":"Computational Intelligence in Games comprises many challenges such as the procedural level generation, evolving adversary difficulty and the learning of autonomous playing agents. This last challenge has the objective of creating an autonomous playing agent capable of winning against an opponent on an specific game. Whereas a human being can learn a general winning strategy (i.e., avoid the obstacles and defeat the enemies), learning algorithms have a tendency to overspecialize for a given training scenario (i.e., perform an exact sequence of actions to win), not being able to face variations of the original scenario. To further study this problem, we have applied three variations of Neuroevolution algorithms to the EvoMan game playing learning framework with the main objective of developing an autonomous agent capable of playing in different scenarios than those observed during the training stages. This framework is based on the bosses fights of the well known game called Mega Man. The experiments show that the evolved agents are not capable of winning every challenge imposed to them but they are still capable of learning a generalized behavior.","PeriodicalId":6344,"journal":{"name":"2009 IEEE Congress on Evolutionary Computation","volume":"38 1","pages":"1303-1310"},"PeriodicalIF":0.0000,"publicationDate":"2016-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Evolving a generalized strategy for an action-platformer video game framework\",\"authors\":\"Karine da Silva Miras de Araújo, F. O. França\",\"doi\":\"10.1109/CEC.2016.7743938\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Computational Intelligence in Games comprises many challenges such as the procedural level generation, evolving adversary difficulty and the learning of autonomous playing agents. This last challenge has the objective of creating an autonomous playing agent capable of winning against an opponent on an specific game. Whereas a human being can learn a general winning strategy (i.e., avoid the obstacles and defeat the enemies), learning algorithms have a tendency to overspecialize for a given training scenario (i.e., perform an exact sequence of actions to win), not being able to face variations of the original scenario. To further study this problem, we have applied three variations of Neuroevolution algorithms to the EvoMan game playing learning framework with the main objective of developing an autonomous agent capable of playing in different scenarios than those observed during the training stages. This framework is based on the bosses fights of the well known game called Mega Man. 
The experiments show that the evolved agents are not capable of winning every challenge imposed to them but they are still capable of learning a generalized behavior.\",\"PeriodicalId\":6344,\"journal\":{\"name\":\"2009 IEEE Congress on Evolutionary Computation\",\"volume\":\"38 1\",\"pages\":\"1303-1310\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2009 IEEE Congress on Evolutionary Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CEC.2016.7743938\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE Congress on Evolutionary Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CEC.2016.7743938","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Evolving a generalized strategy for an action-platformer video game framework

Karine da Silva Miras de Araújo, F. O. França
2016 IEEE Congress on Evolutionary Computation (CEC), pp. 1303-1310, July 2016
DOI: 10.1109/CEC.2016.7743938
Computational Intelligence in Games comprises many challenges, such as procedural level generation, evolving adversary difficulty, and the learning of autonomous playing agents. The last of these aims to create an autonomous playing agent capable of winning against an opponent in a specific game. Whereas a human being can learn a general winning strategy (i.e., avoid the obstacles and defeat the enemies), learning algorithms tend to overspecialize to a given training scenario (i.e., perform an exact sequence of actions to win) and cannot cope with variations of the original scenario. To further study this problem, we applied three variations of Neuroevolution algorithms to the EvoMan game-playing learning framework, with the main objective of developing an autonomous agent capable of playing in scenarios different from those observed during the training stages. This framework is based on the boss fights of the well-known game Mega Man. The experiments show that the evolved agents are not capable of winning every challenge imposed on them, but they are still able to learn a generalized behavior.
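The abstract does not specify which three Neuroevolution variants were used, so the following is only a generic sketch of the underlying idea: evolve the weights of a fixed-topology feed-forward network by mutation and selection, scoring each candidate across several training scenarios. This is not the authors' implementation; the network sizes, the selection scheme, and the toy simulate() episode are all hypothetical stand-ins for the real game framework.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical sizes: sensor inputs, hidden units, and available actions.
N_INPUTS, N_HIDDEN, N_OUTPUTS = 20, 10, 5
POP_SIZE, N_GENERATIONS, SIGMA = 50, 100, 0.1
N_WEIGHTS = N_INPUTS * N_HIDDEN + N_HIDDEN * N_OUTPUTS

def decode(genome):
    """Split a flat genome into the two weight matrices of a feed-forward net."""
    cut = N_INPUTS * N_HIDDEN
    return (genome[:cut].reshape(N_INPUTS, N_HIDDEN),
            genome[cut:].reshape(N_HIDDEN, N_OUTPUTS))

def act(genome, sensors):
    """Map sensor readings to one activation per available action."""
    w1, w2 = decode(genome)
    return np.tanh(np.tanh(sensors @ w1) @ w2)

def simulate(genome, scenario):
    """Toy stand-in for one game episode: score how well the agent's actions
    match a scenario-specific target policy (a real run would play the game)."""
    actions = act(genome, scenario["sensors"])
    return -np.mean((actions - scenario["targets"]) ** 2)

def fitness(genome, scenarios):
    """Average the episode score over all training scenarios, so selection
    favors behavior that works across scenarios rather than in just one."""
    return np.mean([simulate(genome, s) for s in scenarios])

def evolve(scenarios):
    pop = rng.normal(0.0, 1.0, size=(POP_SIZE, N_WEIGHTS))
    for _ in range(N_GENERATIONS):
        scores = np.array([fitness(g, scenarios) for g in pop])
        # Truncation selection: keep the best half, refill with mutated copies.
        elite = pop[np.argsort(scores)[-POP_SIZE // 2:]]
        pop = np.vstack([elite, elite + rng.normal(0.0, SIGMA, elite.shape)])
    scores = np.array([fitness(g, scenarios) for g in pop])
    return pop[np.argmax(scores)]

if __name__ == "__main__":
    # Random sensor/target pairs standing in for distinct boss fights.
    scenarios = [{"sensors": rng.normal(size=(32, N_INPUTS)),
                  "targets": rng.uniform(-1.0, 1.0, size=(32, N_OUTPUTS))}
                 for _ in range(4)]
    best = evolve(scenarios)
    print("best training fitness:", fitness(best, scenarios))
```

Averaging fitness over several scenarios, as in fitness() above, is one common way to bias evolution toward the kind of generalized behavior the abstract describes; the paper's actual algorithms and evaluation protocol may differ.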