{"title":"Improving Temporal Difference game agent control using a dynamic exploration during control learning","authors":"L. Galway, D. Charles, Michaela M. Black","doi":"10.1109/CIG.2009.5286497","DOIUrl":null,"url":null,"abstract":"This paper investigates the use of a dynamically generated exploration rate when using a reinforcement learning-based game agent controller within a dynamic digital game environment. Temporal Difference learning has been employed for the real-time gereration of reactive game agent behaviors within a variation of classic arcade game Pac-Man. Due to the dynamic nature of the game environment initial experiments made use of static, low value for the exploration rate utilized by action selection during learning. However, further experiments were conducted which dynamically generated a value for the exploration rate prior to learning using a genetic algorithm. Results obtained have shown that an improvement in the overall performance of the game agent controller may be achieved when a dynamic exploration rate is used. In particular, if the use of the genetic algorithm is controlled by a measure of the current performance of the game agent, further gains in the overall performance of the game agent may be achieved.","PeriodicalId":358795,"journal":{"name":"2009 IEEE Symposium on Computational Intelligence and Games","volume":"84 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2009 IEEE Symposium on Computational Intelligence and Games","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIG.2009.5286497","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
This paper investigates the use of a dynamically generated exploration rate when using a reinforcement learning-based game agent controller within a dynamic digital game environment. Temporal Difference learning has been employed for the real-time generation of reactive game agent behaviors within a variation of the classic arcade game Pac-Man. Due to the dynamic nature of the game environment, initial experiments made use of a static, low value for the exploration rate utilized in action selection during learning. However, further experiments were conducted in which a value for the exploration rate was dynamically generated prior to learning using a genetic algorithm. The results obtained show that an improvement in the overall performance of the game agent controller may be achieved when a dynamic exploration rate is used. In particular, if the use of the genetic algorithm is controlled by a measure of the current performance of the game agent, further gains in the overall performance of the game agent may be achieved.
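As a rough sketch of the mechanism the abstract describes, the Python below illustrates epsilon-greedy action selection driven by an exploration rate that a genetic algorithm tunes before learning begins. This is an illustration under stated assumptions only: the function names, population size, mutation scale, and fitness signal are invented for the example and are not details taken from the paper.

```python
import random

def epsilon_greedy(q_values, epsilon):
    """Pick a random action with probability epsilon, else the greedy action.

    In a TD-learning controller, q_values would be the action-value
    estimates for the agent's current state.
    """
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def evolve_epsilon(fitness, population_size=20, generations=10,
                   mutation_scale=0.05):
    """Toy genetic algorithm searching for an exploration rate in [0, 1].

    `fitness` maps a candidate epsilon to a score, e.g. the agent's mean
    game score over a few evaluation episodes (an assumption; the paper's
    actual fitness measure may differ).
    """
    population = [random.random() for _ in range(population_size)]
    for _ in range(generations):
        # Keep the better half as parents, refill with mutated children.
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:population_size // 2]
        children = []
        while len(parents) + len(children) < population_size:
            parent = random.choice(parents)
            child = min(1.0, max(0.0, parent + random.gauss(0, mutation_scale)))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)
```

For instance, `fitness` could run a short batch of Pac-Man episodes with the candidate exploration rate and return the mean score; the abstract further suggests that re-invoking the genetic algorithm can itself be gated by a measure of the agent's current performance, which the sketch above does not attempt to model.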