Evolving robust strategies for an abstract real-time strategy game
David Keaveney, C. O'Riordan
2009 IEEE Symposium on Computational Intelligence and Games, 2009-09-07
DOI: 10.1109/CIG.2009.5286453
Citations: 18
Abstract
This paper presents an analysis of evolved strategies for an abstract real-time strategy (RTS) game. The abstract RTS game used is a turn-based strategy game with properties such as parallel turns and imperfect spatial information. The automated player that learns strategies uses a progressive refinement planning technique to plan its next immediate turn during the game. We describe two types of spatial tactical coordination which we posit are important in the game, and we define measures for both. A set of ten strategies evolved in a single environment is compared to a second set of ten strategies evolved across a set of environments. The robustness of all evolved strategies is assessed by playing them against each other in every environment. The levels of coordination present in both sets of strategies are also measured and compared. We aim to show that evolving across multiple spatial environments is necessary to evolve robustness into our strategies.
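The core experimental idea, comparing strategies evolved in a single environment against strategies evolved across several, can be illustrated with a toy evolutionary loop. This is only a minimal sketch under assumed definitions: the "environments" here are target vectors and a strategy's score is its negative squared distance to the target, none of which comes from the paper's actual RTS game. What it does show is the key design choice: fitness averaged over all environments selects for robust generalists, while fitness in one environment selects for specialists.

```python
import random

random.seed(0)

# Hypothetical stand-ins for the paper's spatial environments.
ENVIRONMENTS = [[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]]

def score(strategy, env):
    # Higher is better: negative squared distance to the environment's target.
    return -sum((s - e) ** 2 for s, e in zip(strategy, env))

def fitness(strategy, envs):
    # Averaging over all environments rewards robustness;
    # a one-element env list rewards specialisation instead.
    return sum(score(strategy, env) for env in envs) / len(envs)

def evolve(envs, pop_size=30, generations=200, sigma=0.05):
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, envs), reverse=True)
        survivors = pop[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        pop = survivors + [
            [min(1.0, max(0.0, g + random.gauss(0, sigma)))
             for g in random.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=lambda s: fitness(s, envs))

specialist = evolve(ENVIRONMENTS[:1])  # evolved in a single environment
generalist = evolve(ENVIRONMENTS)      # evolved across all environments

# Robustness comparison: average fitness over every environment.
print(fitness(specialist, ENVIRONMENTS), fitness(generalist, ENVIRONMENTS))
```

In this toy setting the specialist converges on its one training environment and pays for it elsewhere, while the generalist settles on a compromise that scores better on average across all environments, which mirrors the robustness claim the paper sets out to test with its evolved RTS strategies.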