{"title":"第一人称射击游戏的连续强化学习方法","authors":"T. Smith, Jonathan Miles","doi":"10.5176/2010-2283_1.1.02","DOIUrl":null,"url":null,"abstract":"Machine learning is now widely studied as the basis for artificial intelligence systems within computer games. Most existing work focuses on methods for learning static expert systems, typically emphasizing candidate selection. This paper extends this work by exploring the use of continuous and reinforcement learning techniques to develop fully-adaptive game AI for first-person shooter bots. We begin by outlining a framework for learning static control models for tanks within the game BZFlag, then extend that framework using continuous learning techniques that allow computer controlled tanks to adapt to the game style of other players, extending overall playability by thwarting attempts to infer the underlying AI. We further show how reinforcement learning can be used to create bots that learn how to play based solely through trial and error, providing game engineers with a practical means to produce large numbers of bots, each with individual intelligences and unique behaviours; all from a single initial AI model.","PeriodicalId":91079,"journal":{"name":"GSTF international journal on computing","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2014-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Continuous and Reinforcement Learning Methods for First-Person Shooter Games\",\"authors\":\"T. Smith, Jonathan Miles\",\"doi\":\"10.5176/2010-2283_1.1.02\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning is now widely studied as the basis for artificial intelligence systems within computer games. Most existing work focuses on methods for learning static expert systems, typically emphasizing candidate selection. This paper extends this work by exploring the use of continuous and reinforcement learning techniques to develop fully-adaptive game AI for first-person shooter bots. We begin by outlining a framework for learning static control models for tanks within the game BZFlag, then extend that framework using continuous learning techniques that allow computer controlled tanks to adapt to the game style of other players, extending overall playability by thwarting attempts to infer the underlying AI. 
We further show how reinforcement learning can be used to create bots that learn how to play based solely through trial and error, providing game engineers with a practical means to produce large numbers of bots, each with individual intelligences and unique behaviours; all from a single initial AI model.\",\"PeriodicalId\":91079,\"journal\":{\"name\":\"GSTF international journal on computing\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-08-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"GSTF international journal on computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5176/2010-2283_1.1.02\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"GSTF international journal on computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5176/2010-2283_1.1.02","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Continuous and Reinforcement Learning Methods for First-Person Shooter Games
Machine learning is now widely studied as the basis for artificial intelligence systems within computer games. Most existing work focuses on methods for learning static expert systems, typically emphasizing candidate selection. This paper extends that work by exploring the use of continuous and reinforcement learning techniques to develop fully adaptive game AI for first-person shooter bots. We begin by outlining a framework for learning static control models for tanks within the game BZFlag, then extend that framework using continuous learning techniques that allow computer-controlled tanks to adapt to the play styles of other players, extending overall playability by thwarting attempts to infer the underlying AI. We further show how reinforcement learning can be used to create bots that learn how to play solely through trial and error, providing game engineers with a practical means to produce large numbers of bots, each with individual intelligence and unique behaviours, all from a single initial AI model.
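To give a concrete sense of the trial-and-error learning the abstract describes, below is a minimal sketch of tabular Q-learning applied to a bot controller. The discretised state, the action set, the reward shaping, and all hyperparameters are illustrative assumptions; this is not the authors' actual BZFlag implementation, only a generic example of how a bot could improve its policy from experience alone.

```python
# Minimal tabular Q-learning sketch for a game bot (assumed state/action design).
import random
from collections import defaultdict

# Assumed discrete action set for an illustrative tank bot.
ACTIONS = ["forward", "turn_left", "turn_right", "fire"]

class QLearningBot:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # Q-values keyed by (state, action)
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount factor
        self.epsilon = epsilon       # exploration probability

    def choose_action(self, state):
        # Epsilon-greedy: explore occasionally, otherwise exploit current estimates.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update from an observed (s, a, r, s') transition.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

Because each bot accumulates its own Q-table from its own experience, running many such learners from the same initial model naturally yields a population of bots with differing behaviours, which is the practical benefit the abstract highlights.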