Using opponent models to train inexperienced synthetic agents in social environments
C. Kiourt and Dimitris Kalles
2016 IEEE Conference on Computational Intelligence and Games (CIG), September 2016, pp. 1-4. DOI: 10.1109/CIG.2016.7860409
This paper investigates the learning progress of inexperienced agents in competitive, game-playing social environments. We aim to determine the effect of a knowledgeable opponent on a novice learner. To that end, we used as opponents synthetic agents whose playing behaviors were developed under diverse reinforcement learning set-ups (varying the exploitation-vs-exploration trade-off, the learning backup, and the speed of learning), as well as a self-trained agent. The paper concludes by highlighting the effect of diverse knowledgeable synthetic agents on the learning trajectory of an inexperienced agent in competitive multiagent environments.
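The paper itself includes no code, but the three set-up dimensions the abstract names map naturally onto the standard knobs of tabular Q-learning. The sketch below is purely illustrative (all class and parameter names are assumptions, not taken from the paper): epsilon controls the exploitation-vs-exploration trade-off, alpha the speed of learning, and gamma how strongly future value is backed up into the current estimate.

```python
import random

class QLearner:
    """Minimal tabular Q-learning agent (illustrative sketch, not the
    paper's implementation). epsilon: exploration probability;
    alpha: learning rate; gamma: discount used in the value backup."""

    def __init__(self, actions, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.actions = actions
        self.epsilon = epsilon
        self.alpha = alpha
        self.gamma = gamma
        self.q = {}  # maps (state, action) -> estimated value

    def select_action(self, state):
        # Epsilon-greedy: explore with probability epsilon,
        # otherwise pick the action with the highest Q-value.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # One-step Q-learning backup toward reward + gamma * max Q(next).
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old
        )

# Toy single-state game: action 1 pays 1, action 0 pays 0.
random.seed(0)
agent = QLearner(actions=[0, 1], epsilon=0.2, alpha=0.5, gamma=0.0)
for _ in range(200):
    a = agent.select_action("s")
    agent.update("s", a, reward=float(a == 1), next_state="s")
# After training, the agent prefers the rewarding action.
```

Varying epsilon, alpha, and gamma across a population of such agents, and pitting a fresh learner against each, is one plausible way to reproduce the kind of opponent diversity the abstract describes.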