{"title":"改进的LinUCT及其对增量随机特征树的评价","authors":"Yusaku Mandai, Tomoyuki Kaneko","doi":"10.1109/CIG.2016.7860440","DOIUrl":null,"url":null,"abstract":"UCT is a standard method of Monte Carlo tree search (MCTS) algorithms, which have been applied to various domains and have achieved remarkable success. This study proposes a family of Leaf-LinUCT, which are improved LinUCT algorithms incorporating LinUCB into MCTS. LinUCB outperforms UCB1 in contextual multi-armed bandit problems, owing to a kind of online learning with ridge regression. However, due to the minimax structure of game trees, ridge regression in LinUCB does not always work well in the context of tree search. In this paper, we remedy the problem and extend our previous work on LinUCT in two ways: (1) by restricting teacher data for regression to the frontier nodes in a current search tree, and (2) by adjusting the feature vector of each internal node to the weighted mean of the feature vector of the descendant nodes. We also present a new synthetic model, incremental-random-feature tree, by extending the standard incremental random tree model. In our model, each node has a feature vector that represents the characteristics of the corresponding position. The elements of a feature vector in a node are randomly changed from those in its parent node by each move, as the heuristic score of a node is randomly changed by each move in the standard incremental random tree model. The experimental results show that our Leaf-LinUCT outperformed UCT and existing LinUCT algorithms, in the incremental-random-feature treeand a synthetic game studied in [1].","PeriodicalId":6594,"journal":{"name":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","volume":"507 1","pages":"1-8"},"PeriodicalIF":0.0000,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improved LinUCT and its evaluation on incremental random-feature tree\",\"authors\":\"Yusaku Mandai, Tomoyuki Kaneko\",\"doi\":\"10.1109/CIG.2016.7860440\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"UCT is a standard method of Monte Carlo tree search (MCTS) algorithms, which have been applied to various domains and have achieved remarkable success. This study proposes a family of Leaf-LinUCT, which are improved LinUCT algorithms incorporating LinUCB into MCTS. LinUCB outperforms UCB1 in contextual multi-armed bandit problems, owing to a kind of online learning with ridge regression. However, due to the minimax structure of game trees, ridge regression in LinUCB does not always work well in the context of tree search. In this paper, we remedy the problem and extend our previous work on LinUCT in two ways: (1) by restricting teacher data for regression to the frontier nodes in a current search tree, and (2) by adjusting the feature vector of each internal node to the weighted mean of the feature vector of the descendant nodes. We also present a new synthetic model, incremental-random-feature tree, by extending the standard incremental random tree model. In our model, each node has a feature vector that represents the characteristics of the corresponding position. The elements of a feature vector in a node are randomly changed from those in its parent node by each move, as the heuristic score of a node is randomly changed by each move in the standard incremental random tree model. 
The experimental results show that our Leaf-LinUCT outperformed UCT and existing LinUCT algorithms, in the incremental-random-feature treeand a synthetic game studied in [1].\",\"PeriodicalId\":6594,\"journal\":{\"name\":\"2016 IEEE Conference on Computational Intelligence and Games (CIG)\",\"volume\":\"507 1\",\"pages\":\"1-8\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE Conference on Computational Intelligence and Games (CIG)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CIG.2016.7860440\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE Conference on Computational Intelligence and Games (CIG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIG.2016.7860440","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Improved LinUCT and its evaluation on incremental random-feature tree
UCT is a standard Monte Carlo tree search (MCTS) algorithm; MCTS has been applied to various domains and has achieved remarkable success. This study proposes Leaf-LinUCT, a family of improved LinUCT algorithms that incorporate LinUCB into MCTS. LinUCB outperforms UCB1 in contextual multi-armed bandit problems, owing to its online learning with ridge regression. However, due to the minimax structure of game trees, the ridge regression in LinUCB does not always work well in the context of tree search. In this paper, we remedy this problem and extend our previous work on LinUCT in two ways: (1) by restricting the teacher data for the regression to the frontier nodes of the current search tree, and (2) by setting the feature vector of each internal node to the weighted mean of the feature vectors of its descendant nodes. We also present a new synthetic model, the incremental-random-feature tree, which extends the standard incremental random tree model. In our model, each node has a feature vector that represents the characteristics of the corresponding position. The elements of a node's feature vector are randomly perturbed from those of its parent node with each move, just as a node's heuristic score is randomly perturbed with each move in the standard incremental random tree model. The experimental results show that our Leaf-LinUCT outperformed UCT and existing LinUCT algorithms on the incremental-random-feature tree and on a synthetic game studied in [1].
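
To make the two ideas summarized above more concrete, the following is a minimal Python sketch, not the authors' implementation: it generates an incremental-random-feature tree (each child's feature vector is a random perturbation of its parent's) and scores frontier nodes with a LinUCB-style ridge-regression estimate plus an exploration bonus. All names (Node, expand, linucb_score), the Gaussian noise model, and the regression constants are illustrative assumptions; the abstract's weighted mean over descendants is shown here as a plain mean because the weights are not specified.

import numpy as np

rng = np.random.default_rng(0)

# --- Incremental-random-feature tree (sketch of the synthetic model) ---
# Each child's feature vector is a noisy copy of its parent's, analogous to
# how heuristic scores drift in the standard incremental random tree model.
class Node:
    def __init__(self, feature, parent=None):
        self.feature = feature          # feature vector of the position
        self.parent = parent
        self.children = []

def expand(node, branching=2, noise=0.1):
    """Generate children whose features are random perturbations of the parent's."""
    for _ in range(branching):
        child_feature = node.feature + rng.normal(0.0, noise, size=node.feature.shape)
        node.children.append(Node(child_feature, parent=node))
    return node.children

# --- LinUCB-style score over frontier nodes (the Leaf-LinUCT idea, roughly) ---
# Ridge regression theta = (X^T X + lam*I)^{-1} X^T y is fitted only on the
# frontier (leaf) feature vectors X and their observed playout rewards y.
def linucb_score(x, X, y, lam=1.0, alpha=1.0):
    """Predicted value of feature x plus a confidence-ellipsoid exploration bonus."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    theta = np.linalg.solve(A, X.T @ y)
    bonus = alpha * np.sqrt(x @ np.linalg.solve(A, x))
    return x @ theta + bonus

# Tiny usage example: a root, its frontier children, and placeholder rewards.
root = Node(rng.normal(size=4))
leaves = expand(root, branching=3)
X = np.stack([leaf.feature for leaf in leaves])
y = rng.random(3)                        # placeholder playout rewards
root.feature = X.mean(axis=0)            # internal-node feature as mean of descendants
scores = [linucb_score(leaf.feature, X, y) for leaf in leaves]
best = leaves[int(np.argmax(scores))]

Restricting the regression to frontier nodes matters because, under the minimax structure of a game tree, rewards observed at internal nodes mix positions chosen by both players, so fitting on them can bias the learned weights; the frontier-only fit keeps the teacher data closer to the positions actually being evaluated.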