Title: Comparison of Loss Functions for Training of Deep Neural Networks in Shogi
Authors: Hanhua Zhu, Tomoyuki Kaneko
Venue: 2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI)
DOI: 10.1109/TAAI.2018.00014 (https://doi.org/10.1109/TAAI.2018.00014)
Publication date: 2018-11-01
Citations: 4
Abstract
Evaluation functions are crucial for building strong computer players in two-player games such as chess, Go, and shogi. Although a linear combination of a large number of features has been a popular representation of an evaluation function in shogi, deep neural networks (DNNs) have recently come to be considered more promising, owing to the success of AlphaZero in multiple domains: chess, Go, and shogi. This paper shows that three loss functions, namely the loss in comparison training, temporal difference (TD) errors, and the cross-entropy loss in win prediction, are effective for training evaluation functions represented by deep neural networks in shogi. In AlphaZero, the main loss function for training DNNs consists only of win prediction, though it is augmented with move prediction for regularization. On the other hand, in traditional shogi programs, various losses, including the loss in comparison training, TD errors, and the cross-entropy loss in win prediction, have contributed to yielding accurate evaluation functions that are linear combinations of a large number of features. It is therefore promising to combine these loss functions and apply them to the training of modern DNNs. Our experiments show that training with combinations of loss functions improved the accuracy of evaluation functions represented by DNNs. The performance of the trained evaluation functions is tested through top-1 accuracy, 1-1 accuracy, and self-play.
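The combination of losses described in the abstract can be sketched as follows. This is a minimal illustration in plain Python, not the paper's implementation: the hinge-style comparison loss, the undiscounted squared TD error, the sigmoid win-probability mapping, and the weights `w_ce`, `w_td`, `w_cmp` are all assumptions made here for concreteness; the actual formulations and weighting are defined in the paper itself.

```python
import math

def sigmoid(x):
    """Map a scalar evaluation to a win probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def cross_entropy_win(v, outcome):
    """Cross-entropy between the predicted win probability sigmoid(v)
    and the game outcome (1 = win, 0 = loss)."""
    p = sigmoid(v)
    eps = 1e-12  # numerical guard against log(0)
    return -(outcome * math.log(p + eps) + (1 - outcome) * math.log(1 - p + eps))

def td_error(v_t, v_next):
    """Squared temporal-difference error between evaluations of
    successive positions (undiscounted, for simplicity)."""
    return (v_next - v_t) ** 2

def comparison_loss(v_played, v_others, margin=1.0):
    """Hinge-style comparison-training loss (one possible form): the
    expert's move should be evaluated above every alternative by a margin."""
    return sum(max(0.0, margin - (v_played - v)) for v in v_others)

def combined_loss(v_t, v_next, outcome, v_played, v_others,
                  w_ce=1.0, w_td=1.0, w_cmp=1.0):
    """Weighted sum of the three losses; the weights are hypothetical."""
    return (w_ce * cross_entropy_win(v_t, outcome)
            + w_td * td_error(v_t, v_next)
            + w_cmp * comparison_loss(v_played, v_others))
```

For example, with `v_t = 0.0` (an even position), a won game, no evaluation change to the next position, and the expert's move already separated from the alternatives by the margin, only the cross-entropy term is nonzero and the combined loss is `-log(0.5) = log 2`.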