Multi Q-Table Q-Learning
Nitchakun Kantasewi, S. Marukatat, S. Thainimit, Okumura Manabu
2019 10th International Conference of Information and Communication Technology for Embedded Systems (IC-ICTES), 2019-03-16. DOI: 10.1109/ICTEMSYS.2019.8695963
Citations: 6
Abstract
Q-learning is a popular reinforcement learning technique for solving the shortest path (STP) problem. In a maze with multiple sub-tasks, such as collecting treasures and avoiding traps, Q-learning has been observed to converge to the optimal path; however, the average sum of rewards collected along that path is only moderate. This paper proposes Multi-Q-Table Q-learning to address this problem of a low average sum of rewards. The proposed method constructs a new Q-table whenever a sub-goal is reached. This modification lets the agent learn that a sub-reward has already been collected and can be obtained only once. Our experimental results show that the modified algorithm finds an optimal solution that collects all treasures (positive rewards), avoids pits, and reaches the goal along the shortest path. For a small maze, however, the proposed algorithm takes more time to reach the optimal solution than conventional Q-learning.
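To make the core idea concrete, here is a minimal Python sketch of the multi-Q-table scheme on a toy gridworld: the agent keeps one Q-table per set of already-collected treasures and switches to (lazily creating) a new table whenever a sub-goal is reached. The maze layout, reward values (treasure, pit, goal, step cost), and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical 4x4 maze: 'S' start, 'G' goal, 'T' treasure, 'P' pit, '.' free.
MAZE = ["S..T",
        "..P.",
        "T...",
        "..G."]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ROWS, COLS = len(MAZE), len(MAZE[0])
TREASURES = {(r, c) for r in range(ROWS) for c in range(COLS) if MAZE[r][c] == "T"}
START = next((r, c) for r in range(ROWS) for c in range(COLS) if MAZE[r][c] == "S")

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1  # assumed hyperparameters

# One Q-table per set of collected treasures, created lazily the first time
# that sub-goal configuration is reached ("a new Q-table whenever a
# sub-goal is reached").
q_tables = defaultdict(lambda: defaultdict(lambda: [0.0] * len(ACTIONS)))

def step(pos, a, collected):
    """One environment transition; returns (next_pos, reward, collected, done)."""
    r, c = pos[0] + ACTIONS[a][0], pos[1] + ACTIONS[a][1]
    if not (0 <= r < ROWS and 0 <= c < COLS):
        return pos, -1.0, collected, False                 # bumped a wall
    cell = MAZE[r][c]
    if cell == "P":
        return (r, c), -10.0, collected, True              # pit ends the episode
    if cell == "G":
        return (r, c), 10.0, collected, True               # goal ends the episode
    if (r, c) in TREASURES and (r, c) not in collected:
        return (r, c), 5.0, collected | {(r, c)}, False    # treasure, collected once
    return (r, c), -0.1, collected, False                  # small step cost

for episode in range(5000):
    pos, collected, done = START, frozenset(), False
    while not done:
        q = q_tables[collected]                            # table for current sub-goal set
        a = (random.randrange(len(ACTIONS)) if random.random() < EPS
             else max(range(len(ACTIONS)), key=lambda i: q[pos][i]))
        nxt, reward, n_collected, done = step(pos, a, collected)
        # Standard Q-learning update; bootstrap from the Q-table that is
        # active after the transition, so collecting a treasure hands control
        # to the next table.
        target = reward if done else reward + GAMMA * max(q_tables[n_collected][nxt])
        q[pos][a] += ALPHA * (target - q[pos][a])
        pos, collected = nxt, n_collected
```

Indexing the tables by the set of collected treasures is one way to realize the paper's per-sub-goal tables: the agent can no longer confuse "treasure still available" with "treasure already taken," which is exactly the ambiguity that depresses the average reward sum under a single shared Q-table.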