Swarm Reinforcement Learning Method Based on Hierarchical Q-Learning

Authors: Y. Kuroe, Kenya Takeuchi, Y. Maeda
Venue: 2021 IEEE Symposium Series on Computational Intelligence (SSCI)
Publication date: 2021-12-05
DOI: 10.1109/SSCI50451.2021.9659877
Citations: 0
Abstract
In recent decades, reinforcement learning has attracted a great deal of attention and has been studied extensively. However, it is essentially a trial-and-error scheme, and acquiring optimal strategies takes considerable computational time; moreover, optimal strategies may not be obtained at all for large, complicated problems with many states. To resolve these problems, we previously proposed the swarm reinforcement learning method, inspired by multi-point search optimization methods. Swarm reinforcement learning has been studied extensively and its effectiveness has been confirmed on several problems, especially Markov decision processes in which the agents can fully observe the state of the environment. In many real-world problems, however, the agents cannot fully observe the environment; such problems are usually partially observable Markov decision processes (POMDPs). The purpose of this paper is to develop a swarm reinforcement learning method that can deal with POMDPs. We propose a swarm reinforcement learning method based on HQ-learning, a hierarchical extension of Q-learning. Experiments show that the proposed method can handle POMDPs and achieves higher performance than the original HQ-learning.
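For context, the building block underlying both HQ-learning and its swarm extension is the standard tabular Q-learning update. The sketch below is illustrative only, not the paper's method: the state/action counts, learning rate, and reward values are assumed for the example.

```python
import numpy as np

# Illustrative tabular Q-learning update (the primitive that HQ-learning
# extends hierarchically). All sizes and hyperparameters are assumptions.
n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9  # learning rate and discount factor (assumed values)

def q_update(Q, s, a, r, s_next):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# One transition: from state 0, action 1, reward 1.0, landing in state 2.
Q = q_update(Q, s=0, a=1, r=1.0, s_next=2)
```

In a swarm variant, multiple agents would each maintain such a Q-table and periodically exchange information, analogous to multi-point search in optimization; in HQ-learning, a sequence of such sub-policies is chained to cope with partial observability.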