{"title":"基于连续状态空间强化学习的移动机器人行为获取","authors":"T. Arai, Y. Toda, N. Kubota","doi":"10.1109/ICMLC48188.2019.8949181","DOIUrl":null,"url":null,"abstract":"In the application of Reinforcement Learning to real tasks, the construction of state space is a significant problem. In order to use in the real-world environment, we need to deal with the problem of continuous information. Therefore, we proposed a method of the construction of state space using Growing Neural Gas. In our method, the agent constructs a state space model from its own experience autonomously. Furthermore, it can reconstruct the suitable state space model to adapt the complication of the environment. Through the experiments, we showed that Reinforcement Learning could be performed efficiently by successively updating the state space model according to the environment.","PeriodicalId":221349,"journal":{"name":"2019 International Conference on Machine Learning and Cybernetics (ICMLC)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Behavior Acquisition on a Mobile Robot Using Reinforcement Learning With Continuous State Space\",\"authors\":\"T. Arai, Y. Toda, N. Kubota\",\"doi\":\"10.1109/ICMLC48188.2019.8949181\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In the application of Reinforcement Learning to real tasks, the construction of state space is a significant problem. In order to use in the real-world environment, we need to deal with the problem of continuous information. Therefore, we proposed a method of the construction of state space using Growing Neural Gas. In our method, the agent constructs a state space model from its own experience autonomously. Furthermore, it can reconstruct the suitable state space model to adapt the complication of the environment. 
Through the experiments, we showed that Reinforcement Learning could be performed efficiently by successively updating the state space model according to the environment.\",\"PeriodicalId\":221349,\"journal\":{\"name\":\"2019 International Conference on Machine Learning and Cybernetics (ICMLC)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 International Conference on Machine Learning and Cybernetics (ICMLC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMLC48188.2019.8949181\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Machine Learning and Cybernetics (ICMLC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLC48188.2019.8949181","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Behavior Acquisition on a Mobile Robot Using Reinforcement Learning With Continuous State Space
In applying Reinforcement Learning to real-world tasks, the construction of the state space is a significant problem. To operate in a real-world environment, the agent must handle continuous sensory information. We therefore propose a method for constructing the state space using Growing Neural Gas. In our method, the agent autonomously builds a state space model from its own experience. Furthermore, it can reconstruct a suitable state space model to adapt to the complexity of the environment. Through experiments, we show that Reinforcement Learning can be performed efficiently by successively updating the state space model according to the environment.
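The abstract does not include code, but the core idea — quantizing a continuous observation space with Growing Neural Gas so that tabular RL can be applied — can be sketched with the standard GNG algorithm (Fritzke's formulation). The class name, hyperparameter values, and the `state_index` helper below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

class GrowingNeuralGas:
    """Minimal Growing Neural Gas sketch for discretizing a continuous
    2-D state space. Hyperparameters are illustrative defaults, not
    values from the paper. Isolated-node pruning is omitted for brevity."""

    def __init__(self, eps_b=0.2, eps_n=0.006, max_age=50,
                 grow_every=100, alpha=0.5, decay=0.995, seed=0):
        self.rng = np.random.default_rng(seed)
        self.nodes = [self.rng.random(2), self.rng.random(2)]  # start with 2 units
        self.error = [0.0, 0.0]
        self.edges = {}  # (i, j) with i < j -> age
        self.eps_b, self.eps_n = eps_b, eps_n
        self.max_age, self.grow_every = max_age, grow_every
        self.alpha, self.decay = alpha, decay
        self.n_seen = 0

    def _key(self, i, j):
        return (min(i, j), max(i, j))

    def _nearest_two(self, x):
        d = [float(np.sum((w - x) ** 2)) for w in self.nodes]
        order = np.argsort(d)
        return int(order[0]), int(order[1]), d

    def update(self, x):
        x = np.asarray(x, dtype=float)
        s1, s2, d = self._nearest_two(x)
        self.error[s1] += d[s1]
        # move the winner (and its topological neighbours) toward the input
        self.nodes[s1] += self.eps_b * (x - self.nodes[s1])
        for (i, j) in list(self.edges):
            if s1 in (i, j):
                n = j if i == s1 else i
                self.nodes[n] += self.eps_n * (x - self.nodes[n])
                self.edges[(i, j)] += 1          # age edges incident to s1
        self.edges[self._key(s1, s2)] = 0        # create/refresh winner edge
        self.edges = {e: a for e, a in self.edges.items() if a <= self.max_age}
        self.n_seen += 1
        if self.n_seen % self.grow_every == 0:
            self._insert_node()
        self.error = [e * self.decay for e in self.error]

    def _insert_node(self):
        # insert a new unit halfway between the highest-error node q
        # and its highest-error neighbour f
        q = int(np.argmax(self.error))
        nbrs = [j if i == q else i for (i, j) in self.edges if q in (i, j)]
        if not nbrs:
            return
        f = max(nbrs, key=lambda n: self.error[n])
        r = len(self.nodes)
        self.nodes.append(0.5 * (self.nodes[q] + self.nodes[f]))
        self.edges.pop(self._key(q, f), None)
        self.edges[self._key(q, r)] = 0
        self.edges[self._key(f, r)] = 0
        self.error[q] *= self.alpha
        self.error[f] *= self.alpha
        self.error.append(self.error[q])

    def state_index(self, x):
        """Map a continuous observation to a discrete state id (nearest
        GNG node), usable as a row index into a tabular Q-function."""
        return self._nearest_two(np.asarray(x, dtype=float))[0]

# Usage sketch: fit the network on observations drawn from two regions,
# then quantize a new observation for a (hypothetical) Q-table lookup.
gng = GrowingNeuralGas()
for _ in range(2000):
    c = gng.rng.choice([0.2, 0.8])
    gng.update(np.clip(gng.rng.normal([c, c], 0.05), 0.0, 1.0))
state = gng.state_index([0.8, 0.8])  # discrete state id for tabular RL
```

Periodic node insertion is what lets the state space model track the complexity of the environment: regions the robot visits often accumulate error and receive more units, giving a finer discretization exactly where experience concentrates.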