{"title":"基于深度强化学习的无地图导航环境探索","authors":"Nguyen Duc Toan, Kim Gon-Woo","doi":"10.23919/ICCAS52745.2021.9649893","DOIUrl":null,"url":null,"abstract":"In recent years, reinforcement learning has attracted researchers' attention with the AlphaGo event. Especially in autonomous mobile robots, the reinforcement learning approach can be applied to the mapless navigation problem. The Robot can complete the set tasks well and works well in different environments without maps and ready-made path plans. However, for reinforcement learning in general and mapless navigation based on reinforcement learning in particular, exploitation and exploration balance are issues that need to be carefully considered. Specifically, the fact that the agent (Robot) can discover and execute actions in a particular working environment plays a significant role in improving the performance of the reinforcement learning problem. By creating some noise during the convolutional neural network training, the above problem can be solved by some popular approaches today. With outstanding advantages compared to other approaches, the Boltzmann policy approach has been used in our problem. It helps the Robot explore more thoroughly in complex environments, and the policy is also more optimized.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Environment Exploration for Mapless Navigation based on Deep Reinforcement Learning\",\"authors\":\"Nguyen Duc Toan, Kim Gon-Woo\",\"doi\":\"10.23919/ICCAS52745.2021.9649893\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, reinforcement learning has attracted researchers' attention with the AlphaGo event. Especially in autonomous mobile robots, the reinforcement learning approach can be applied to the mapless navigation problem. The Robot can complete the set tasks well and works well in different environments without maps and ready-made path plans. However, for reinforcement learning in general and mapless navigation based on reinforcement learning in particular, exploitation and exploration balance are issues that need to be carefully considered. Specifically, the fact that the agent (Robot) can discover and execute actions in a particular working environment plays a significant role in improving the performance of the reinforcement learning problem. By creating some noise during the convolutional neural network training, the above problem can be solved by some popular approaches today. With outstanding advantages compared to other approaches, the Boltzmann policy approach has been used in our problem. 
It helps the Robot explore more thoroughly in complex environments, and the policy is also more optimized.\",\"PeriodicalId\":411064,\"journal\":{\"name\":\"2021 21st International Conference on Control, Automation and Systems (ICCAS)\",\"volume\":\"4 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 21st International Conference on Control, Automation and Systems (ICCAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.23919/ICCAS52745.2021.9649893\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ICCAS52745.2021.9649893","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
In recent years, reinforcement learning has attracted researchers' attention, particularly since the success of AlphaGo. In autonomous mobile robotics especially, reinforcement learning can be applied to the mapless navigation problem: the robot can complete assigned tasks and operate well in different environments without maps or ready-made path plans. However, for reinforcement learning in general, and for mapless navigation based on reinforcement learning in particular, the balance between exploitation and exploration must be considered carefully. Specifically, how well the agent (robot) can discover and execute actions in a particular working environment plays a significant role in the performance of the reinforcement learning problem. Many popular approaches today address this by injecting noise during training of the convolutional neural network. Owing to its advantages over these approaches, we adopt the Boltzmann policy in our problem. It helps the robot explore complex environments more thoroughly and yields a more optimized policy.
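For context, the Boltzmann policy mentioned above selects each action with probability proportional to exp(Q(s, a) / T), where the temperature T trades off exploration against exploitation. The following is a minimal, illustrative sketch of such softmax action selection; the function name, Q-values, and temperature values are hypothetical assumptions for illustration, not the authors' implementation.

# Minimal sketch of Boltzmann (softmax) exploration for a discrete-action agent.
# The Q-values and temperatures below are illustrative assumptions only.
import numpy as np

def boltzmann_action(q_values: np.ndarray, temperature: float) -> int:
    """Sample an action with probability proportional to exp(Q(s, a) / T)."""
    # Subtract the max Q-value for numerical stability before exponentiating.
    logits = (q_values - np.max(q_values)) / max(temperature, 1e-8)
    probs = np.exp(logits)
    probs /= probs.sum()
    return int(np.random.choice(len(q_values), p=probs))

# Hypothetical Q-values for three navigation actions.
q = np.array([0.1, 0.5, 0.2])
print(boltzmann_action(q, temperature=5.0))   # high T: near-uniform, broad exploration
print(boltzmann_action(q, temperature=0.05))  # low T: almost always the greedy action

At high temperature the action distribution is close to uniform, encouraging broad exploration; as the temperature is annealed toward zero it approaches the greedy policy, which is how this family of methods balances exploration and exploitation over the course of training.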