{"title":"End-to-End Autonomous Exploration for Mobile Robots in Unknown Environments through Deep Reinforcement Learning","authors":"Zhi Li, Jinghao Xin, Ning Li","doi":"10.1109/RCAR54675.2022.9872253","DOIUrl":null,"url":null,"abstract":"Autonomous exploration in unknown environments is a significant capability for mobile robots. In this paper, we present an end-to-end autonomous exploration model based on deep reinforcement learning (DRL), which takes the sensor data and a novel exploration map as inputs, and directly outputs the motion control commands of the robot. In contrast to the existing DRL-based exploration methods, the proposed model has no requirements to be combined with the traditional exploration or navigation algorithms, resulting in lower computational complexity. We directly transfer the DRL-based model trained in the training map to four test maps with different sizes and layouts, and the results show that the robot can rapidly adapt to unknown scenes. Besides, a comparison study with RRT-exploration algorithm indicates that the proposed model can reach a higher map exploration rate within less distance and time. Furthermore, we also conduct experiments on the real physical robot to demonstrate the transferability of learned policy from simulation to reality. 
A video of our experiments in the Gazebo simulator and real world can be found here1","PeriodicalId":304963,"journal":{"name":"2022 IEEE International Conference on Real-time Computing and Robotics (RCAR)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Real-time Computing and Robotics (RCAR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RCAR54675.2022.9872253","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Autonomous exploration in unknown environments is an essential capability for mobile robots. In this paper, we present an end-to-end autonomous exploration model based on deep reinforcement learning (DRL), which takes sensor data and a novel exploration map as inputs and directly outputs the robot's motion control commands. In contrast to existing DRL-based exploration methods, the proposed model does not need to be combined with traditional exploration or navigation algorithms, resulting in lower computational complexity. We directly transfer the model trained on the training map to four test maps with different sizes and layouts, and the results show that the robot rapidly adapts to unseen scenes. In addition, a comparison with the RRT-exploration algorithm indicates that the proposed model achieves a higher map exploration rate with less travel distance and time. Furthermore, we conduct experiments on a real physical robot to demonstrate that the learned policy transfers from simulation to reality. A video of our experiments in the Gazebo simulator and the real world can be found here¹.
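To illustrate what "end-to-end" means here, the sketch below shows the shape of such a policy: a single network mapping raw observations (a lidar scan plus an exploration-map patch) directly to velocity commands, with no intermediate frontier detection or path planner. The network architecture, input sizes, and velocity limits are illustrative assumptions, not the paper's actual design; a plain two-layer MLP with random weights stands in for the trained DRL policy.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    """Small random weights and zero biases for a fully connected layer."""
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

class ExplorationPolicy:
    """Toy end-to-end policy: (lidar scan, exploration-map patch) -> (v, w).

    Hypothetical stand-in for the paper's trained network: one ReLU hidden
    layer, tanh-bounded outputs rescaled to velocity limits.
    """

    def __init__(self, n_beams=360, map_cells=32 * 32, hidden=128):
        self.w1, self.b1 = init_layer(n_beams + map_cells, hidden)
        self.w2, self.b2 = init_layer(hidden, 2)

    def act(self, scan, map_patch):
        # Concatenate both observation streams into one input vector.
        x = np.concatenate([scan.ravel(), map_patch.ravel()])
        h = np.maximum(0.0, x @ self.w1 + self.b1)   # ReLU hidden layer
        v, w = np.tanh(h @ self.w2 + self.b2)        # both in (-1, 1)
        # Rescale: forward velocity in [0, 0.6] m/s, yaw rate in [-1, 1] rad/s.
        return 0.5 * (v + 1.0) * 0.6, w

policy = ExplorationPolicy()
scan = rng.uniform(0.1, 3.5, 360)                        # simulated lidar ranges (m)
map_patch = rng.integers(0, 2, (32, 32)).astype(float)   # 1 = cell already explored
v, w = policy.act(scan, map_patch)
```

In a deployed system the `(v, w)` pair would be published as the robot's motion command each control cycle; the point of the sketch is only that no separate exploration or navigation module sits between observation and command.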