X. Yan, Jie Huang, Keyan He, Huajie Hong, Dasheng Xu
Industrial Robot-The International Journal of Robotics Research and Application, 79(1), pp. 793-803. Published 2023-04-11. DOI: 10.1108/ir-12-2022-0299
Autonomous exploration through deep reinforcement learning
Purpose
Robots equipped with LiDAR sensors can continuously perform efficient actions for mapping tasks to gradually build maps. However, as the complexity and scale of the environment increase, the computational cost rises steeply. This study aims to propose a hybrid autonomous exploration method that makes full use of LiDAR data, shortens the computation time of the decision-making process and improves efficiency. Experiments demonstrate that the method is feasible.
Design/methodology/approach
This study improves the mapping update module and proposes a full-mapping approach that fully exploits the LiDAR data. Under the same hardware configuration, the scope of the mapping is expanded and the amount of information obtained is increased. In addition, a decision-making module based on reinforcement learning is proposed, which selects the optimal or near-optimal perceptual action via the learned policy. This module shortens the computation time of the decision-making process and improves decision-making efficiency.
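The paper's implementation details are not reproduced here. As a generic illustration of the idea of the decision-making module, the sketch below maps a LiDAR-derived state vector to scores over a small set of candidate perceptual actions and picks the best one with a single policy evaluation, rather than an expensive search. The state dimension, action count and linear policy are assumptions for illustration; in the paper the policy would be learned by reinforcement learning, whereas here the weights are random placeholders so the sketch runs end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 16      # hypothetical size of the LiDAR feature vector
NUM_ACTIONS = 8     # hypothetical number of discrete candidate actions

# Placeholder policy parameters; in the paper these would be learned via RL.
W = rng.normal(size=(NUM_ACTIONS, STATE_DIM))
b = np.zeros(NUM_ACTIONS)

def select_action(state: np.ndarray) -> int:
    """Greedy action selection with the (placeholder) learned policy:
    one matrix-vector product instead of a costly utility search."""
    scores = W @ state + b
    return int(np.argmax(scores))

state = rng.normal(size=STATE_DIM)   # stand-in for a processed LiDAR scan
action = select_action(state)
print(action)
```

The key point this sketch makes is architectural: once a policy is learned, choosing the next perceptual action costs a fixed, small number of network evaluations, independent of how many frontier candidates a classical utility computation would have to score.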
Findings
Results show that the hybrid autonomous exploration method, which combines a learning-based policy with the traditional frontier-based policy, offers good performance.
Originality/value
This study proposes a hybrid autonomous exploration method that combines a learning-based policy with the traditional frontier-based policy. Extensive experiments, including trials on real robots, are conducted to evaluate the performance of the approach and demonstrate that the method is feasible.
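For readers unfamiliar with the classical component of the hybrid, the sketch below shows a minimal frontier-detection pass on an occupancy grid. The cell-value convention (0 = free, 1 = occupied, -1 = unknown) and the 4-neighbour definition of a frontier are common textbook assumptions, not details taken from this paper.

```python
import numpy as np

def find_frontiers(grid: np.ndarray) -> list[tuple[int, int]]:
    """Return the free cells that border unknown space (frontier cells)."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:          # only free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

# Tiny example map: 0 = free, 1 = occupied, -1 = unknown.
grid = np.array([
    [0,  0, -1],
    [0,  1, -1],
    [0,  0,  0],
])
print(find_frontiers(grid))  # → [(0, 1), (2, 2)]
```

A frontier-based policy would score these cells (e.g. by expected information gain and travel cost) and drive toward the best one; the hybrid method described above replaces or shortcuts part of that scoring with a learned policy.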
About the journal:
Industrial Robot publishes peer reviewed research articles, technology reviews and specially commissioned case studies. Each issue includes high quality content covering all aspects of robotic technology, and reflecting the most interesting and strategically important research and development activities from around the world.
The journal’s policy of not publishing work that has only been tested in simulation means that only the very best and most practical research articles are included. This ensures that the material that is published has real relevance and value for commercial manufacturing and research organizations. Industrial Robot’s coverage includes, but is not restricted to:
Automatic assembly
Flexible manufacturing
Programming optimisation
Simulation and offline programming
Service robots
Autonomous robots
Swarm intelligence
Humanoid robots
Prosthetics and exoskeletons
Machine intelligence
Military robots
Underwater and aerial robots
Cooperative robots
Flexible grippers and tactile sensing
Robot vision
Teleoperation
Mobile robots
Search and rescue robots
Robot welding
Collision avoidance
Robotic machining
Surgical robots
Call for Papers 2020
AI for Autonomous Unmanned Systems
Agricultural Robot
Brain-Computer Interfaces for Human-Robot Interaction
Cooperative Robots
Robots for Environmental Monitoring
Rehabilitation Robots
Wearable Robotics/Exoskeletons