Robot path planning based on deep reinforcement learning

Yinxin Long, Huajin He
{"title":"Robot path planning based on deep reinforcement learning","authors":"Yinxin Long, Huajin He","doi":"10.1109/TOCS50858.2020.9339752","DOIUrl":null,"url":null,"abstract":"Q-learning algorithm based on Markov decision process as a reinforcement learning algorithm can achieve better path planning effect for mobile robot in continuous trial and error. However, Q-learning needs a huge Q-value table, which is easy to cause dimension disaster in decision-making, and it is difficult to get a good path in complex situations. By combining deep learning with reinforcement learning and using the perceptual advantages of deep learning to solve the decision-making problem of reinforcement learning, the deficiency of Q-learning algorithm can be improved. At the same time, the path planning of deep reinforcement learning is simulated by MATLAB, the simulation results show that the deep reinforcement learning can effectively realize the obstacle avoidance of the robot and plan a collision free optimal path for the robot from the starting point to the end point.","PeriodicalId":373862,"journal":{"name":"2020 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Conference on Telecommunications, Optics and Computer Science (TOCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TOCS50858.2020.9339752","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

The Q-learning algorithm, a reinforcement learning method based on the Markov decision process, can achieve good path-planning performance for a mobile robot through continuous trial and error. However, Q-learning requires a large Q-value table, which easily leads to the curse of dimensionality in decision-making and makes it difficult to find a good path in complex environments. By combining deep learning with reinforcement learning, and using the perceptual strengths of deep learning to support the decision-making of reinforcement learning, this deficiency of the Q-learning algorithm can be remedied. The deep reinforcement learning path planner is simulated in MATLAB; the results show that deep reinforcement learning effectively achieves obstacle avoidance and plans a collision-free optimal path for the robot from the start point to the goal.
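As an illustration of the tabular approach the paper improves on, below is a minimal Q-learning sketch for grid-world path planning. This is not the authors' code: the grid layout, reward values, and hyperparameters are illustrative assumptions. Note that the Q-table holds one row per grid cell, and this is exactly the storage that explodes (the "dimension disaster" the abstract mentions) as the state space grows.

```python
import numpy as np

# Illustrative 5x5 grid world (not from the paper): 0 = free cell, 1 = obstacle.
GRID = np.array([
    [0, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0],
])
START, GOAL = (0, 0), (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Apply an action; penalize collisions, reward reaching the goal."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < 5 and 0 <= c < 5) or GRID[r, c] == 1:
        return state, -10.0, False   # blocked: stay put, collision penalty
    if (r, c) == GOAL:
        return (r, c), 100.0, True   # reached the goal
    return (r, c), -1.0, False       # step cost encourages short paths

# The Q-table: one row per grid cell, one column per action.
# This table is what grows intractably with the size of the state space.
Q = np.zeros((5 * 5, len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # assumed hyperparameters

for episode in range(2000):
    state, done = START, False
    while not done:
        s = state[0] * 5 + state[1]  # flatten (row, col) to a table index
        a = np.random.randint(4) if np.random.rand() < epsilon else int(Q[s].argmax())
        nxt, reward, done = step(state, ACTIONS[a])
        s2 = nxt[0] * 5 + nxt[1]
        # Standard Q-learning update: Bellman backup toward the greedy target.
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() * (not done) - Q[s, a])
        state = nxt

# Read off the greedy path from the learned table.
state, path = START, [START]
while state != GOAL and len(path) < 50:
    state, _, _ = step(state, ACTIONS[int(Q[state[0] * 5 + state[1]].argmax())])
    path.append(state)
print(path)
```

In the deep variant the paper advocates, the table lookup `Q[s]` would be replaced by a forward pass through a neural network that maps the state to action values, trained against the same Bellman target; the network generalizes across states instead of storing an entry for each one.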